Introduction If you’re reading this, chances are you’re a computational linguist and chances are you have not had a lot of contact with computer vision. You might even think to yourself “Well yeah, why would I? It has nothing to do with language, does it?” But what if a language model could also rely on […]
Macro F1 and macro F1 Earlier this year we got slightly puzzled about how to best calculate the “macro F1” score to measure the performance of a classifier. To provide a bit of background, the macro F1 metric is frequently used when classes are considered equally important despite their relative frequency. For instance, consider the […]
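The teaser above mentions the common reading of macro F1: compute F1 per class and average, so rare classes weigh as much as frequent ones. A minimal sketch of that per-class-then-average computation (function names are my own, not from the post):

```python
def macro_f1(y_true, y_pred):
    """Average of per-class F1 scores (the 'average of F1s' variant)."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)
```

Note that another variant computes F1 from the *averaged* precision and recall instead, which generally gives a different number; which variant a paper means is exactly the kind of ambiguity the post discusses.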
What does adversarial mean in NLP? In the past two years, machine learning, particularly neural computer vision and NLP, has seen a tremendous rise in the popularity of all things adversarial. In this blog post I will give an overview of the two most popular training methods that are commonly referred to as adversarial: Injecting adversarial […]
In this blog post we want to learn how to do dimensionality reduction for datasets. This can be used to visualise word embeddings or other data with more than 2 or 3 dimensions.
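One standard way to project high-dimensional data (such as word embeddings) down to 2 dimensions for plotting is PCA; a minimal NumPy sketch, assuming the embeddings are rows of a matrix (the function name is my own, not from the post):

```python
import numpy as np

def pca_2d(X):
    """Project the rows of X onto the two leading principal components."""
    Xc = X - X.mean(axis=0)                        # center each feature
    # SVD of the centered matrix; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                           # shape (n_samples, 2)
```

The resulting two columns can be fed directly to a scatter plot; the post itself may of course cover other techniques (e.g. t-SNE) as well.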
In the summer term of 2018 the ICL Heidelberg offered an advanced course on Neural Networks for Natural Language Processing. During this course we presented and discussed two papers on neural language generation.