Ordinarily, we call the channels of communication and sensation modalities. We experience the world through multiple modalities, such as vision, hearing, or touch. A system that handles data spanning multiple modalities is characterized as a multimodal model. For example, the MSCOCO dataset contains not only images but also a language caption for each image. Utilizing […]
This blog post aims to give a broad overview of the development of Commonsense Reasoning (CR) in the field of Natural Language Processing (NLP) and its multimodal intersection with Computer Vision (CV). What makes CR so important in the age of Deep Learning, and how did the approaches to it and the datasets for it change over […]
Introduction If you’re reading this, chances are you’re a computational linguist and chances are you have not had a lot of contact with computer vision. You might even think to yourself “Well yeah, why would I? It has nothing to do with language, does it?” But what if a language model could also rely on […]
Macro F1 and macro F1 Earlier this year we got slightly puzzled about how best to calculate the “macro F1” score to measure the performance of a classifier. To provide a bit of background, the macro F1 metric is frequently used when all classes are considered equally important regardless of their relative frequency. For instance, consider the […]
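As a minimal sketch of why the calculation is ambiguous (the toy labels below are invented for illustration): “macro F1” can mean either the average of the per-class F1 scores, or the F1 of the macro-averaged precision and recall, and the two can disagree.

```python
# Two common readings of "macro F1", computed by hand on toy labels.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1]

def prf(cls):
    """Precision, recall, and F1 for a single class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == cls and t != cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

classes = sorted(set(y_true))

# Reading 1: average the per-class F1 scores.
macro_f1_avg = sum(prf(c)[2] for c in classes) / len(classes)

# Reading 2: F1 of the macro-averaged precision and recall.
macro_p = sum(prf(c)[0] for c in classes) / len(classes)
macro_r = sum(prf(c)[1] for c in classes) / len(classes)
macro_f1_pr = 2 * macro_p * macro_r / (macro_p + macro_r)

print(round(macro_f1_avg, 3), round(macro_f1_pr, 3))  # 0.667 vs 0.75
```

On this toy data both per-class F1 scores are 2/3, so reading 1 gives 0.667, while the macro-averaged precision and recall are both 0.75, so reading 2 gives 0.75.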
What does adversarial mean in NLP? In the past two years, machine learning, particularly neural computer vision and NLP, has seen a tremendous rise in popularity of all things adversarial. In this blog post I will give an overview of the two most popular training methods that are commonly referred to as adversarial: injecting adversarial […]
In this blog post we want to learn how to perform dimensionality reduction on datasets. This can be used to visualise word embeddings or other data with more than 2 or 3 dimensions.
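As a minimal sketch of the idea (the random 50-dimensional “embeddings” below are an invented stand-in for real word vectors): PCA via the SVD projects high-dimensional points down to 2 dimensions so they can be scatter-plotted.

```python
# Project 50-dimensional vectors down to 2D with PCA, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))      # 100 "embeddings", 50 dimensions each

X_centered = X - X.mean(axis=0)     # PCA requires mean-centred data
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_2d = X_centered @ Vt[:2].T        # project onto the top 2 principal components

print(X_2d.shape)  # (100, 2) -- ready for a 2D scatter plot
```

The rows of `Vt` are the principal directions ordered by explained variance, so taking the first two keeps as much of the data's spread as any 2D linear projection can.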
In the summer term of 2018, the ICL Heidelberg offered an advanced course on Neural Networks for Natural Language Processing. During this course we presented and discussed two papers on neural language generation.