NLU: A search for a generalizable model that consistently performs well across multiple tasks

Image from PapersWithCode.com

Below I attempt to paraphrase the following paper in plain English: Better Fine-Tuning by Reducing Representational Collapse by Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta from Facebook, submitted to arXiv on August 6th, 2020.

tl;dr: Today, NLU is about leveraging models pre-trained on large datasets, like RoBERTa…