This issue highlights the top ML Papers of the Week (Feb 27 - Mar 5).
1). Language Is Not All You Need - introduces Kosmos-1, a multimodal large language model; achieves strong performance on language understanding, OCR-free NLP, perception-language tasks, visual QA, and more. (paper)
2). Comparing Brain Activations and Language Models - finds that human brain activity is best explained by the activations of modern language models enhanced with long-range and hierarchical predictions. (paper)
3). EvoPrompting - combines evolutionary prompt engineering with soft prompt-tuning to find high-performing models; it leverages few-shot prompting, with an evolutionary search used to refine the in-context examples. (paper)
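To make the evolutionary-search idea concrete, here is a toy sketch of evolving the set of in-context examples used in a few-shot prompt. Everything below (the example pool, the fitness function, the mutation operator) is a hypothetical stand-in, not the paper's actual setup, which uses an LLM itself as the mutation/crossover operator:

```python
import random

random.seed(0)

# Hypothetical pool of candidate in-context examples and a toy fitness
# function (overlap with a "best" set the search does not know about).
EXAMPLE_POOL = [f"example_{i}" for i in range(10)]
TARGET = {"example_2", "example_5", "example_7"}

def fitness(prompt_examples):
    # Toy fitness: in practice this would be task accuracy of the
    # few-shot prompt built from these examples.
    return len(set(prompt_examples) & TARGET)

def mutate(prompt_examples):
    # Swap one in-context example for a random one from the pool.
    child = list(prompt_examples)
    child[random.randrange(len(child))] = random.choice(EXAMPLE_POOL)
    return child

def evolve(pop_size=8, prompt_len=3, generations=30):
    population = [random.sample(EXAMPLE_POOL, prompt_len)
                  for _ in range(pop_size)]
    history = []  # best fitness per generation
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        history.append(fitness(population[0]))
        # Elitism: keep the top half, refill with mutated parents.
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents))
                                for _ in parents]
    population.sort(key=fitness, reverse=True)
    return population[0], history

best, history = evolve()
```

Because the top half of the population survives unchanged each generation, the best fitness in `history` never decreases.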
4). Consistency Models - a new family of generative models that achieve high sample quality without adversarial training. (paper)
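A key trick in the consistency-models paper is parameterizing the model as f_theta(x, t) = c_skip(t) * x + c_out(t) * F_theta(x, t), so the boundary condition f_theta(x, eps) = x holds by construction. A minimal sketch, using a dummy stand-in for the trained network F_theta and the coefficient schedule as I recall it from the paper:

```python
import math

EPS = 0.002       # smallest noise level (paper default, to my recollection)
SIGMA_DATA = 0.5  # data std used in the coefficient schedule

def c_skip(t):
    return SIGMA_DATA ** 2 / ((t - EPS) ** 2 + SIGMA_DATA ** 2)

def c_out(t):
    return SIGMA_DATA * (t - EPS) / math.sqrt(SIGMA_DATA ** 2 + t ** 2)

def F_theta(x, t):
    # Placeholder "network": any function works here, because the
    # boundary condition is enforced by the coefficients, not by F_theta.
    return math.tanh(x) + 0.1 * t

def consistency_fn(x, t):
    return c_skip(t) * x + c_out(t) * F_theta(x, t)

# At t = EPS, c_skip = 1 and c_out = 0, so the model is the identity
# regardless of F_theta:
print(consistency_fn(1.7, EPS))  # prints 1.7
```

With a trained F_theta, sampling then takes a single step: draw noise at the largest time T and output `consistency_fn(x_T, T)`, with no adversarial training involved.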
5). D5 - a new task that automatically discovers corpus-level differences via language description in a goal-driven way; applications include discovering insights from commercial reviews and error patterns in NLP systems. (paper)
6). Reconstructing Images from Human Brain Activity with Diffusion Models - proposes an approach for high-resolution image reconstruction with latent diffusion models from human brain activity. (paper)
7). Grounded Decoding - a scalable approach to planning with LLMs in embodied settings through grounding functions; GD is found to be a general, flexible, and expressive approach to embodied tasks. (paper)
8). Voltron - a framework for language-driven representation learning from human videos and captions for robotics. (paper)
9). Dropout Reduces Underfitting - demonstrates that dropout can mitigate underfitting when used only at the start of training; it counteracts the stochasticity of SGD and limits the influence of individual batches early in training. (paper)
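The "early dropout" idea above can be sketched as a schedule that applies dropout only during the first iterations and then turns it off. The cutoff and dropout rate below are illustrative values, not the paper's:

```python
import random

random.seed(0)

DROP_P = 0.1    # illustrative dropout rate
CUTOFF = 1000   # iterations during which dropout stays active

def dropout(activations, p):
    # Standard inverted dropout: zero each unit with probability p,
    # rescale the survivors so the expected activation is unchanged.
    scale = 1.0 / (1.0 - p)
    return [0.0 if random.random() < p else a * scale
            for a in activations]

def forward(activations, iteration):
    if iteration < CUTOFF:       # early phase: dropout on
        return dropout(activations, DROP_P)
    return activations           # late phase: dropout off

acts = [0.5, -1.2, 0.3, 2.0]
early = forward(acts, iteration=10)    # may zero and rescale some units
late = forward(acts, iteration=5000)   # identical to the input
```

The paper also studies the mirror-image "late dropout" for overfitting; the schedule direction is simply reversed.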
10). LLM for Conversational Interactions with Mobile UIs - an approach that enables versatile conversational interactions with mobile UIs using a single LLM. (paper)
See you next week for another round of awesome ML papers!