1). Self-Adaptive LLMs - introduces Transformer^2, a novel self-adaptation framework that adapts LLMs to unseen tasks in real time by selectively adjusting singular components of their weight matrices; it's built around two key phases: 1) a dispatch system that analyzes and identifies the properties of the incoming task, and 2) a step that combines "expert" vectors (trained via reinforcement learning) to produce task-specific behavior; claims to be more efficient than LoRA, using fewer parameters, and to work across different LLM architectures. (paper | tweet)
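To make the singular-component idea concrete, here is a minimal numpy sketch of adapting a frozen weight matrix by rescaling its singular values with an expert vector; the function name, shapes, mixing weights, and the RL-trained vectors are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def svd_expert_adapt(W, z):
    """Rescale the singular values of a frozen weight matrix W by an
    expert vector z (one scalar per singular component):
    W' = U diag(s * z) V^T."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(s * z) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))                                 # frozen base weights
z1, z2 = (1.0 + 0.1 * rng.normal(size=32) for _ in range(2))  # hypothetical RL-trained experts
z = 0.7 * z1 + 0.3 * z2                                       # second phase: mix experts per task
print(svd_expert_adapt(W, z).shape)                           # (64, 32)
```

An expert here costs one scalar per singular value (32 in this sketch), versus a rank-r LoRA's r x (64 + 32) parameters, which is where the parameter-efficiency claim comes from.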
2). MiniMax-01 - introduces a new series of models that integrate Mixture-of-Experts; the main model has 32 experts and 456B total parameters, of which 45.9B are activated per token; claims to match the performance of state-of-the-art models like GPT-4o and Claude-3.5-Sonnet while offering a 20-32x longer context window, handling up to 4 million tokens; it integrates linear attention with optimized hardware utilization, which improves the efficiency and scalability of the LLM; there is also a vision model, MiniMax-VL-01, built through continued training on 512 billion vision-language tokens. (paper | tweet)
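As a rough illustration of why linear attention matters at 4M-token scale, here is a minimal numpy sketch of causal linear attention carried as a recurrent state; it shows the O(n) idea only, not the paper's optimized Lightning Attention kernel or MoE routing, and the feature map is an assumption.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Causal linear attention (minimal sketch): replace softmax(QK^T)V
    with a positive feature map phi so the running state S = sum phi(k_t) v_t^T
    is a fixed-size (d x d_v) matrix, independent of sequence length."""
    phi = lambda x: np.maximum(x, 0.0) + eps   # simple positive feature map
    Qf, Kf = phi(Q), phi(K)
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))              # running K^T V state
    z = np.zeros(d)                            # running normalizer
    out = np.zeros_like(V)
    for t in range(n):                         # recurrent form: O(d^2) memory
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = (Qf[t] @ S) / (Qf[t] @ z + eps)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(8, 16)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (8, 16)
```

The key design point: the fixed-size state S replaces the n x n attention matrix, so memory no longer grows with context length.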
3). VideoRAG - a framework that enhances RAG by leveraging video content as an external knowledge source; unlike existing RAG approaches that primarily focus on text or images, VideoRAG dynamically retrieves relevant videos based on queries and incorporates both their visual and textual elements into the generation process; the framework utilizes Large Video Language Models (LVLMs) to process video content directly, enabling more effective capture of temporal dynamics, spatial details, and multimodal cues that static modalities often fail to convey; for videos lacking textual descriptions, they propose using automatic speech recognition to generate transcripts, ensuring both visual and textual modalities can be leveraged. (paper | tweet)
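A minimal sketch of the retrieve-transcribe-generate flow described above; `embed_text`, `embed_video`, `asr_transcribe`, and `lvlm_generate` are hypothetical stand-ins for an encoder, a speech-recognition model, and an LVLM, not APIs from the paper.

```python
import numpy as np

def embed_text(text: str) -> np.ndarray:            # hypothetical text encoder
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=64)

def embed_video(video: dict) -> np.ndarray:         # hypothetical video encoder
    return embed_text(video["title"])

def asr_transcribe(video: dict) -> str:             # stand-in for speech recognition
    return f"(transcript of {video['title']})"

def lvlm_generate(query: str, videos: list) -> str: # stand-in for the LVLM
    ctx = "; ".join(v["transcript"] for v in videos)
    return f"Answer to '{query}' grounded in: {ctx}"

def video_rag(query: str, corpus: list, k: int = 2) -> str:
    q = embed_text(query)
    retrieved = sorted(corpus, key=lambda v: -float(q @ embed_video(v)))[:k]
    for v in retrieved:                              # fall back to ASR when no description exists
        v["transcript"] = v.get("description") or asr_transcribe(v)
    return lvlm_generate(query, retrieved)

corpus = [{"title": f"clip{i}"} for i in range(5)]
print(video_rag("how do I fold a paper crane?", corpus))
```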
4). Learning to Memorize at Test Time - introduces a neural long-term memory module that memorizes historical context and helps attention attend to the current context while utilizing information from the distant past; the neural memory module serves as a longer-term, more persistent memory than attention alone (which is comparatively short-term); Titans, the architecture built on this neural memory, shows strong results on language modeling, common-sense reasoning, genomics, and time-series tasks. (paper | tweet)
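The core mechanism can be caricatured with a linear associative memory updated by gradient descent as tokens stream in; in this numpy sketch, momentum plays the role of accumulated "surprise" and weight decay acts as forgetting, but the real module is an MLP and every hyperparameter here is illustrative.

```python
import numpy as np

class NeuralMemory:
    """Minimal sketch of memorizing at test time: a linear map M is
    updated online by gradient descent on ||M k - v||^2, with momentum
    (accumulated surprise) and decay (forgetting)."""
    def __init__(self, d, lr=0.1, momentum=0.9, decay=0.01):
        self.M = np.zeros((d, d))
        self.S = np.zeros((d, d))       # momentum buffer: past surprise
        self.lr, self.mu, self.decay = lr, momentum, decay

    def write(self, k, v):
        err = self.M @ k - v            # instantaneous surprise (prediction error)
        grad = np.outer(err, k)         # gradient of ||M k - v||^2 (up to a factor of 2)
        self.S = self.mu * self.S - self.lr * grad
        self.M = (1.0 - self.decay) * self.M + self.S

    def read(self, q):
        return self.M @ q

rng = np.random.default_rng(0)
mem = NeuralMemory(d=8)
k, v = rng.normal(size=8), rng.normal(size=8)
for _ in range(200):                    # stream the association at test time
    mem.write(k, v)
print(np.linalg.norm(mem.read(k) - v))  # recall error shrinks toward zero
```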
5). Foundations of LLMs - new survey on the foundations of LLMs covering areas such as pre-training, prompting, and alignment methods. (paper | tweet)
6). OmniThink - a new machine-writing framework that emulates the human-like process of iterative expansion and reflection; it's built to simulate the cognitive behavior of learners as they deepen their knowledge of a topic; compared to RAG and role-playing, OmniThink can expand knowledge boundaries through continuous reflection and exploration, which makes it well suited to use cases that require long-form generation. (paper | tweet)
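A toy sketch of the expand-and-reflect loop; `llm` and `web_search` are stubs, and the paper's actual retrieval, information organization, and prompting are considerably richer than this.

```python
def llm(prompt: str) -> str:                      # stand-in for an LLM call
    return f"[LLM output for: {prompt[:40]}...]"

def web_search(query: str) -> list[str]:          # stand-in for retrieval
    return [f"snippet about {query}"]

def omnithink_write(topic: str, rounds: int = 3) -> str:
    knowledge: list[str] = []                     # the growing pool of gathered information
    frontier = [topic]                            # questions still worth expanding
    for _ in range(rounds):
        # Expansion: pull in new information for each open question.
        for q in frontier:
            knowledge.extend(web_search(q))
        # Reflection: decide what is missing and what to ask next.
        frontier = [llm(f"Given notes {knowledge}, pose a deeper sub-question on {topic}")]
    outline = llm(f"Outline an article on {topic} from notes: {knowledge}")
    return llm(f"Write the article following outline: {outline}")

print(omnithink_write("solid-state batteries"))
```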
7). Enhancing RAG - systematically explores the factors and methods that improve RAG systems, such as retrieval strategies, query expansion, contrastive in-context learning, prompt design, and chunking. (paper | tweet)
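Two of the studied knobs are easy to show in miniature: overlapping chunking and query expansion. These helpers are illustrative sketches (the expansion call is a placeholder for an LLM), not the paper's exact settings.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so a fact
    straddling a boundary still appears intact in at least one chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def expand_query(query: str) -> str:
    """Augment the raw query with related terms to improve retrieval recall;
    the expansion here is a stand-in for an LLM-generated paraphrase."""
    expansion = f"(related terms for: {query})"
    return f"{query} {expansion}"

doc = "RAG systems retrieve passages and condition generation on them. " * 20
print(len(chunk_text(doc)), expand_query("best chunk size for RAG"))
```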
8). AutoCBT - proposes AutoCBT, a general multi-agent framework for Cognitive Behavioral Therapy that generates high-quality responses for single-turn psychological consultation scenarios; it uses a combination of dynamic routing, memory, and supervisory mechanisms to enhance the autonomous ability of each agent; experimental results show that AutoCBT provides higher-quality automated psychological counseling and improves dialogue quality compared to other purely prompt-based counseling frameworks. (paper | tweet)
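A toy sketch of the counselor/supervisor pattern: a draft reply is revised only while supervisor agents object, with feedback accumulated in memory. `llm` is a stub, and the stopping rule stands in for the paper's dynamic routing.

```python
def llm(prompt: str) -> str:                      # stand-in for an LLM call
    return f"[response to: {prompt[:50]}...]"

def autocbt_reply(user_msg: str, max_rounds: int = 2) -> str:
    draft = llm(f"As a CBT counselor, draft a reply to: {user_msg}")
    memory: list[str] = []                        # feedback accumulated across rounds
    for _ in range(max_rounds):
        # Supervisors check distinct qualities; routing only triggers a
        # revision when at least one supervisor objects.
        critiques = [
            llm(f"Check empathy of: {draft}"),
            llm(f"Check handling of cognitive distortions in: {draft}"),
        ]
        objections = [c for c in critiques if "ok" not in c.lower()]
        if not objections:
            break
        memory.extend(objections)
        draft = llm(f"Revise '{draft}' using feedback: {memory}")
    return draft

print(autocbt_reply("I always fail at everything I try."))
```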
9). Imagine while Reasoning in Space - introduces MVoT (Multimodal Visualization-of-Thought), a new reasoning framework that enables AI models to "think" in both text and images; MVoT extends traditional Chain-of-Thought prompting by letting models generate visual representations of their reasoning steps alongside text explanations; the framework is implemented on Chameleon-7B, a multimodal language model, and introduces a "token discrepancy loss" to improve the quality of generated visualizations; MVoT significantly outperforms traditional approaches, especially in complex scenarios, achieving over 90% accuracy on maze and printer-installation tasks. (paper | tweet)
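The token discrepancy loss can be sketched as weighting the embedding distance between each candidate visual token and the ground-truth token by the model's predicted probability, so visually similar near misses cost less; shapes and details here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def token_discrepancy_loss(probs, codebook, gt_ids):
    """Sketch of a token-discrepancy-style loss.
    probs:    (T, V) predicted distribution over V visual tokens
    codebook: (V, d) visual-token embeddings
    gt_ids:   (T,)   ground-truth token ids
    """
    # Distance from every codebook entry to each position's ground-truth embedding.
    dists = np.linalg.norm(codebook[None, :, :] - codebook[gt_ids][:, None, :], axis=-1)  # (T, V)
    return float((probs * dists).sum(axis=-1).mean())

rng = np.random.default_rng(0)
V, d, T = 16, 8, 4
codebook = rng.normal(size=(V, d))
logits = rng.normal(size=(T, V))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
gt = rng.integers(0, V, size=T)
print(token_discrepancy_loss(probs, codebook, gt))
```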
10). ChemAgent - presents a new framework designed to improve the performance of LLMs on chemical reasoning through a dynamic, self-updating library; the library is built by decomposing chemical tasks into sub-tasks and compiling them into a structured collection that can be referenced for future queries; given a new problem, the system retrieves and refines relevant information from the library to enable more effective task decomposition; the library is dynamically updated with new sub-tasks and solutions as they are encountered and validated; experiments on SciBench demonstrate that ChemAgent achieves performance gains of up to 46% (with GPT-4), significantly outperforming existing methods. (paper | tweet)
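A toy sketch of the self-updating library loop: decompose a problem into sub-tasks, retrieve similar validated solutions as hints, solve, validate, and write back. `difflib` string matching stands in for real retrieval, and the decompose/solve/validate steps are stubs.

```python
import difflib

class TaskLibrary:
    def __init__(self):
        self.entries: dict[str, str] = {}          # sub-task -> validated solution

    def retrieve(self, subtask: str, k: int = 2) -> list[str]:
        # Cheap string similarity stands in for embedding-based retrieval.
        keys = difflib.get_close_matches(subtask, list(self.entries), n=k, cutoff=0.0)
        return [f"{key}: {self.entries[key]}" for key in keys]

    def update(self, subtask: str, solution: str, valid: bool):
        if valid:                                   # only validated solutions are stored
            self.entries[subtask] = solution

def solve_with_library(problem: str, lib: TaskLibrary) -> list[str]:
    subtasks = [f"{problem} / step {i}" for i in range(1, 3)]  # stub decomposition
    answers = []
    for st in subtasks:
        hints = lib.retrieve(st)                    # refine with prior solutions
        ans = f"solution({st}, hints={len(hints)})" # stub solver
        lib.update(st, ans, valid=True)             # stub validation
        answers.append(ans)
    return answers

lib = TaskLibrary()
print(solve_with_library("molar mass of caffeine", lib))
print(solve_with_library("molar mass of glucose", lib))  # now reuses library hints
```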
Thanks, Elvis! Your newsletter is quite helpful to keep in touch with the latest in AI.
What follows in the next two comments is based on the ten papers Elvis chose for this week. I'm noting this just in case it isn't clear.
I'm also sharing links to the Emergent Mind summaries for five of this week's papers:
Transformer^2: Self-adaptive LLMs
https://www.emergentmind.com/research/18d6a640fc53e5b7258dda4a
MiniMax-01: Scaling Foundation Models with Lightning Attention
https://www.emergentmind.com/research/43e061ce569499d2685491f1
Titans: Learning to Memorize at Test Time
https://www.emergentmind.com/assistant/35317107d23ca7cbd602bc77
Enhancing Retrieval-Augmented Generation: A Study of Best Practices
https://www.emergentmind.com/research/4db5b7e30566f56c046175c6
ChemAgent: Self-updating Library in Large Language Models Improves Chemical Reasoning
https://www.emergentmind.com/research/83a083be8c5c1f43ce00234c
Full disclosure: Although I'm a huge fan of Emergent Mind, I have no financial, official, or legal interest in this CS research tool.