1). AlphaGeometry - an AI theorem prover that solves Olympiad geometry problems without human demonstrations; the system is trained on synthetic data comprising millions of theorems and proofs across different levels of complexity; this data is used to train a neural language model that solves Olympiad-level geometry problems and approaches the performance of an average International Mathematical Olympiad (IMO) gold medallist. (paper | tweet)
2). AlphaCodium - a code-oriented iterative flow that improves LLMs on code generation; it relies on two key steps: i) additional generated data (problem self-reflection and test reasoning) to aid the iterative process, and ii) enriching the public tests with additional AI-generated tests; on the CodeContests validation set, GPT-4 pass@5 accuracy increases from 19% with a single well-crafted prompt to 44% with the AlphaCodium flow; it even outperforms AlphaCode with a significantly smaller computation budget and 4 orders of magnitude fewer LLM calls. (paper | tweet)
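Below is a minimal sketch of an AlphaCodium-style generate-test-reflect loop. The `generate` callable, the toy in-process test runner, and the `input -> output` test format are illustrative stand-ins, not the paper's actual implementation.

```python
# Sketch of an AlphaCodium-style iterative flow (assumed structure, not the authors' code).
from typing import Callable, List, Tuple

Test = Tuple[str, str]  # (stdin, expected stdout) -- simplified test representation


def parse_tests(text: str) -> List[Test]:
    """Toy parser for AI-generated tests written as lines of 'input -> expected_output'."""
    return [tuple(part.strip() for part in line.split("->", 1))
            for line in text.splitlines() if "->" in line]


def run_tests(code: str, tests: List[Test]) -> List[Test]:
    """Run candidate code against tests and return the failing ones (toy runner)."""
    import contextlib, io, sys
    failed = []
    for stdin, expected in tests:
        buf = io.StringIO()
        try:
            sys.stdin = io.StringIO(stdin)
            with contextlib.redirect_stdout(buf):
                exec(code, {})          # untrusted code: sandbox this in practice
        except Exception:
            failed.append((stdin, expected))
            continue
        finally:
            sys.stdin = sys.__stdin__
        if buf.getvalue().strip() != expected.strip():
            failed.append((stdin, expected))
    return failed


def alphacodium_flow(problem: str, public_tests: List[Test],
                     generate: Callable[[str], str], max_iters: int = 5) -> str:
    # Step i): additional generated data -- problem self-reflection and test reasoning.
    reflection = generate(f"Reflect on this problem and reason about its tests:\n{problem}")
    # Step ii): enrich the public tests with additional AI-generated tests.
    ai_tests = parse_tests(generate(f"Propose extra 'input -> output' tests for:\n{problem}"))
    all_tests = public_tests + ai_tests

    code = generate(f"{problem}\n\nReflection:\n{reflection}\n\nWrite a solution.")
    for _ in range(max_iters):          # iterate: run tests, then ask the model to repair
        failures = run_tests(code, all_tests)
        if not failures:
            break
        code = generate(f"Fix this code so it passes these failing tests {failures}:\n{code}")
    return code
```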
3). RAG vs. Finetuning - a report discussing the tradeoffs between RAG and fine-tuning when using LLMs like Llama 2 and GPT-4; performs a detailed analysis and highlights insights from applying both pipelines to an agricultural dataset; observes that fine-tuning the model yields an accuracy increase of over 6 percentage points, and that this gain is cumulative with RAG, which adds a further 5 percentage points. (paper | tweet)
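A minimal sketch of the combined setting (a fine-tuned model answering over retrieved domain passages) is below; `retriever` and `finetuned_llm` are placeholder callables you would supply, not components from the paper.

```python
# Sketch of stacking RAG on top of a fine-tuned model (illustrative, not the paper's pipeline).
from typing import Callable, List


def answer_with_rag(question: str,
                    retriever: Callable[[str, int], List[str]],   # query, k -> passages
                    finetuned_llm: Callable[[str], str],          # prompt -> answer
                    k: int = 4) -> str:
    """Retrieve k domain passages and let the fine-tuned model answer over them."""
    context = "\n\n".join(retriever(question, k))
    prompt = (f"Use the context to answer the question.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    return finetuned_llm(prompt)   # fine-tuning and RAG gains stack, per the paper's findings
```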
4). Self-Rewarding Models - proposes a self-alignment method in which the model itself provides its rewards during training via LLM-as-a-Judge prompting; iterative DPO is then used for instruction-following training on preference pairs built from data generated in a self-instruction creation phase; with this approach, fine-tuning a Llama 2 70B model over three iterations yields a model that outperforms LLMs like Claude 2 and Gemini Pro on the AlpacaEval 2.0 leaderboard. (paper | tweet)
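One Self-Rewarding iteration can be sketched as below; `model_generate`, `judge_score`, and `dpo_update` are stand-ins for your own generation, LLM-as-a-Judge scoring, and DPO training code, and the prompt set is taken as given here rather than self-generated as in the paper.

```python
# Sketch of one Self-Rewarding iteration (assumed structure, not the authors' code).
from typing import Callable, List, Tuple


def self_rewarding_iteration(
    prompts: List[str],
    model_generate: Callable[[str, int], List[str]],   # prompt, n -> n candidate responses
    judge_score: Callable[[str, str], float],          # prompt, response -> judge score (e.g. 1-5)
    dpo_update: Callable[[List[Tuple[str, str, str]]], None],
    n_candidates: int = 4,
) -> None:
    preference_pairs = []
    for prompt in prompts:
        candidates = model_generate(prompt, n_candidates)
        # The same model acts as judge and scores its own candidate responses.
        ranked = sorted(candidates, key=lambda resp: judge_score(prompt, resp))
        chosen, rejected = ranked[-1], ranked[0]        # best vs. worst response
        if chosen != rejected:
            preference_pairs.append((prompt, chosen, rejected))
    dpo_update(preference_pairs)                        # one iteration of iterative DPO
```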
5). Tuning Language Models by Proxy - introduces proxy-tuning, a decoding-time algorithm that shifts the logits of a large target LLM by the difference between the logits of a small fine-tuned model and those of its untuned base counterpart; this lets a larger base model behave as if it had been fine-tuned; applied to Llama2-70B with 7B-sized proxies, proxy-tuning closes 88% of the gap between Llama2-70B and its tuned chat version. (paper | tweet)
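The core logit arithmetic is simple enough to show directly; the model names in the comments are illustrative, and random tensors stand in for real model outputs.

```python
# Sketch of proxy-tuning at decoding time:
#   final_logits = logits_large_base + (logits_small_tuned - logits_small_base)
import torch


def proxy_tuned_logits(logits_large_base: torch.Tensor,
                       logits_small_tuned: torch.Tensor,
                       logits_small_base: torch.Tensor) -> torch.Tensor:
    """Shift the large base model's next-token logits by the small models' tuning offset."""
    return logits_large_base + (logits_small_tuned - logits_small_base)


# Toy decoding step over a shared vocabulary (random logits stand in for model outputs).
vocab_size = 32000
large_base = torch.randn(vocab_size)    # e.g. Llama2-70B base
small_tuned = torch.randn(vocab_size)   # e.g. Llama2-7B chat (fine-tuned proxy)
small_base = torch.randn(vocab_size)    # e.g. Llama2-7B base
next_token = torch.argmax(proxy_tuned_logits(large_base, small_tuned, small_base))
```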
6). Reasoning with Reinforced Fine-Tuning - proposes ReFT, an approach to improve the generalizability of LLMs for reasoning; it first applies SFT as a warm-up and then applies online RL for further refinement, automatically sampling reasoning paths to learn from; unlike RLHF, it does not rely on a reward model learned from human-labeled data, since rewards are derived from the correctness of the final answer; ReFT demonstrates improved performance and generalization on math problem-solving. (paper | tweet)
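The RL stage can be sketched as below; `sample_cot`, `extract_answer`, and `rl_update` are placeholders for your generation, answer-parsing, and policy-optimization code (the paper uses a PPO-style update), so this is an assumed structure rather than the authors' implementation.

```python
# Sketch of the ReFT reinforcement stage, run after SFT warm-up (assumed structure).
from typing import Callable, List, Tuple


def reft_rl_stage(
    problems: List[Tuple[str, str]],                   # (question, gold_answer) pairs
    sample_cot: Callable[[str, int], List[str]],       # question, n -> n reasoning paths
    extract_answer: Callable[[str], str],              # reasoning path -> final answer
    rl_update: Callable[[List[Tuple[str, str, float]]], None],
    n_samples: int = 4,
) -> None:
    rollouts = []
    for question, gold in problems:
        for path in sample_cot(question, n_samples):   # automatically sampled reasoning paths
            # Reward is answer correctness -- no reward model learned from human labels.
            reward = 1.0 if extract_answer(path) == gold else 0.0
            rollouts.append((question, path, reward))
    rl_update(rollouts)                                # e.g. a PPO-style policy update
```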
7). Overview of LLMs for Evaluation - thoroughly surveys methodologies for using LLMs as evaluators and explores their strengths and limitations; provides a taxonomy of the different approaches, which involve either prompt engineering or calibrating open-source LLMs for evaluation. (paper | tweet)
8). Patchscopes - proposes a framework that leverages a model itself to explain its own internal representations; it decodes information from an LLM's hidden representations by "patching" them into a separate inference pass that encourages the extraction of that information; it can be used to answer questions about an LLM's computation and even to fix latent multi-hop reasoning errors. (paper | tweet)
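A minimal Patchscopes-style experiment with Hugging Face transformers might look like the sketch below; the model, layer indices, and inspection prompt are illustrative assumptions, and the paper's framework is more general.

```python
# Sketch of "patching" a hidden representation into a separate inference pass (illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                                    # any causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# 1) Source pass: cache the hidden state of the last token at a chosen layer.
source_prompt = "The capital of France is Paris"
src_layer = 6
with torch.no_grad():
    out = model(**tok(source_prompt, return_tensors="pt"), output_hidden_states=True)
patched_vector = out.hidden_states[src_layer][0, -1]   # shape: (hidden_dim,)

# 2) Target pass: an inspection prompt that encourages verbalizing the patched information.
target_prompt = "Syria: country, Einstein: person, x:"
target_ids = tok(target_prompt, return_tensors="pt")
patch_position = target_ids["input_ids"].shape[1] - 1  # overwrite the final token ("x:")
tgt_layer = 6


def patch_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    if hidden.shape[1] > patch_position:                # patch only on the prefill pass
        hidden[0, patch_position] = patched_vector      # overwrite the residual stream
    return output


handle = model.transformer.h[tgt_layer].register_forward_hook(patch_hook)  # GPT-2-specific path
with torch.no_grad():
    gen = model.generate(**target_ids, max_new_tokens=5, do_sample=False)
handle.remove()
print(tok.decode(gen[0][target_ids["input_ids"].shape[1]:]))  # decoded "explanation" of x
```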
9). The Unreasonable Effectiveness of Easy Training Data for Hard Tasks - finds that language models often generalize well from easy to hard data, i.e., easy-to-hard generalization; argues that training on easy data can be better than training on hard data, even when the emphasis is on improving performance on hard data, and suggests that the scalable oversight problem may be easier than previously thought. (paper | tweet)
10). MoE-Mamba - an approach to efficiently scale LLMs by combining state space models (SSMs) with a Mixture of Experts (MoE); the resulting model, MoE-Mamba, outperforms both Mamba and Transformer-MoE; it reaches the same performance as Mamba in 2.2x fewer training steps while preserving Mamba's inference-time gains over the Transformer. (paper | tweet)
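A minimal PyTorch sketch of the interleaving idea is below; the top-1 routing and the passed-in `mamba_layer` (e.g. the `Mamba` block from the `mamba_ssm` package, or any SSM/sequence-mixing module) are simplifying assumptions rather than the paper's exact configuration.

```python
# Sketch of a MoE-Mamba-style block: SSM sequence mixing interleaved with a sparse MoE FFN.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Top1MoE(nn.Module):
    """Switch-style MoE feed-forward layer: each token is routed to a single expert MLP."""

    def __init__(self, d_model: int, n_experts: int = 8, d_ff: int = 2048):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:           # x: (batch, seq, d_model)
        flat = x.reshape(-1, x.shape[-1])                          # route each token independently
        probs = F.softmax(self.router(flat), dim=-1)
        top_p, top_idx = probs.max(dim=-1)                         # top-1 expert per token
        out = torch.zeros_like(flat)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_p[mask, None] * expert(flat[mask])
        return out.reshape_as(x)


class MoEMambaBlock(nn.Module):
    """Alternates a sequence-mixing SSM (Mamba) layer with a sparse MoE feed-forward layer."""

    def __init__(self, mamba_layer: nn.Module, d_model: int, n_experts: int = 8):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.mamba = mamba_layer                                   # e.g. Mamba(d_model=d_model)
        self.moe = Top1MoE(d_model, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mamba(self.norm1(x))                          # SSM handles token mixing
        x = x + self.moe(self.norm2(x))                            # MoE handles channel mixing
        return x
```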