1). AlphaMissense - an AI model classifying missense variants to help pinpoint the cause of diseases; the model is used to develop a catalogue of genetic mutations; it can categorize 89% of all 71 million possible missense variants as either likely pathogenic or likely benign. (paper | tweet)
2). Chain-of-Verification reduces Hallucination in LLMs - develops a method to enable LLMs to "deliberate" on responses to correct mistakes; it includes the following steps: 1) draft an initial response, 2) plan verification questions to fact-check the draft, 3) answer the questions independently to avoid bias from other responses, and 4) generate a final verified response. (paper | tweet)
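A minimal sketch of the four-step CoVe loop, assuming a generic text-in/text-out `llm` callable; the interface and prompts are illustrative, not the paper's code:

```python
# Minimal sketch of the Chain-of-Verification (CoVe) loop. `llm` is a
# placeholder for any text-in/text-out model call (hypothetical interface).
from typing import Callable, List

def chain_of_verification(question: str, llm: Callable[[str], str]) -> str:
    # 1) Draft an initial response.
    draft = llm(f"Answer the question: {question}")

    # 2) Plan verification questions that fact-check the draft.
    plan = llm(
        "List short fact-checking questions (one per line) for this answer:\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    verification_questions: List[str] = [q for q in plan.splitlines() if q.strip()]

    # 3) Answer each verification question independently, without showing the
    #    draft, to avoid being biased by its possible hallucinations.
    verifications = [(q, llm(q)) for q in verification_questions]

    # 4) Generate a final, revised response conditioned on the checks.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in verifications)
    return llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Verification Q&A:\n{evidence}\n"
        "Write a final answer that corrects any inconsistencies."
    )
```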
3). Contrastive Decoding Improves Reasoning in Large Language Models - shows that contrastive decoding leads Llama-65B to outperform Llama 2 and other models on commonsense reasoning and math word reasoning benchmarks. (paper | tweet)
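For context, contrastive decoding scores candidate tokens by the gap between a strong "expert" model and a weaker "amateur" model. The sketch below follows the common formulation (a plausibility cutoff plus a log-probability difference); the paper's exact hyperparameters and model pairing may differ:

```python
# One decoding step of contrastive decoding: restrict to tokens the expert
# finds plausible, then pick the token where expert and amateur disagree most.
import numpy as np

def contrastive_decode_step(expert_logits: np.ndarray,
                            amateur_logits: np.ndarray,
                            alpha: float = 0.1) -> int:
    # Log-probabilities for the strong (expert) and weak (amateur) models.
    expert_logp = expert_logits - np.logaddexp.reduce(expert_logits)
    amateur_logp = amateur_logits - np.logaddexp.reduce(amateur_logits)

    # Plausibility constraint: drop tokens far below the expert's best token.
    cutoff = np.log(alpha) + expert_logp.max()
    candidates = expert_logp >= cutoff

    # Contrastive score: reward what the expert likes but the amateur does not.
    scores = np.where(candidates, expert_logp - amateur_logp, -np.inf)
    return int(scores.argmax())
```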
4). LongLoRA - an efficient fine-tuning approach to significantly extend the context windows of pre-trained LLMs; implements shift short attention, a sparse local attention scheme that approximates the standard self-attention pattern during training; it requires less GPU memory and training time than full fine-tuning while not compromising accuracy. (paper | tweet)
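A rough illustration of the grouping-and-shifting idea behind shift short attention, written as a standalone sketch; the shapes and group handling here are assumptions, not the reference implementation:

```python
# Tokens are split into local groups, and half of the heads see a sequence
# shifted by half a group so information can flow across group boundaries.
import numpy as np

def group_tokens_for_s2_attn(x: np.ndarray, group_size: int) -> np.ndarray:
    """x has shape (seq_len, num_heads, head_dim); seq_len % group_size == 0."""
    seq_len, num_heads, head_dim = x.shape
    shifted = x.copy()

    # Shift the second half of the heads by half a group along the sequence.
    shifted[:, num_heads // 2:] = np.roll(x[:, num_heads // 2:],
                                          shift=-group_size // 2, axis=0)

    # Split into groups; standard self-attention is then applied within each
    # group, which is what keeps memory and time costs low during fine-tuning.
    return shifted.reshape(seq_len // group_size, group_size, num_heads, head_dim)
```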
5). LLMs for Generating Structured Data - studies the use of LLMs for generating complex structured data; proposes a structure-aware fine-tuning method, applied to Llama-7B, which significantly outperforms other models like GPT-3.5/4 and Vicuna-13B. (paper | tweet)
6). LMSYS-Chat-1M - a large-scale dataset containing 1 million real-world conversations with 25 state-of-the-art LLMs; it was collected from 210K unique IP addresses on the Vicuna demo and Chatbot Arena websites. (paper | tweet)
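If you want to poke at the data, something like the following should work with the Hugging Face `datasets` library; the dataset ID and field names are assumptions based on the public (gated) release, so access has to be requested first:

```python
# Load and inspect LMSYS-Chat-1M (dataset ID and fields are assumptions).
from datasets import load_dataset

ds = load_dataset("lmsys/lmsys-chat-1m", split="train")  # ~1M conversations

example = ds[0]
print(example["model"])                      # which LLM produced the replies
for turn in example["conversation"]:         # OpenAI-style role/content turns
    print(turn["role"], ":", turn["content"][:80])
```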
7). Language Modeling is Compression - evaluates the compression capabilities of LLMs; it investigates how and why compression and prediction are equivalent; shows that LLMs are powerful general-purpose compressors due to their in-context learning abilities; finds that Chinchilla 70B compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size, beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively. (paper | tweet)
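The prediction-compression link in miniature: an arithmetic coder driven by a model's next-symbol probabilities spends about -log2(p) bits per symbol, so the compressed size is essentially the model's log-loss. A toy calculation with illustrative numbers, not the paper's setup:

```python
# Compressed size under ideal arithmetic coding = sum of -log2(p) over symbols.
import numpy as np

def compressed_size_bits(token_probs: np.ndarray) -> float:
    """token_probs[i] = probability the model gave to the i-th actual symbol."""
    return float(-np.log2(token_probs).sum())

# Example: 1,000 bytes (8,000 raw bits) where the model averages p = 0.75 per
# byte compress to roughly 415 bits, i.e. about 5% of the raw size.
probs = np.full(1000, 0.75)
ratio = compressed_size_bits(probs) / (1000 * 8)
print(f"compressed to {ratio:.1%} of raw size")
```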
8). Compositional Foundation Models - proposes composing multiple expert foundation models trained on language, vision, and action data to solve long-horizon goals. (paper | tweet)
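A purely illustrative sketch of the compositional idea, with hypothetical stand-ins for the three expert models; this is not the paper's API:

```python
# Compose a language planner, a video/vision model, and an action model:
# the planner breaks a long-horizon goal into subgoals, the video model
# imagines how each subgoal looks, and the action model infers controls.
from typing import Any, Callable, List

def compose_plan(goal: str,
                 observation: Any,
                 language_model: Callable[[str], List[str]],
                 video_model: Callable[[str, Any], Any],
                 action_model: Callable[[Any], List[Any]]) -> List[Any]:
    actions: List[Any] = []
    for subgoal in language_model(goal):                 # language-level plan
        visual_plan = video_model(subgoal, observation)  # imagined rollout
        actions.extend(action_model(visual_plan))        # low-level actions
    return actions
```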
9). LLMs for IT Operations - proposes OWL, an LLM for IT operations tuned using a self-instruct strategy based on IT-related tasks; it discusses how to collect a quality instruction dataset and how to put together a benchmark. (paper | tweet)
10). KOSMOS-2.5 - a multimodal model for machine reading of text-intensive images, capable of document-level text generation and image-to-markdown text generation. (paper | tweet)