Episodes
-
In this episode, we discuss High-Dimension Human Value Representation in Large Language Models by Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, Pascale Fung. The paper addresses the importance of aligning large language models (LLMs) with human values, introducing UniVaR, a new method for representing the distributions of human values embedded in these models. UniVaR is agnostic to model architecture and training data; it is trained on the value-relevant outputs of eight multilingual LLMs and evaluated on four others to compare their embedded value distributions. The findings show that UniVaR can illuminate how the human values encoded in various LLMs vary across languages and cultures.
-
In this episode, we discuss Why do small language models underperform? Studying Language Model Saturation via the Softmax Bottleneck by Nathan Godey, Éric de la Clergerie, Benoît Sagot. This paper investigates the phenomenon of performance saturation in small language models, attributing the issue to a mismatch between the model's hidden dimension and the complexity of the target probability distribution. The softmax bottleneck, a known limitation of neural language models, is identified as a driver of this mismatch, leading to degenerate latent representations that emerge during late pretraining. The study demonstrates that models with fewer than 1000 hidden dimensions are particularly susceptible to this effect, resulting in degraded performance on downstream evaluations.
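To make the bottleneck concrete, here is a minimal sketch (sizes are illustrative, not the paper's setup): when the hidden dimension d is smaller than the vocabulary size V, the matrix of logits over any set of contexts has rank at most d, so some target distributions over the vocabulary cannot be represented exactly.

```python
import torch

# Softmax bottleneck in miniature: logits = H @ W.T has rank <= d, so a model
# with hidden size d < V cannot realize arbitrary next-token distributions.
V, d, n_contexts = 5000, 128, 512           # illustrative sizes, not the paper's
H = torch.randn(n_contexts, d)              # one hidden state per context
W = torch.randn(V, d)                       # output (unembedding) matrix
logits = H @ W.T                            # (n_contexts, V) logit matrix
print(torch.linalg.matrix_rank(logits))     # prints 128, far below V
```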
-
In this episode, we discuss Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention by Tsendsuren Munkhdalai, Manaal Faruqui, Siddharth Gopal. The paper presents a novel method for enabling Transformer-based Large Language Models to process extremely long inputs with bounded memory and computation. The technique, called Infini-attention, combines a compressive memory with both masked local attention and linear long-term attention inside a single Transformer layer. Its effectiveness is demonstrated on long-context tasks, including passkey retrieval over million-token sequences and book summarization with 500K-token inputs, while supporting efficient streaming inference with only a minimal increase in memory parameters.
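The compressive-memory idea can be sketched as follows (a minimal sketch under simplifying assumptions: a single head, identity projections in place of learned ones, and a fixed 0.5 mix instead of the paper's learned gate):

```python
import torch
import torch.nn.functional as F

def sigma(x):                                   # ELU+1 feature map (linear attention)
    return F.elu(x) + 1.0

d = 64
M = torch.zeros(d, d)                           # associative memory matrix
z = torch.zeros(d)                              # normalization term

for segment in torch.randn(4, 128, d).unbind(0):    # stream of input segments
    Q = K = V = segment                             # stand-in for learned projections
    mem = (sigma(Q) @ M) / (sigma(Q) @ z + 1e-6).unsqueeze(-1)   # read old context
    local = torch.softmax(Q @ K.T / d**0.5, dim=-1) @ V          # local attention
    out = 0.5 * mem + 0.5 * local               # paper blends with a learned gate
    M = M + sigma(K).T @ V                      # write current segment into memory
    z = z + sigma(K).sum(dim=0)                 # state stays fixed-size as context grows
```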
-
In this episode, we discuss Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs by Keen You, Haotian Zhang, Eldon Schoop, Floris Weers, Amanda Swearngin, Jeffrey Nichols, Yinfei Yang, Zhe Gan. The paper presents Ferret-UI, a multimodal large language model tailored for interpreting and interacting with mobile UI screens; it copes with the elongated aspect ratios and small objects of interest typical of such screens by dividing each screen into sub-images for enhanced detail processing. The model is trained on a variety of UI-focused tasks with instruction-following data and region annotations, strengthening its abilities in tasks like icon recognition and conversational interaction. Ferret-UI demonstrates superior performance in UI comprehension and task execution compared to existing models, establishing a new benchmark for evaluating MLLMs on user interface understanding.
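A minimal sketch of the sub-image idea (the split rule and helper are illustrative assumptions, not the paper's implementation): the screen is divided along its longer axis so each half is encoded at higher effective resolution, alongside the full screen.

```python
from PIL import Image

def split_screen(img: Image.Image):
    """Return the full screen plus two detail crops along the longer axis."""
    w, h = img.size
    if h >= w:   # portrait screen: top and bottom halves
        subs = [img.crop((0, 0, w, h // 2)), img.crop((0, h // 2, w, h))]
    else:        # landscape screen: left and right halves
        subs = [img.crop((0, 0, w // 2, h)), img.crop((w // 2, 0, w, h))]
    return [img] + subs   # full view first, then the detail crops
```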
-
In this episode, we discuss Evaluating Text-to-Visual Generation with Image-to-Text Generation by Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, Deva Ramanan. The paper introduces VQAScore, a novel metric for evaluating the alignment of generated images to text prompts, utilizing a visual-question-answering model to score the relevance of images to prompts based on a simple yes-or-no question. Unlike existing metrics, the proposed VQAScore effectively handles complex prompts, demonstrating superior performance across numerous benchmarks, even when compared to proprietary models like GPT-4V. Additionally, the paper presents GenAI-Bench, a challenging new benchmark consisting of compositional text prompts and human ratings, and provides open-source access to their data and models to facilitate further research in text-to-visual generation evaluations.
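In sketch form, VQAScore reduces to a single forward pass; the model and tokenizer objects below are hypothetical stand-ins for a VQA model that exposes next-token logits.

```python
import torch

def vqa_score(model, tokenizer, image, prompt: str) -> float:
    question = f'Does this figure show "{prompt}"? Please answer yes or no.'
    logits = model(image=image, text=question)[0, -1]   # next-token logits
    yes_id = tokenizer.convert_tokens_to_ids("Yes")
    no_id = tokenizer.convert_tokens_to_ids("No")
    p = torch.softmax(logits[[yes_id, no_id]], dim=-1)
    return p[0].item()   # P("Yes"), renormalized over {Yes, No}
```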
-
In this episode, we discuss Future Lens: Anticipating Subsequent Tokens from a Single Hidden State by Koyena Pal, Jiuding Sun, Andrew Yuan, Byron C. Wallace, David Bau. The paper investigates whether a single hidden state vector for an input token in a model such as GPT-J-6B encodes enough information to predict multiple future tokens in a sequence. Using linear approximation and causal intervention methods, the researchers find that certain layers allow accurate future-token prediction from a single hidden state, with over 48% accuracy. They introduce "Future Lens", a visualization tool that leverages these findings to give a new perspective on transformer states and their predictive capabilities.
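The linear-approximation part can be sketched as a simple probe (sizes match GPT-J-6B, but the training data below is a random placeholder; the paper trains on hidden states collected from the real model):

```python
import torch
import torch.nn as nn

d_model, vocab = 4096, 50400                    # GPT-J-6B sizes
probe = nn.Linear(d_model, vocab)               # maps h_t to logits of token t+2
opt = torch.optim.Adam(probe.parameters(), lr=1e-4)

h = torch.randn(256, d_model)                   # hidden states (placeholder data)
y = torch.randint(0, vocab, (256,))             # actual token two positions ahead

for _ in range(100):
    loss = nn.functional.cross_entropy(probe(h), y)
    opt.zero_grad(); loss.backward(); opt.step()
```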
-
In this episode, we discuss Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity by Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, Jong C. Park. The paper introduces an adaptive QA model that optimizes the balance between efficiency and accuracy by choosing the appropriate response strategy for questions of varying complexity. A smaller language model classifies the question's complexity level, enabling the system to switch between different retrieval-augmented LLM strategies and even non-retrieval methods. The model outperforms existing baselines on various open-domain QA datasets, and the authors have made the code available on GitHub.
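A minimal sketch of the routing logic (the classifier, llm, and retriever callables are hypothetical; the paper labels complexity A/B/C and iterates multi-step retrieval adaptively rather than with a fixed cap):

```python
def answer(question, classifier, llm, retriever):
    complexity = classifier(question)       # "A", "B", or "C"
    if complexity == "A":                   # simple: answer from parametric memory
        return llm(question)
    if complexity == "B":                   # moderate: one retrieval step
        return llm(question, context=retriever(question))
    context, ans = [], ""                   # complex: iterative retrieve-and-reason
    for _ in range(3):                      # fixed cap for illustration only
        context += retriever(question + " " + ans)
        ans = llm(question, context=context)
    return ans
```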
-
In this episode, we discuss Mixture-of-Depths: Dynamically allocating compute in transformer-based language models by David Raposo, Sam Ritter, Blake Richards, Timothy Lillicrap, Peter Conway Humphreys, Adam Santoro. The study presents a method for transformers that allows for the dynamic allocation of computational resources within sequences by limiting the number of tokens processed at each layer using a top-k routing mechanism. This approach maintains a fixed tensor size and a static computation graph, which differs from other conditional computation strategies. The resulting model operates with fewer computations per forward pass and provides up to a 50% faster sampling rate post-training, while still matching the performance of baseline models with the same computational budget and training duration.
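One Mixture-of-Depths layer can be sketched as follows (a minimal sketch: a single top-k router in front of a generic block; the paper's causal-sampling fix and training details are omitted):

```python
import torch

def mod_layer(x, router, block, capacity=0.5):
    # e.g. router = nn.Linear(D, 1); block = a standard attention+MLP sub-block
    B, T, D = x.shape
    k = int(capacity * T)                           # fixed compute budget per layer
    scores = router(x).squeeze(-1)                  # (B, T) routing scores
    top = scores.topk(k, dim=-1).indices            # tokens that get full compute
    idx = top.unsqueeze(-1).expand(-1, -1, D)
    chosen = x.gather(1, idx)                       # (B, k, D) selected tokens
    w = scores.gather(1, top).unsqueeze(-1)         # router weight scales the update
    out = x.clone()                                 # unrouted tokens skip the block
    out.scatter_(1, idx, chosen + w * block(chosen))
    return out
```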
-
In this episode, we discuss WavLLM: Towards Robust and Adaptive Speech Large Language Model by Shujie Hu, Long Zhou, Shujie Liu, Sanyuan Chen, Hongkun Hao, Jing Pan, Xunying Liu, Jinyu Li, Sunit Sivasankaran, Linquan Liu, Furu Wei. The paper introduces WavLLM, a robust speech large language model with a dual-encoder system (one encoder for semantic content, another for speaker identity), enhanced by a two-stage curriculum learning approach and a prompt-aware weight adapter for flexible task handling. WavLLM excels at a broad range of speech-processing tasks, including automatic speech recognition (ASR), speech translation (ST), speaker verification (SV), emotion recognition (ER), and spoken question answering (SQA), demonstrating state-of-the-art performance and strong generalization across various contexts. Resources related to the model, including code and evaluation sets, have been made available for further research.
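A minimal sketch of the dual-encoder front end as described above (module choices, sizes, and the gating form are assumptions, not the paper's implementation): semantic and speaker features are fused before the LLM, and an adapter modulated by the text prompt reweights the fused speech features.

```python
import torch
import torch.nn as nn

class DualEncoderFrontEnd(nn.Module):
    def __init__(self, d_semantic=1024, d_speaker=768, d_llm=4096):
        super().__init__()
        self.proj = nn.Linear(d_semantic + d_speaker, d_llm)
        self.prompt_gate = nn.Linear(d_llm, d_llm)   # prompt-aware adapter (sketch)

    def forward(self, sem_feats, spk_feats, prompt_emb):
        # sem_feats: (B, T, d_semantic), spk_feats: (B, T, d_speaker)
        fused = self.proj(torch.cat([sem_feats, spk_feats], dim=-1))
        gate = torch.sigmoid(self.prompt_gate(prompt_emb)).unsqueeze(1)  # (B, 1, d_llm)
        return fused * gate          # the prompt modulates the speech features
```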
-
In this episode, we discuss Gecko: Versatile Text Embeddings Distilled from Large Language Models by Jinhyuk Lee, Zhuyun Dai, Xiaoqi Ren, Blair Chen, Daniel Cer, Jeremy R. Cole, Kai Hui, Michael Boratko, Rajvi Kapadia, Wen Ding, Yi Luan, Sai Meher Karthik Duddu, Gustavo Hernandez Abrego, Weiqiang Shi, Nithi Gupta, Aditya Kusupati, Prateek Jain, Siddhartha Reddy Jonnalagadda, Ming-Wei Chang, Iftekhar Naim. Gecko is a new text embedding model designed for efficient retrieval, built with a two-step knowledge distillation process from large language models. First, it generates varied synthetic query-passage pairs; then it improves the data by retrieving, selecting, and relabeling high-quality positive and negative candidates. Despite its smaller size, Gecko demonstrates strong retrieval performance, outpacing larger, higher-dimensional models on the Massive Text Embedding Benchmark (MTEB).
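The two-step distillation can be sketched as follows (llm and retriever are hypothetical callables; prompts are illustrative, not the paper's):

```python
def make_training_triplet(passage, corpus, llm, retriever):
    # Step 1: the LLM reads a sampled web passage and invents a task plus query.
    task = llm(f"Describe a retrieval task this passage could answer: {passage}")
    query = llm(f"Write a query for task '{task}' that is answered by: {passage}")
    # Step 2: retrieve neighbors, then let the LLM relabel the best positive,
    # which is not always the seed passage itself.
    candidates = retriever(query, corpus, k=20)
    positive = llm(f"Pick the passage that best answers '{query}'", candidates)
    negatives = [c for c in candidates if c != positive]
    return query, positive, negatives
```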
-
In this episode, we discuss ReALM: Reference Resolution As Language Modeling by Joel Ruben Antony Moniz, Soundarya Krishnan, Melis Ozyildirim, Prathamesh Saraf, Halim Cagri Ates, Yuan Zhang, Hong Yu, Nidhi Rajshree. This paper presents a method for using Large Language Models (LLMs) to resolve references, including complex ones such as entities on a user's screen or in the background, by framing reference resolution as a language modeling task. The proposed system shows significant improvements, with over 5% gains in handling on-screen references, compared to an existing system. Moreover, the paper reports that even the smallest model within their framework performs comparably to GPT-4, while their larger models outperform GPT-4.
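In sketch form, the key move is serializing candidate entities into the prompt so resolution becomes ordinary text prediction (the template below is an assumption, not the paper's exact format):

```python
def build_prompt(entities, request):
    # Number the candidate entities so the model can answer with indices.
    lines = [f"{i}. {e['type']}: {e['text']}" for i, e in enumerate(entities, 1)]
    return ("Entities on screen:\n" + "\n".join(lines) +
            f"\nUser request: {request}\n"
            "Which entity numbers does the request refer to?")

print(build_prompt(
    [{"type": "phone_number", "text": "555-0100"},
     {"type": "business", "text": "Joe's Pizza"}],
    "call the second one"))
```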
-
In this episode, we discuss sDPO: Don't Use Your Data All at Once by Dahyun Kim, Yungi Kim, Wonho Song, Hyeonwoo Kim, Yunsu Kim, Sanghoon Kim, Chanjun Park. The paper introduces stepwise DPO (sDPO), a novel technique for better aligning large language models (LLMs) with human preferences by utilizing preference datasets in stages rather than all at once. sDPO improves upon the direct preference optimization (DPO) process by employing progressively more-aligned reference models throughout training. The results show that models trained using sDPO outperform larger LLMs with more parameters, demonstrating the effectiveness of this stepwise approach.
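A minimal sketch of the stepwise loop (dpo_train stands in for a standard DPO training run; in practice the reference model is a frozen copy of the previous step's policy):

```python
def sdpo(model, preference_data, n_steps, dpo_train):
    chunk_size = len(preference_data) // n_steps
    reference = model                        # step 1: reference is the SFT model
    for s in range(n_steps):
        chunk = preference_data[s * chunk_size:(s + 1) * chunk_size]
        model = dpo_train(policy=model, reference=reference, data=chunk)
        reference = model                    # more-aligned reference for next step
    return model
```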
-
In this episode, we discuss LITA: Language Instructed Temporal-Localization Assistant by De-An Huang, Shijia Liao, Subhashree Radhakrishnan, Hongxu Yin, Pavlo Molchanov, Zhiding Yu, Jan Kautz. The paper introduces the Language Instructed Temporal-Localization Assistant (LITA), which tackles the issue of temporal localization in Large Language Models (LLMs) processing video content, where they struggle to identify "when" an event occurs in a video. LITA incorporates time tokens for better temporal representation, uses a SlowFast token architecture for finer temporal resolution, and emphasizes training on temporal localization data, introducing a new reasoning temporal localization task along with its dataset, ActivityNet-RTL. LITA demonstrates strong improvements on temporal localization tasks and video-based text generation, with the code available on GitHub for public use.
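The time-token idea can be sketched in a few lines (discretizing into T=100 relative chunks is an assumption for illustration): timestamps become special tokens the LM can read and emit.

```python
def to_time_token(t_seconds, video_len, T=100):
    """Map an absolute timestamp to a relative time token <1>..<T>."""
    idx = min(int(t_seconds / video_len * T) + 1, T)
    return f"<{idx}>"

def from_time_token(token, video_len, T=100):
    """Decode a time token back to seconds (midpoint of its chunk)."""
    idx = int(token.strip("<>"))
    return (idx - 0.5) / T * video_len

print(to_time_token(42.0, 120.0))      # "<36>"
print(from_time_token("<36>", 120.0))  # 42.6
```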
-
In this episode, we discuss AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks by Max Ku, Cong Wei, Weiming Ren, Harry Yang, Wenhu Chen. This paper presents AnyV2V, a framework that simplifies video-to-video editing by breaking it into two steps: an off-the-shelf image editing model modifies the first frame, and an image-to-video generation model then propagates that edit into a temporally coherent new video. The framework is designed to be versatile and does not require any additional training, allowing it to meet a broad range of user requirements for video editing.
-
In this episode, we discuss InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding by Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, Yansong Shi, Tianxiang Jiang, Songze Li, Hongjie Zhang, Yifei Huang, Yu Qiao, Yali Wang, Limin Wang. InternVideo2 is a cutting-edge video foundation model designed to understand and generate video content, achieving superior performance across multiple video and audio tasks. The training involves a progressive strategy that combines multiple learning techniques and emphasizes the connection between video and text, enhanced through semantic segmentation and the generation of captions. The model's capabilities were proven through rigorous testing, displaying exceptional proficiency in video captioning, dialogue, and understanding of extended video sequences.
-
In this episode, we discuss Giraffe: Adventures in Expanding Context Lengths in LLMs by Arka Pal, Deep Karkhanis, Manley Roberts, Samuel Dooley, Arvind Sundararajan, Siddartha Naidu. The paper reviews techniques for overcoming the fixed context length limitation in large language models like LLaMA or LLaMA 2 by modifying positional encodings and introduces a new truncation strategy. It presents three novel tasks for evaluation, finding that linear scaling of contexts at evaluation time improves model performance, especially with a truncated positional basis. The researchers release new models named Giraffe with extended context lengths, along with datasets and code on HuggingFace to encourage further exploration in context length extrapolation.
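The linear-scaling variant the paper evaluates can be sketched for RoPE-style encodings (a minimal sketch; Giraffe also studies other bases and a truncation strategy):

```python
import torch

def scaled_rope_angles(seq_len, dim, base=10000.0, scale=4.0):
    """RoPE angles with positions divided by `scale` at evaluation time,
    so a model trained on 4k positions can address ~16k tokens."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() / scale   # linear interpolation
    return torch.outer(positions, inv_freq)             # feed to cos()/sin()
```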
-
In this episode, we discuss Explorative Inbetweening of Time and Space by Haiwen Feng, Zheng Ding, Zhihao Xia, Simon Niklaus, Victoria Abrevaya, Michael J. Black, Xuaner Zhang. The paper presents a method for generating video sequences from just a starting and ending frame, called bounded generation, by utilizing a new sampling strategy named Time Reversal Fusion. This strategy merges forward and backward denoising processes, guided by the start and end frames respectively, to produce videos that transition naturally between the two given frames, enabling smooth motion inbetweening and looping videos when the two frames are identical. Time Reversal Fusion is shown to outperform previous methods at generating complex motions and 3D-consistent visuals without any additional training of the underlying model.
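At its core, one sampling step fuses two denoising paths (a minimal sketch with a hypothetical denoise function and an equal-weight fuse; the paper weights the two paths per frame):

```python
import torch

def trf_step(x_t, t, denoise, start_frame, end_frame):
    flip = lambda v: v.flip(dims=[0])               # reverse the frame axis
    fwd = denoise(x_t, t, cond=start_frame)         # path anchored at the first frame
    bwd = denoise(flip(x_t), t, cond=end_frame)     # path anchored at the last frame
    return 0.5 * (fwd + flip(bwd))                  # fused denoising prediction
```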
-
In this episode, we discuss Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking by Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, Noah D. Goodman. The paper presents Quiet-STaR, an advancement of the Self-Taught Reasoner (STaR), which teaches language models to generate internal rationales that improve their text predictions. By introducing a tokenwise parallel sampling algorithm, learnable tokens marking the start and end of each thought, and an extended teacher-forcing technique, the approach addresses the practical challenges of implementation. Results demonstrate that the method helps the model better predict difficult tokens, answer complex questions, and improve benchmark performance without task-specific fine-tuning, marking progress toward more general and scalable reasoning in LMs.
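At inference time the idea can be sketched as follows (the lm helpers and mixing head follow the paper's description of start/end-of-thought tokens and a learned interpolation, but this API is a hypothetical stand-in):

```python
def predict_next(lm, context, mixing_head):
    base = lm.next_token_probs(context)                        # no-thought baseline
    thought = lm.sample(context + ["<|startofthought|>"],
                        stop="<|endofthought|>")               # internal rationale
    with_thought = lm.next_token_probs(context + thought)
    w = mixing_head(context, thought)                          # learned mixing weight
    return w * with_thought + (1 - w) * base                   # blended prediction
```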
-
In this episode, we discuss Evaluating Large Language Models at Evaluating Instruction Following by Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, Danqi Chen. This paper examines the effectiveness of using large language models (LLMs) to evaluate how well other models follow instructions, and introduces a new meta-evaluation benchmark called LLMBar. The benchmark consists of 419 instances, each pairing an output that faithfully follows a given instruction with one that deviates from it, designed to challenge the evaluative capabilities of LLMs. The findings show that LLM evaluators vary widely in their ability to judge instruction adherence and that even the best evaluators leave room for improvement, and the paper proposes new prompting strategies to enhance evaluator performance.
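One simple evaluator improvement in the spirit of the paper's prompting strategies can be sketched as follows (the template and voting rule are assumptions): query the evaluator twice with the outputs in both orders so that position bias cancels out.

```python
def judge(llm, instruction, out_a, out_b):
    def ask(first, second):
        return llm(f"Instruction: {instruction}\n"
                   f"Output 1: {first}\nOutput 2: {second}\n"
                   "Which output follows the instruction better? Answer 1 or 2.")
    v1 = ask(out_a, out_b)      # original order
    v2 = ask(out_b, out_a)      # swapped order
    if v1 == "1" and v2 == "2":
        return "A"
    if v1 == "2" and v2 == "1":
        return "B"
    return "tie"                # inconsistent across orders
```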
-
In this episode, we discuss Evaluating Large Language Models as Generative User Simulators for Conversational Recommendation by Se-eun Yoon, Zhankui He, Jessica Maria Echterhoff, Julian McAuley. The paper presents a new protocol with five tasks to assess the performance of synthetic users, generated by large language models, aiming to mimic human behavior in conversational recommender systems. The tasks evaluate essential features such as discussing items, stating preferences, asking for recommendations, and providing feedback. Initial evaluations show that these tasks can identify how language models differ from actual human behavior and suggest how model tuning and prompting can improve the synthetic users' resemblance to real users.