Episodes
-
LLM alignment is the process of steering Large Language Models to operate in a manner consistent with intended human goals, preferences, and ethical principles. Its primary objective is to make LLMs helpful, honest, and harmless, ensuring their outputs align with specific values and are advantageous to users. This critical process prevents unintended or harmful outputs, mitigates issues like specification gaming and reward hacking, addresses biases and falsehoods, and manages the complexity of these powerful AI systems. Alignment is vital to transform unpredictable models into reliable, trustworthy, and beneficial tools, especially as AI capabilities advance.
-
The "Why We Think" from Lilian Weng, examines improving language models by allocating more computation at test time, drawing an analogy to human "slow thinking" or System 2. By treating computation as a resource, the aim is to design systems that can utilize this test-time effort effectively for better performance. Key approaches involve generating intermediate steps like Chain-of-Thought, employing decoding methods such as parallel sampling and sequential revision, using reinforcement learning to enhance reasoning, enabling external tool use, and implementing adaptive computation time. This allows models to spend more resources on analysis, similar to human deliberation, to achieve improved results.
-
Deep Research is an autonomous research agent built into ChatGPT. It performs multi-step online research over several minutes, behaving like a human researcher by searching, reading, analyzing, and synthesizing information from multiple sources. It produces detailed, cited reports. Unlike standard ChatGPT's single-step responses, Deep Research uses an agent architecture orchestrating specialized reasoning models (like o3-mini) and generalist models (like GPT-4).
-
vLLM is a high-throughput serving system for large language models. It addresses inefficient KV cache memory management in existing systems caused by fragmentation and lack of sharing, which limits batch size. vLLM uses PagedAttention, inspired by OS paging, to manage KV cache in non-contiguous blocks. This minimizes memory waste and enables flexible sharing, allowing vLLM to batch significantly more requests. As a result, vLLM achieves 2-4x higher throughput compared to state-of-the-art systems like FasterTransformer and Orca.
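The core idea can be sketched in a few lines. This is an illustrative toy, not vLLM's actual code: each sequence's KV cache lives in fixed-size blocks addressed through a per-sequence block table, so physical blocks need not be contiguous and can be allocated on demand.

```python
# Toy sketch of the PagedAttention bookkeeping idea (not vLLM's implementation):
# logical KV blocks are mapped to physical blocks via a block table, so memory
# is allocated one block at a time with no per-sequence contiguity requirement.
BLOCK_SIZE = 16  # tokens per KV-cache block

class BlockTable:
    """Maps a sequence's logical blocks to physical block ids."""
    def __init__(self):
        self.physical_blocks: list[int] = []
        self.num_tokens = 0

class KVCacheManager:
    def __init__(self, num_physical_blocks: int):
        self.free_blocks = list(range(num_physical_blocks))
        self.tables: dict[str, BlockTable] = {}

    def append_token(self, seq_id: str) -> int:
        """Return the physical block holding the next token's KV entries,
        allocating a fresh block only when the current one is full."""
        table = self.tables.setdefault(seq_id, BlockTable())
        if table.num_tokens % BLOCK_SIZE == 0:      # last block full (or none yet)
            table.physical_blocks.append(self.free_blocks.pop())
        table.num_tokens += 1
        return table.physical_blocks[-1]
```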
-
Qwen3 models come in both Mixture-of-Experts (MoE) and dense architectures. They offer hybrid thinking modes, allowing users to balance response speed and reasoning depth per task, controllable via parameters or tags. Qwen3 is pre-trained on a significantly expanded dataset of approximately 36 trillion tokens across 119 languages and refined through a multi-stage post-training pipeline, enhancing its multilingual support for global applications. The models also feature improved agentic capabilities, notably excelling in tool calling, which increases their utility for complex, interactive tasks.
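A hedged sketch of toggling the hybrid thinking mode, following the pattern shown in the Qwen3 model card; treat the model id and the `enable_thinking` parameter as assumptions if your version differs.

```python
# Hedged example of Qwen3's hybrid thinking controls (based on the model card):
# a hard switch via the chat-template parameter and a soft switch via a tag.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
messages = [{"role": "user", "content": "Explain KV caching. /no_think"}]  # soft switch via tag

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # hard switch: skip the <think>...</think> phase
)
```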
-
RAGEN is a modular system for training and evaluating LLM agents using multi-turn reinforcement learning. Built on the StarPO framework, it implements the full training loop including rollout generation, reward assignment, and trajectory optimization. RAGEN serves as research infrastructure to analyze LLM agent training dynamics, focusing on challenges like stability, generalization, and the emergence of reasoning in interactive environments.
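A schematic sketch of the multi-turn loop RAGEN implements (rollout generation, reward assignment, trajectory optimization); all names below are hypothetical placeholders, not RAGEN's actual API.

```python
# Schematic multi-turn agent RL loop (rollout -> reward -> optimization).
# `policy`, `env`, and their methods are hypothetical stand-ins.
def train_agent(policy, env, optimizer, num_iters=100,
                episodes_per_iter=16, max_turns=8):
    for _ in range(num_iters):
        trajectories = []
        for _ in range(episodes_per_iter):
            obs = env.reset()
            episode = []
            for _ in range(max_turns):                 # multi-turn rollout
                action = policy.generate(obs)          # LLM produces next action/message
                obs, reward, done = env.step(action)   # environment feedback
                episode.append((obs, action, reward))
                if done:
                    break
            trajectories.append(episode)
        loss = policy.trajectory_loss(trajectories)    # e.g. a PPO/GRPO-style objective
        optimizer.zero_grad()                          # PyTorch-style update
        loss.backward()
        optimizer.step()
```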
-
DeepSeek-Prover-V2 is an open-source large language model designed for formal theorem proving in Lean 4. Its training relies heavily on synthetic data, generated by using DeepSeek-V3 to decompose problems into subgoals, which are then recursively solved by a smaller 7B prover model. The model uses a two-stage training process, including supervised fine-tuning and reinforcement learning (GRPO), to bridge informal reasoning with formal proofs. It achieves state-of-the-art performance, particularly with its high-precision Chain-of-Thought mode.
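A toy Lean 4 fragment (not taken from the paper) illustrates what subgoal decomposition looks like: the main goal is split into `have` subgoals, with `sorry` marking the holes a prover model would fill.

```lean
-- Toy illustration of subgoal decomposition in Lean 4: the overall goal is
-- reduced to two smaller subgoals that can be attacked independently.
example (a b : Nat) (h : a = b) : a + a = b + b := by
  have h1 : a + a = a + b := by sorry   -- subgoal 1, solvable from h
  have h2 : a + b = b + b := by sorry   -- subgoal 2, solvable from h
  exact h1.trans h2
```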
-
The DeepSeek-Prover project aims to advance large language model capabilities in formal theorem proving by addressing the scarcity of training data. It uses autoformalization to convert informal high school and undergraduate math competition problems into formal statements, generating a large dataset of 8 million synthetic proofs. Quality filtering and formal verification with Lean 4 ensure data reliability. An iterative process enhances the model, leading to state-of-the-art performance on miniF2F and FIMO benchmarks, outperforming models like GPT-4.
-
The Model Context Protocol (MCP), introduced by Anthropic in November 2024, is an open protocol standardizing how applications provide context to LLMs. Acting like a "USB-C port for AI applications," it provides a standardized way to connect AI models to different data sources and tools. MCP employs a client-server architecture to overcome the complex "MxN integration problem" by establishing a common interface, reducing complexity to M+N. This allows for more robust and scalable AI applications by eliminating the need for custom connectors and fostering a unified ecosystem for LLM integration.
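A hedged sketch of a minimal MCP server, following the quickstart pattern in the official Python SDK; treat the import path and decorator names as assumptions if your SDK version differs.

```python
# Minimal MCP server sketch (based on the Python SDK's documented quickstart).
# The tool body is a stub; a connected LLM client can discover and call it
# through the protocol.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a (stubbed) forecast for the given city."""
    return f"Forecast for {city}: sunny, 22°C"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```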
-
LLM post-training is crucial for refining the reasoning abilities developed during pretraining. It employs fine-tuning on specific reasoning tasks, reinforcement learning to reward logical steps and coherent thought processes, and test-time scaling to enhance reasoning during inference. Techniques like Chain-of-Thought (CoT) and Tree-of-Thoughts (ToT) prompting, along with methods like Monte Carlo Tree Search (MCTS), allow LLMs to explore and refine reasoning paths. These post-training strategies aim to bridge the gap between statistical pattern learning and human-like logical inference, leading to improved performance on complex reasoning tasks.
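One closely related test-time technique, self-consistency over sampled chains of thought, fits in a few lines; this sketch assumes a hypothetical `sample_cot_answer` model call rather than any specific API.

```python
# Minimal sketch of self-consistency: sample several CoT completions and
# majority-vote on the extracted final answers.
from collections import Counter

def self_consistency(question: str, sample_cot_answer, k: int = 10) -> str:
    """Sample k CoT traces, return the most common final answer."""
    answers = [sample_cot_answer(question) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]
```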
-
Agent AI refers to interactive systems that perceive visual, language, and environmental data to produce meaningful embodied actions in physical and virtual worlds. It aims to create sophisticated and context-aware AI, potentially paving the way for AGI by leveraging generative AI and cross-reality training. Agent AI systems often use large foundation models (LLMs and VLMs) for enhanced perception, reasoning, and task planning. Continuous learning is crucial for these agents to adapt to dynamic environments, refine their behavior through interaction and feedback, and achieve self-improvement.
-
FlashAttention-3 accelerates attention on NVIDIA Hopper GPUs through three key innovations. First, it achieves producer-consumer asynchrony by dividing warps into producer (data loading with TMA) and consumer (computation with asynchronous Tensor Cores) roles, overlapping these critical phases. Second, it hides softmax latency by interleaving softmax operations with asynchronous GEMMs using techniques like pingpong scheduling and intra-warpgroup pipelining. Third, it leverages hardware-accelerated low-precision FP8 GEMM, employing block quantization and incoherent processing to increase throughput while mitigating accuracy loss.
-
FlashAttention-2 builds upon FlashAttention to achieve faster attention computation with better GPU resource utilization. It enhances parallelism by also parallelizing along the sequence length dimension, optimizing work partitioning between thread blocks and warps to reduce shared memory access. A key improvement is the reduction of non-matmul FLOPs, which are less efficient on modern GPUs optimized for matrix multiplication. These enhancements lead to significant speedups compared to FlashAttention and standard attention, reaching higher throughput and better model FLOPs utilization in end-to-end training for Transformers.
-
FlashAttention is an IO-aware attention mechanism designed to be fast and memory-efficient, especially for long sequences. Its core innovation is tiling, where input sequences are divided into blocks processed within the fast on-chip SRAM, significantly reducing reads and writes to the slower HBM. This contrasts with standard attention, which materializes the entire attention matrix in HBM. By minimizing HBM access and recomputing the attention matrix in the backward pass, FlashAttention achieves faster Transformer training and a linear memory footprint, outperforming many approximate attention methods that overlook memory access costs.
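A toy NumPy sketch of the tiling plus online-softmax idea: K/V are processed in blocks while running row-wise max and normalizer statistics are maintained, so the full attention matrix is never materialized. This is illustrative only; the real kernel operates on GPU SRAM tiles and also fuses the backward pass.

```python
# Illustrative tiled attention with online softmax (NumPy, not the CUDA kernel).
import numpy as np

def tiled_attention(Q, K, V, block=64):
    """Numerically equals softmax(Q K^T / sqrt(d)) V, computed block by block."""
    n, d = Q.shape
    out = np.zeros((n, d))
    row_max = np.full(n, -np.inf)     # running max of scores per query row
    denom = np.zeros(n)               # running softmax normalizer per query row
    scale = 1.0 / np.sqrt(d)
    for s in range(0, K.shape[0], block):
        Kb, Vb = K[s:s + block], V[s:s + block]
        S = (Q @ Kb.T) * scale                      # scores for this K/V block
        new_max = np.maximum(row_max, S.max(axis=1))
        p = np.exp(S - new_max[:, None])            # unnormalized probabilities
        alpha = np.exp(row_max - new_max)           # rescale previous accumulators
        denom = denom * alpha + p.sum(axis=1)
        out = out * alpha[:, None] + p @ Vb
        row_max = new_max
    return out / denom[:, None]
```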
-
PPO (Proximal Policy Optimization) is a reinforcement learning algorithm that balances simplicity, stability, sample efficiency, general applicability, and strong performance. PPO replaced TRPO (Trust Region Policy Optimization) as the default algorithm at OpenAI due to its simpler implementation and greater computational efficiency, while maintaining comparable performance. PPO approximates TRPO by clipping the policy gradient and using first-order optimization, avoiding the computationally intensive Hessian matrix and strict KL divergence constraints of TRPO. The clipping mechanism in PPO constrains policy updates, prevents excessively large changes, and promotes stability during training. Its surrogate objectives and clip function enable the reuse of training data, making PPO sample efficient, especially for complex tasks.
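The clipped surrogate objective is compact enough to show directly; here is a minimal PyTorch-style sketch (advantage estimation, value loss, and entropy bonus omitted).

```python
# PPO clipped surrogate loss: the probability ratio is clipped to
# [1 - eps, 1 + eps] so one update cannot move the policy too far from the
# policy that collected the data.
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    ratio = torch.exp(logp_new - logp_old)              # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))   # maximize surrogate => minimize negative
```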
-
Andrej Karpathy's tech talk (YouTube) provides a comprehensive yet accessible overview of Large Language Models (LLMs) like ChatGPT. The talk details the process of building an LLM, including pre-training, data processing, and neural network training. Key stages include downloading and filtering internet text, tokenizing the text, and training neural networks to model token relationships. The discussion covers the distinction between base models and assistants, highlighting fine-tuning to create conversational AIs. It also addresses challenges like hallucinations and mitigation strategies, such as knowledge-based refusal and tool use. The talk further explores reinforcement learning and the emergence of "thinking" in models.
-
Andrej Karpathy's talk, "Intro to Large Language Models," demystifies LLMs by portraying them as systems with two key components: a parameters file (the weights of the neural network) and a run file (the code that runs the network). The creation of these files starts with a computationally intensive training process, where a large amount of internet text is compressed into the model's parameters. The scaling laws show that LLM performance depends on the number of parameters and the amount of training data. Karpathy reviews how LLMs are evolving to incorporate external tools and multiple modalities. He presents his view of LLMs as the kernel process of an emerging operating system and also discusses the security challenges of LLMs, including jailbreak attacks, prompt injection attacks, and data poisoning.
-
DeepSeek-V2 is a Mixture-of-Experts (MoE) language model that balances strong performance with economical training and efficient inference. It uses a total of 236B parameters, with 21B activated for each token, and supports a context length of 128K tokens. Key architectural innovations include Multi-Head Latent Attention (MLA), which compresses the KV cache for faster inference, and DeepSeekMoE, which enables economical training through sparse computation. Compared to DeepSeek 67B, DeepSeek-V2 saves 42.5% of training costs, reduces the KV cache by 93.3%, and boosts maximum generation throughput by 5.76 times. It is pre-trained on 8.1T tokens of high-quality data and further aligned through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL).
-
Matrix calculus is essential for understanding and implementing deep learning. It provides the mathematical tools to optimize neural networks using gradient descent. The Jacobian matrix, a key concept, organizes partial derivatives of vector-valued functions. The vector chain rule simplifies derivative calculations in nested functions, common in neural networks. Automatic differentiation, used in modern libraries, relies on these principles. Grasping matrix calculus allows for a deeper understanding of model training and the implementation of custom neural networks.
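The two central objects mentioned above can be written out explicitly: the Jacobian collects all partial derivatives of a vector-valued function, and the vector chain rule composes Jacobians, which is exactly what reverse-mode automatic differentiation propagates.

```latex
% Jacobian of f : R^n -> R^m, and the vector chain rule used in backpropagation.
\frac{\partial \mathbf{f}}{\partial \mathbf{x}} =
\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n}
\end{bmatrix},
\qquad
\frac{\partial}{\partial \mathbf{x}}\, \mathbf{f}\bigl(\mathbf{g}(\mathbf{x})\bigr)
= \frac{\partial \mathbf{f}}{\partial \mathbf{g}}\,
  \frac{\partial \mathbf{g}}{\partial \mathbf{x}} .
```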
-
'S1' refers to simple test-time scaling, an efficient approach to enhance language model reasoning with minimal resources. It involves training a model on a small, carefully curated dataset like s1K and using budget forcing to control test-time compute. Budget forcing enforces maximum or minimum thinking tokens by appending delimiters or the word "Wait". The s1-32B model, developed using this method, outperforms other models on competition math questions. The approach combines a curated dataset with a straightforward test-time technique, leading to strong reasoning performance and effective test-time scaling.
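A schematic sketch of budget forcing (not the paper's code): if the model tries to stop before a minimum thinking budget, the end-of-thinking delimiter is suppressed and "Wait" is appended to force further reasoning; past the maximum budget, the delimiter is forced to end thinking. The `generate`, `count_tokens`, and delimiter names below are hypothetical.

```python
# Schematic budget-forcing decode loop (illustrative; API names are made up).
END_THINK = "</think>"   # assumed end-of-thinking delimiter

def budget_forced_generate(model, prompt, min_tokens=512, max_tokens=4096):
    trace = prompt
    thinking_tokens = 0
    while thinking_tokens < max_tokens:
        chunk = model.generate(trace, stop=[END_THINK])   # hypothetical call
        trace += chunk
        thinking_tokens += model.count_tokens(chunk)      # hypothetical call
        if thinking_tokens >= min_tokens:
            break                                          # allowed to stop thinking
        trace += " Wait"                                   # below budget: keep thinking
    return model.generate(trace + END_THINK)               # produce the final answer
```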