haonan3 / AnchorContext
AnchorAttention: Improved attention for LLM long-context training
☆213 Updated last year
Alternatives and similar repositories for AnchorContext
Users interested in AnchorContext are comparing it to the repositories listed below.
- Large Language Models Can Self-Improve in Long-context Reasoning ☆72 Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 Updated 9 months ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆75 Updated 6 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆107 Updated 3 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 Updated last year
- open-source code for paper: Retrieval Head Mechanistically Explains Long-Context Factuality ☆226 Updated last year
- FROM $f(x)$ AND $g(x)$ TO $f(g(x))$: LLMs Learn New Skills in RL by Composing Old Ones ☆56 Updated 2 months ago
- Diffusion Language Models For Code Infilling Beyond Fixed-size Canvas ☆99 Updated 4 months ago
- ☆346 Updated 5 months ago
- ☆85 Updated 2 months ago
- ☆115 Updated 3 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS25] ☆210 Updated last month
- A Sober Look at Language Model Reasoning ☆92 Updated last month
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆51 Updated 6 months ago
- A repo for open research on building large reasoning models ☆127 Updated this week
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆151 Updated 6 months ago
- ☆202 Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 Updated 6 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆182 Updated 5 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆55 Updated last week
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆243 Updated 4 months ago
- [NeurIPS 2024] Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" ☆193 Updated 10 months ago
- ☆143 Updated 4 months ago
- The official repository of the paper "Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models" ☆110 Updated 5 months ago
- Long Context Extension and Generalization in LLMs ☆62 Updated last year
- A brief and partial summary of RLHF algorithms. ☆142 Updated 10 months ago
- [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? ☆82 Updated 11 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆181 Updated 6 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆155 Updated 6 months ago
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆78 Updated last year