haonan3 / AnchorContext
AnchorAttention: Improved attention for LLM long-context training
☆207 · Updated 3 months ago
Alternatives and similar repositories for AnchorContext:
Users interested in AnchorContext are comparing it to the repositories listed below.
- A brief and partial summary of RLHF algorithms. ☆128 · Updated 2 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆190 · Updated 9 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆179 · Updated 2 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆163 · Updated 3 weeks ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆195 · Updated last month
- ☆176 · Updated last year
- Official implementation for the paper "d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning" ☆100 · Updated 3 weeks ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 8 months ago
- [NeurIPS 2024] Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" ☆147 · Updated 2 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆134 · Updated 7 months ago
- [ICLR 2025] Data and code for the paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆75 · Updated 5 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆82 · Updated 11 months ago
- [ICLR 2025 Spotlight] When Attention Sink Emerges in Language Models: An Empirical View ☆72 · Updated 6 months ago
- [ICLR 2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆158 · Updated last month
- ☆192 · Updated 2 months ago
- What Happened in LLMs' Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆63 · Updated 2 months ago
- ☆77 · Updated 2 weeks ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆136 · Updated last month
- Repo for the paper "Free Process Rewards without Process Labels" ☆145 · Updated last month
- ☆95 · Updated 2 weeks ago
- Code for the paper "A Sober Look at Progress in Language Model Reasoning" ☆41 · Updated 3 weeks ago
- ☆78 · Updated 3 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆69 · Updated 5 months ago
- Auto-interpretation pipeline and many other functionalities for multimodal SAE analysis ☆128 · Updated 3 months ago
- ☆95 · Updated last month
- ☆170 · Updated 2 weeks ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆215 · Updated this week
- ☆163 · Updated last month
- ☆69 · Updated 2 months ago