sail-sg / Attention-Sink
[ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight)
☆142 · Updated 4 months ago
Alternatives and similar repositories for Attention-Sink
Users interested in Attention-Sink are comparing it to the repositories listed below.
- A Sober Look at Language Model Reasoning ☆89 · Updated 2 weeks ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆122 · Updated 8 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆46 · Updated last year
- ☆185 · Updated 6 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆105 · Updated last month
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆48 · Updated 4 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆187 · Updated last year
- Official implementation of ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆69 · Updated 8 months ago
- ☆110 · Updated 2 months ago
- ☆45 · Updated 2 months ago
- ☆134 · Updated 8 months ago
- A brief and partial summary of RLHF algorithms ☆139 · Updated 9 months ago
- ☆344 · Updated 4 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆62 · Updated 3 months ago
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Langu… ☆81 · Updated last month
- AnchorAttention: Improved attention for LLMs long-context training ☆213 · Updated 10 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆86 · Updated 5 months ago
- A collection of papers on discrete diffusion models ☆166 · Updated 5 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆55 · Updated 9 months ago
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆49 · Updated last year
- [NeurIPS 2024] Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" ☆186 · Updated 9 months ago
- Test-time training on nearest neighbors for large language models ☆47 · Updated last year
- ☆104 · Updated 2 months ago
- ☆32 · Updated 6 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆127 · Updated last month
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆89 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆87 · Updated 9 months ago
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models ☆55 · Updated 6 months ago
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… ☆80 · Updated 10 months ago