sail-sg / Attention-Sink
[ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight)
☆155 · Jul 8, 2025 · Updated 7 months ago
Alternatives and similar repositories for Attention-Sink
Users interested in Attention-Sink are comparing it to the libraries listed below.
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆196 · Mar 4, 2024 · Updated last year
- ☆19 · May 20, 2025 · Updated 8 months ago
- Graph Transformers for Large Graphs ☆22 · Apr 26, 2024 · Updated last year
- Long Context Extension and Generalization in LLMs ☆62 · Sep 21, 2024 · Updated last year
- Triton version of GQA flash attention, based on the tutorial ☆12 · Aug 4, 2024 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Dec 29, 2025 · Updated last month
- Improving Your Model Ranking on Chatbot Arena by Vote Rigging (ICML 2025) ☆26 · Feb 25, 2025 · Updated 11 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆47 · Apr 15, 2025 · Updated 10 months ago
- V1: Toward Multimodal Reasoning by Designing Auxiliary Task ☆36 · Apr 14, 2025 · Updated 10 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Jan 11, 2025 · Updated last year
- ☆52 · May 19, 2025 · Updated 8 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆84 · Oct 23, 2024 · Updated last year
- [TMLR 2025] On Memorization in Diffusion Models ☆30 · Oct 5, 2023 · Updated 2 years ago
- cliptrase ☆47 · Sep 1, 2024 · Updated last year
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆22 · Apr 22, 2025 · Updated 9 months ago
- ☆27 · Nov 25, 2025 · Updated 2 months ago
- AnchorAttention: Improved attention for LLM long-context training ☆213 · Jan 15, 2025 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆52 · Oct 18, 2024 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆246 · Sep 12, 2025 · Updated 5 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆191 · Apr 17, 2025 · Updated 10 months ago
- ☆118 · Feb 11, 2025 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 5 months ago
- [ArXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models ☆23 · Oct 22, 2024 · Updated last year
- 🔱 Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs ☆71 · Mar 21, 2025 · Updated 10 months ago
- ☆131 · May 29, 2025 · Updated 8 months ago
- ☆25 · May 20, 2025 · Updated 8 months ago
- Learning to route instances for Human vs AI Feedback (ACL Main '25) ☆26 · Jul 23, 2025 · Updated 6 months ago
- ☆25 · Jun 29, 2025 · Updated 7 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆51 · Jul 15, 2025 · Updated 7 months ago
- ☆14 · May 14, 2019 · Updated 6 years ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆24 · Nov 25, 2024 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆341 · Feb 23, 2025 · Updated 11 months ago
- ☆13 · Apr 10, 2025 · Updated 10 months ago
- ☆13 · Sep 2, 2023 · Updated 2 years ago
- DatasetResearch: Benchmarking Agent Systems for Demand-Driven Dataset Discovery ☆20 · Sep 24, 2025 · Updated 4 months ago
- LAGr: Label Aligned Graphs for Better Systematic Generalization in Semantic Parsing ☆10 · Jun 1, 2022 · Updated 3 years ago
- https://arxiv.org/abs/2502.08942 ☆15 · Mar 31, 2025 · Updated 10 months ago
- Implementation of a Hierarchical Mamba as described in the paper: "Hierarchical State Space Models for Continuous Sequence-to-Sequence Mo… ☆15 · Nov 11, 2024 · Updated last year