Code for studying the super weight in LLMs
☆122 · Dec 3, 2024 · Updated last year
Alternatives and similar repositories for LLMSuperWeight
Users interested in LLMSuperWeight are comparing it to the libraries listed below.
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ☆38 · Sep 24, 2024 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" · ☆196 · Mar 4, 2024 · Updated 2 years ago
- ☆19 · Jul 31, 2025 · Updated 7 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆51 · Oct 18, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization · ☆172 · Nov 26, 2025 · Updated 3 months ago
- An automated data pipeline scaling RL to pretraining levels · ☆73 · Oct 11, 2025 · Updated 4 months ago
- Work in progress. · ☆79 · Nov 25, 2025 · Updated 3 months ago
- [TMLR 2025] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models · ☆125 · Feb 15, 2026 · Updated 3 weeks ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆35 · Jun 12, 2024 · Updated last year
- The official implementation of the DAC 2024 paper GQA-LUT · ☆21 · Dec 20, 2024 · Updated last year
- [ICLR'25] Code for KaSA, an official implementation of "KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models" · ☆20 · Jan 16, 2025 · Updated last year
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" · ☆53 · Oct 19, 2024 · Updated last year
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs · ☆388 · Apr 13, 2025 · Updated 10 months ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" · ☆80 · Mar 17, 2025 · Updated 11 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) · ☆157 · Jul 8, 2025 · Updated 8 months ago
- ☆159 · Feb 15, 2025 · Updated last year
- Code for the paper "A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis" · ☆19 · Jun 12, 2025 · Updated 8 months ago
- The open-source Mixture of Depths code and the official implementation of the paper "Router-Tuning: A Simple and Effective Approach for E…" · ☆28 · Feb 28, 2026 · Updated last week
- ☆165 · Jun 22, 2025 · Updated 8 months ago
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models · ☆42 · Oct 28, 2025 · Updated 4 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs · ☆97 · Nov 17, 2024 · Updated last year
- Beyond KV Caching: Shared Attention for Efficient LLMs · ☆20 · Jul 19, 2024 · Updated last year
- ☆25 · Oct 31, 2024 · Updated last year
- Paper dataset for "Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers" · ☆12 · Oct 20, 2024 · Updated last year
- ☆47 · Nov 8, 2024 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models · ☆24 · Nov 25, 2024 · Updated last year
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention · ☆53 · Aug 6, 2025 · Updated 7 months ago
- [ICLR 2026] SparseD: Sparse Attention for Diffusion Language Models · ☆59 · Feb 22, 2026 · Updated 2 weeks ago
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models · ☆33 · Jun 9, 2025 · Updated 9 months ago
- Implementation of the Pairformer model used in AlphaFold 3 · ☆14 · Mar 2, 2026 · Updated last week
- ☆15 · Jan 12, 2026 · Updated last month
- ☆13 · Jun 22, 2025 · Updated 8 months ago
- Set up an MCP server in 60 seconds. · ☆13 · Dec 12, 2024 · Updated last year
- Residual vector quantization for KV cache compression in large language models · ☆12 · Oct 22, 2024 · Updated last year
- Reference implementation of models from Nyonic Model Factory · ☆12 · May 13, 2024 · Updated last year
- Codebase for the ICML'24 paper "Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs" · ☆27 · Jun 25, 2024 · Updated last year
- ☆33 · Apr 22, 2025 · Updated 10 months ago
- ☆235 · Jun 11, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆527 · Feb 10, 2025 · Updated last year