mengxiayu / LLMSuperWeight
Code for studying the super weight in LLMs
☆121 · Dec 3, 2024 · Updated last year
Alternatives and similar repositories for LLMSuperWeight
Users interested in LLMSuperWeight are comparing it to the repositories listed below.
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆37 · Sep 24, 2024 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆196 · Mar 4, 2024 · Updated last year
- ☆19 · Jul 31, 2025 · Updated 6 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆52 · Oct 18, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Nov 26, 2025 · Updated 2 months ago
- An automated data pipeline scaling RL to pretraining levels ☆72 · Oct 11, 2025 · Updated 4 months ago
- Work in progress. ☆79 · Nov 25, 2025 · Updated 2 months ago
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆122 · Feb 10, 2025 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- [ICLR'25] Code for KaSA, an official implementation of "KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models" ☆20 · Jan 16, 2025 · Updated last year
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Dec 20, 2024 · Updated last year
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆52 · Oct 19, 2024 · Updated last year
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆388 · Apr 13, 2025 · Updated 10 months ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆75 · Mar 17, 2025 · Updated 10 months ago
- ☆158 · Feb 15, 2025 · Updated last year
- Code for the paper "A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis" ☆19 · Jun 12, 2025 · Updated 8 months ago
- The open-source Mixture of Depths code and the official implementation of the paper "Router-Tuning: A Simple and Effective Approach for E…" ☆28 · Oct 1, 2025 · Updated 4 months ago
- ☆163 · Jun 22, 2025 · Updated 7 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Nov 17, 2024 · Updated last year
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆42 · Oct 28, 2025 · Updated 3 months ago
- ☆25 · Oct 31, 2024 · Updated last year
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆20 · Jul 19, 2024 · Updated last year
- ☆46 · Nov 8, 2024 · Updated last year
- Paper dataset for "Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers" ☆13 · Oct 20, 2024 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆24 · Nov 25, 2024 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆52 · Aug 6, 2025 · Updated 6 months ago
- [ICLR 2026] SparseD: Sparse Attention for Diffusion Language Models ☆57 · Oct 7, 2025 · Updated 4 months ago
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models ☆31 · Jun 9, 2025 · Updated 8 months ago
- ☆13 · Jun 22, 2025 · Updated 7 months ago
- Residual vector quantization for KV cache compression in large language models ☆11 · Oct 22, 2024 · Updated last year
- ☆15 · Jan 12, 2026 · Updated last month
- Reference implementation of models from Nyonic Model Factory ☆12 · May 13, 2024 · Updated last year
- Implementation of the Pairformer model used in AlphaFold 3 ☆14 · Updated this week
- Set up an MCP server in 60 seconds. ☆13 · Dec 12, 2024 · Updated last year
- ☆33 · Apr 22, 2025 · Updated 9 months ago
- Codebase for the ICML'24 paper "Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs" ☆27 · Jun 25, 2024 · Updated last year
- ☆235 · Jun 11, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆524 · Feb 10, 2025 · Updated last year
- ☆13 · Jul 2, 2025 · Updated 7 months ago