MAGICS-LAB / OutEffHop
[ICML 2024] Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
☆20 · Updated 6 months ago
Alternatives and similar repositories for OutEffHop
Users interested in OutEffHop are comparing it to the libraries listed below.
- An extension to the GaLore paper, to perform Natural Gradient Descent in a low-rank subspace ☆18 · Updated last year
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆110 · Updated last year
- Compressing Large Language Models using Low Precision and Low Rank Decomposition ☆104 · Updated 11 months ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆54 · Updated 11 months ago
- Fast and memory-efficient exact attention ☆72 · Updated 8 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 11 months ago
- ☆152 · Updated 4 months ago
- ☆83 · Updated last year
- Work in progress. ☆74 · Updated 4 months ago
- QuIP quantization ☆59 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆124 · Updated 4 months ago
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆45 · Updated last year
- ☆146 · Updated 8 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆80 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Updated 3 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆227 · Updated 9 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆144 · Updated 8 months ago
- ☆64 · Updated last year
- The official repository of Quamba1 [ICLR 2025] & Quamba2 [ICML 2025] ☆59 · Updated 4 months ago
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆37 · Updated 9 months ago
- The homepage of the OneBit model quantization framework. ☆193 · Updated 9 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆102 · Updated 3 weeks ago
- ☆36 · Updated 3 months ago
- ☆105 · Updated last week
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) ☆26 · Updated 2 weeks ago
- Official implementation for Training LLMs with MXFP4 ☆101 · Updated 6 months ago
- This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" in ICML 2024. ☆104 · Updated last year
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆28 · Updated 8 months ago
- Code for studying the super weight in LLM ☆119 · Updated 11 months ago
- Experiments on Multi-Head Latent Attention ☆98 · Updated last year