MAGICS-LAB / OutEffHop
[ICML 2024] Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
☆21 · Updated 7 months ago
Alternatives and similar repositories for OutEffHop
Users interested in OutEffHop are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆74 · Updated 9 months ago
- An extension to the GaLore paper, to perform Natural Gradient Descent in a low-rank subspace ☆18 · Updated last year
- ☆159 · Updated 5 months ago
- ☆155 · Updated 10 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆111 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆91 · Updated 4 months ago
- Compressing Large Language Models using Low Precision and Low Rank Decomposition ☆106 · Updated 3 weeks ago
- ☆83 · Updated 2 years ago
- QuIP quantization ☆61 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆82 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆148 · Updated 9 months ago
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆29 · Updated 9 months ago
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆37 · Updated 10 months ago
- Transformers components but in Triton ☆34 · Updated 7 months ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆54 · Updated last year
- APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention ☆265 · Updated 2 weeks ago
- Code for studying the super weight in LLM ☆121 · Updated last year
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) ☆31 · Updated 2 weeks ago
- ☆112 · Updated 3 weeks ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆109 · Updated 8 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated 2 months ago
- ☆31 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆112 · Updated 7 months ago
- Code for the paper “Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling” ☆66 · Updated last week
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models ☆15 · Updated last year
- Work in progress. ☆75 · Updated 2 weeks ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆126 · Updated 5 months ago
- ☆37 · Updated 3 weeks ago
- Explore training for quantized models ☆25 · Updated 5 months ago