MAGICS-LAB / OutEffHop
[ICML 2024] Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
☆21 · Updated 9 months ago
Alternatives and similar repositories for OutEffHop
Users interested in OutEffHop are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆75 · Updated 11 months ago
- An extension to the GaLore paper, to perform Natural Gradient Descent in a low-rank subspace ☆18 · Updated last year
- ☆163 · Updated 7 months ago
- Transformers components but in Triton ☆34 · Updated 8 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆84 · Updated last year
- ☆158 · Updated 11 months ago
- APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention ☆270 · Updated 2 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆229 · Updated last year
- QuIP quantization ☆61 · Updated last year
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆56 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Updated 6 months ago
- This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" in ICML 2024. ☆106 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆114 · Updated last week
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆112 · Updated last year
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆29 · Updated 11 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Experiments on Multi-Head Latent Attention ☆99 · Updated last year
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆77 · Updated last year
- Token Omission Via Attention ☆128 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 7 months ago
- ☆83 · Updated 2 years ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆154 · Updated 11 months ago
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆32 · Updated 4 months ago
- Compressing Large Language Models using Low Precision and Low Rank Decomposition ☆106 · Updated 2 months ago
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- The homepage of the OneBit model quantization framework. ☆200 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆110 · Updated 10 months ago
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆188 · Updated 4 months ago
- Explore training for quantized models ☆26 · Updated 6 months ago
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆39 · Updated last year