MAGICS-LAB / OutEffHop
[ICML 2024] Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
☆21 · Updated 8 months ago
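For context on what OutEffHop changes: the paper grounds outlier-efficient attention in a modern Hopfield model whose retrieval rule corresponds to a softmax with an extra unit in the denominator (often written softmax_1), which lets a head attend to "nothing" instead of forcing probability mass onto outlier tokens. Below is a minimal sketch of that idea under stated assumptions; `softmax_1` and `attention_softmax1` are illustrative names, not the repository's API.

```python
# Minimal sketch (assumption: OutEffHop's outlier-efficient retrieval is
# associated with the denominator-plus-one softmax, softmax_1). Names are
# illustrative, not the repo's actual interface.
import torch

def softmax_1(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # softmax_1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)).
    # Shift by a non-negative max for numerical stability; the implicit
    # extra logit is 0, so it becomes exp(-m) after the same shift.
    m = x.max(dim=dim, keepdim=True).values.clamp(min=0.0)
    e = torch.exp(x - m)
    return e / (torch.exp(-m) + e.sum(dim=dim, keepdim=True))

def attention_softmax1(q, k, v):
    # Standard scaled dot-product attention with softmax replaced by
    # softmax_1, so attention weights can sum to less than 1.
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return softmax_1(scores) @ v

# Usage: q, k, v of shape (batch, heads, seq, head_dim).
q = k = v = torch.randn(1, 2, 4, 8)
out = attention_softmax1(q, k, v)  # same shape as v
```

With all-zero logits over n keys, each weight is 1/(1 + n) and the weights sum to n/(1 + n) < 1; standard softmax would force them to sum to exactly 1, which is the mechanism tied to activation outliers.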
Alternatives and similar repositories for OutEffHop
Users interested in OutEffHop are comparing it to the libraries listed below
- An extension of the GaLore paper that performs Natural Gradient Descent in a low-rank subspace ☆18 · Updated last year
- Fast and memory-efficient exact attention ☆74 · Updated 10 months ago
- Code for studying the super weight in LLM ☆120 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention ☆268 · Updated last month
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆38 · Updated 11 months ago
- ☆157 · Updated 10 months ago
- ☆162 · Updated 6 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆229 · Updated last year
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆111 · Updated last year
- Experiments on Multi-Head Latent Attention ☆99 · Updated last year
- QuIP quantization ☆61 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆83 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆90 · Updated 5 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆151 · Updated 10 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 6 months ago
- This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" in ICML 2024. ☆105 · Updated last year
- Token Omission Via Attention ☆128 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆116 · Updated 8 months ago
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- Compressing Large Language Models using Low Precision and Low Rank Decomposition ☆105 · Updated last month
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆244 · Updated 7 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 11 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- ☆83 · Updated 2 years ago
- The homepage of the OneBit model quantization framework. ☆197 · Updated 11 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆201 · Updated last year
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆55 · Updated last year
- Explore training for quantized models ☆25 · Updated 5 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Updated 9 months ago