FasterDecoding / BitDelta
☆204 · Updated last year
Alternatives and similar repositories for BitDelta
Users interested in BitDelta are comparing it to the libraries listed below.
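For context on what these repos are alternatives to: BitDelta compresses a fine-tune down to the sign of its weight delta plus a single trainable scale per weight matrix. A minimal Python sketch of that idea (not the official implementation; the actual repo also calibrates the scales by distillation against the full fine-tuned model):

```python
import torch

def bitdelta_compress(w_base: torch.Tensor, w_fine: torch.Tensor):
    # Delta between the fine-tuned and base weight matrices.
    delta = w_fine - w_base
    # 1-bit representation: keep only the sign of each delta entry...
    sign = torch.sign(delta)
    # ...plus one per-matrix scale, initialized to the mean |delta|.
    # (The paper then refines these scales by distillation; omitted here.)
    scale = delta.abs().mean()
    return sign, scale

def bitdelta_apply(w_base: torch.Tensor, sign: torch.Tensor,
                   scale: torch.Tensor) -> torch.Tensor:
    # Reconstruct an approximate fine-tuned matrix from base + 1-bit delta.
    return w_base + scale * sign
```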
- ☆128 · Updated 2 years ago
- The official repo for "LLoCo: Learning Long Contexts Offline" · ☆118 · Updated last year
- PB-LLM: Partially Binarized Large Language Models · ☆157 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) · ☆147 · Updated last year
- ☆71 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients · ☆201 · Updated last year
- ☆85 · Updated 2 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" · ☆176 · Updated last year
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… · ☆157 · Updated 9 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" · ☆251 · Updated last year
- ☆235 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆163 · Updated 9 months ago
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD · ☆33 · Updated 7 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … · ☆60 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" · ☆155 · Updated last year
- Code for studying the super weight in LLMs · ☆120 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆205 · Updated last year
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications · ☆52 · Updated 3 months ago
- ☆207 · Updated 2 weeks ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆131 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" · ☆280 · Updated 2 years ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 · ☆355 · Updated last week
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" · ☆188 · Updated 2 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… · ☆148 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs · ☆114 · Updated this week
- Token Omission Via Attention · ☆128 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models · ☆260 · Updated last year
- This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024) · ☆106 · Updated last year
- Experiments on speculative sampling with Llama models · ☆127 · Updated 2 years ago
- Spherically merge PyTorch/HF-format language models with minimal feature loss (see the SLERP sketch below) · ☆143 · Updated 2 years ago
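The last entry refers to SLERP-style model merging: interpolating each pair of weight tensors along a great circle rather than a straight line. A minimal sketch of spherical linear interpolation applied per tensor, assuming NumPy; the listed repo's actual API may differ:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray,
          eps: float = 1e-8) -> np.ndarray:
    # Angle between the two weight tensors, treated as flat unit vectors.
    a = v0.ravel() / (np.linalg.norm(v0) + eps)
    b = v1.ravel() / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    # Nearly parallel tensors: spherical weights degenerate, fall back to lerp.
    if np.sin(omega) < eps:
        return (1.0 - t) * v0 + t * v1
    # Great-circle interpolation, which tends to preserve weight geometry
    # better than plain averaging when merging models.
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return (s0 * v0.ravel() + s1 * v1.ravel()).reshape(v0.shape)

# Usage on two state dicts of matching shapes:
# merged = {name: slerp(0.5, base[name], other[name]) for name in base}
```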