FasterDecoding / BitDelta
☆203 · Updated 11 months ago
Alternatives and similar repositories for BitDelta
Users interested in BitDelta are comparing it to the libraries listed below. A short sketch of BitDelta's core idea follows the list.
- ☆128 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- ☆69 · Updated last year
- ☆85 · Updated 3 weeks ago
- Layer-Condensed KV cache with 10 times larger batch size, fewer params, and less computation. Dramatic speedup with better task performance… ☆157 · Updated 7 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆202 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆175 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 10 months ago
- ☆235 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆162 · Updated 7 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆205 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆389 · Updated last year
- [ICML 2025] From Low-Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆51 · Updated last month
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆37 · Updated last month
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆278 · Updated 2 years ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- ☆151 · Updated 9 months ago
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated last month
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆357 · Updated 3 weeks ago
- Code for studying the super weight in LLMs ☆121 · Updated 11 months ago
- QuIP quantization ☆61 · Updated last year
- Low-bit optimizers for PyTorch ☆132 · Updated 2 years ago
- Official PyTorch implementation of QA-LoRA ☆145 · Updated last year
- ☆53 · Updated last year
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆31 · Updated 5 months ago
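
For context on what the repositories above are compared against: BitDelta compresses the difference between a fine-tuned model and its base model to one bit per weight plus a single scale per weight matrix, so a fine-tune can be stored and served as a small delta on top of a shared base. The sketch below is a minimal PyTorch illustration of that idea; the helper names are hypothetical, not the repo's actual API, and BitDelta's additional step of calibrating the scales via distillation is omitted.

```python
import torch

def compress_delta(w_base: torch.Tensor, w_fine: torch.Tensor):
    # Hypothetical helper, not BitDelta's API: keep only the sign of the
    # fine-tuning delta (1 bit per weight) plus one scale per matrix.
    delta = w_fine - w_base
    sign = torch.sign(delta)     # the 1-bit component
    scale = delta.abs().mean()   # minimizes ||delta - scale * sign||_F
    return sign, scale

def reconstruct(w_base: torch.Tensor, sign: torch.Tensor, scale: torch.Tensor):
    # Approximate the fine-tuned weights from the base plus the 1-bit delta.
    return w_base + scale * sign

# Toy usage: a random "fine-tune" that differs slightly from its base.
base = torch.randn(512, 512)
fine = base + 0.02 * torch.randn(512, 512)
sign, scale = compress_delta(base, fine)
approx = reconstruct(base, sign, scale)
print((approx - fine).norm() / (fine - base).norm())  # relative delta error
```

The per-matrix scale above is the closed-form minimizer of the Frobenius error for a fixed sign pattern, which is why mean absolute delta is a natural starting point before any further calibration.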