FasterDecoding / BitDelta
☆192 · Updated last month
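BitDelta compresses the weight delta between a fine-tuned model and its base model down to one bit per parameter: the delta is replaced by its sign matrix plus a single scale per weight matrix. Below is a minimal sketch of that decomposition in PyTorch; the function names are illustrative, and the plain mean-absolute scale is only the initialization (the actual repo further refines the scales by distillation).

```python
import torch

def compress_delta(w_base: torch.Tensor, w_fine: torch.Tensor):
    # Delta between fine-tuned and base weights, reduced to a 1-bit
    # sign matrix plus one scalar scale for the whole matrix.
    delta = w_fine - w_base
    sign = torch.sign(delta)  # +1 / -1 (zeros map to 0, negligible in practice)
    # Mean absolute value minimizes ||delta - scale * sign||_F for a
    # fixed sign matrix; BitDelta then distills the scales end-to-end.
    scale = delta.abs().mean()
    return sign, scale

def reconstruct(w_base: torch.Tensor, sign: torch.Tensor, scale: torch.Tensor):
    # Approximate fine-tuned weights from the shared base plus 1-bit delta.
    return w_base + scale * sign
```

Since the base weights are shared, each additional fine-tune then costs roughly 1/16 of a 16-bit checkpoint, which is what makes serving many fine-tunes off one base model attractive.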
Alternatives and similar repositories for BitDelta:
Users interested in BitDelta are comparing it to the libraries listed below.
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆114 · Updated 7 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆145 · Updated 7 months ago
- PB-LLM: Partially Binarized Large Language Models ☆150 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆105 · Updated last month
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆190 · Updated 6 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆263 · Updated last year
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆327 · Updated 5 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆233 · Updated last month
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 5 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆204 · Updated last month
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (see the sketch after this list) ☆269 · Updated last week
- Official PyTorch implementation of QA-LoRA ☆122 · Updated 10 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆152 · Updated 6 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 3 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆368 · Updated 2 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 9 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆358 · Updated 11 months ago
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods ☆175 · Updated 2 weeks ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆212 · Updated 9 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆147 · Updated last week
- For releasing code related to compression methods for transformers, accompanying our publications ☆405 · Updated last week
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆221 · Updated last week
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆113 · Updated last month
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆219 · Updated last month
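The KIVI entry above is representative of the KV-cache quantization repos in this list. Here is a minimal sketch of its central trick, assuming a `(tokens, head_dim)` cache layout; shapes, names, and the `1e-6` clamp are illustrative, and the real method also keeps a small window of recent tokens in full precision.

```python
import torch

def asym_quant_2bit(x: torch.Tensor, dim: int):
    # Asymmetric (zero-point) 2-bit quantization: affinely map the
    # [min, max] range along `dim` onto the integer codes {0, 1, 2, 3}.
    x_min = x.amin(dim=dim, keepdim=True)
    x_max = x.amax(dim=dim, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-6) / 3  # 2 bits -> 3 intervals
    codes = torch.round((x - x_min) / scale).clamp(0, 3).to(torch.uint8)
    return codes, scale, x_min

def asym_dequant(codes, scale, x_min):
    # Reconstruct an approximation of the original cache entries.
    return codes.float() * scale + x_min

# KIVI's observation: outliers in the key cache cluster by channel, while
# the value cache is better behaved per token, so quantize keys with
# per-channel statistics and values with per-token statistics.
keys = torch.randn(128, 64)     # (tokens, head_dim), hypothetical sizes
values = torch.randn(128, 64)
k_codes, k_scale, k_min = asym_quant_2bit(keys, dim=0)     # per-channel
v_codes, v_scale, v_min = asym_quant_2bit(values, dim=-1)  # per-token
```

At 2 bits per entry plus the per-group scales and zero points, the cache shrinks roughly 8x versus fp16, which is what lets repos like these fit much longer contexts or larger batches on the same hardware.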