hahnyuan / PB-LLM
PB-LLM: Partially Binarized Large Language Models
☆151 · Updated last year
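As a rough illustration of the technique named in the title (not the repository's actual implementation), partial binarization keeps a small fraction of salient weights in full precision and binarizes the rest to a single scaled sign bit. The sketch below is a minimal PyTorch version under those assumptions; the function name, the `salient_frac` parameter, and the magnitude-based salience criterion are all illustrative choices, not PB-LLM's API.

```python
import torch

def partially_binarize(weight: torch.Tensor, salient_frac: float = 0.1):
    """Sketch of partial binarization: keep the top `salient_frac` of
    weights (by magnitude) in full precision; binarize the rest to
    {-alpha, +alpha}, where alpha is the mean absolute value of the
    binarized group (the standard binary-weight scaling factor)."""
    flat = weight.abs().flatten()
    k = max(1, int(salient_frac * flat.numel()))
    # Salient mask: the k largest-magnitude weights stay full precision.
    threshold = torch.topk(flat, k).values.min()
    salient = weight.abs() >= threshold
    # Binarize the remaining weights with a shared scaling factor.
    non_salient = weight[~salient]
    alpha = non_salient.abs().mean()
    quantized = weight.clone()
    quantized[~salient] = torch.sign(non_salient) * alpha
    return quantized, salient

# Usage: partially binarize one linear layer's weight matrix.
w = torch.randn(1024, 1024)
w_q, mask = partially_binarize(w, salient_frac=0.1)
```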
Alternatives and similar repositories for PB-LLM:
Users interested in PB-LLM are comparing it to the repositories listed below.
- ☆125 · Updated last year
- ☆145 · Updated last year
- ☆196 · Updated 4 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆261 · Updated 6 months ago
- QuIP quantization ☆52 · Updated last year
- ☆220 · Updated 10 months ago
- ☆122 · Updated last month
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆153 · Updated 5 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆273 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆158 · Updated 9 months ago
- Work in progress. ☆55 · Updated last week
- ☆119 · Updated 2 weeks ago
- A toolkit for fine-tuning, inferencing, and evaluating GreenBitAI's LLMs. ☆82 · Updated last month
- Reorder-based post-training quantization for large language models ☆188 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆125 · Updated 2 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆68 · Updated 11 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆363 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆195 · Updated 8 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆214 · Updated 3 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆156 · Updated 9 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆61 · Updated 5 months ago
- Code for the NeurIPS 2024 paper "QuaRot", end-to-end 4-bit inference of large language models ☆372 · Updated 4 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆105 · Updated 5 months ago
- ☆50 · Updated 5 months ago
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆112 · Updated 4 months ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆111 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆230 · Updated 2 months ago
- Official PyTorch implementation of QA-LoRA ☆131 · Updated last year
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆340 · Updated 8 months ago
- ☆67 · Updated 9 months ago