astramind-ai / BitMat
An efficient implementation of the method proposed in "The Era of 1-bit LLMs"
☆154 · Updated 4 months ago
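For context, here is a minimal sketch of the absmean ternary quantization described in "The Era of 1-bit LLMs" (weights mapped to {-1, 0, +1} with a per-tensor scale). This is illustrative PyTorch, not BitMat's actual kernels or API; the function names are hypothetical:

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Quantize a weight matrix to {-1, 0, +1} with a per-tensor scale,
    following the absmean scheme from "The Era of 1-bit LLMs":
    W_q = RoundClip(W / mean(|W|), -1, 1)."""
    scale = w.abs().mean().clamp(min=eps)      # gamma = mean |W|
    w_q = (w / scale).round().clamp_(-1, 1)    # ternary values in {-1, 0, 1}
    return w_q, scale

def ternary_matmul(x: torch.Tensor, w_q: torch.Tensor, scale: torch.Tensor):
    """Reference matmul that dequantizes on the fly. An optimized
    implementation would pack the ternary weights and use custom kernels
    instead of a dense float multiply."""
    return (x @ w_q.t()) * scale

# Usage: quantize a linear layer's weight and apply it.
w = torch.randn(256, 128)
x = torch.randn(4, 128)
w_q, scale = absmean_ternary_quantize(w)
y = ternary_matmul(x, w_q, scale)   # shape (4, 256)
```

The efficiency win of the method comes from the ternary structure: multiplications degenerate into additions, subtractions, and skips, which packed-weight kernels exploit.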
Alternatives and similar repositories for BitMat:
Users interested in BitMat are comparing it to the libraries listed below.
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆196 · Updated 7 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆270 · Updated last year
- ☆193 · Updated 3 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆223 · Updated 10 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆123 · Updated 3 months ago
- 1.58-bit LLaMa model ☆82 · Updated 11 months ago
- PB-LLM: Partially Binarized Large Language Models ☆151 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆177 · Updated last month
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆222 · Updated last month
- Docker image for NVIDIA GH200 machines, optimized for vLLM serving and HF Trainer finetuning ☆37 · Updated 2 weeks ago
- ☆112 · Updated 2 months ago
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆362 · Updated last year
- ☆126 · Updated 6 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆231 · Updated 2 weeks ago
- RWKV-7: Surpassing GPT ☆80 · Updated 3 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆143 · Updated 11 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆250 · Updated 5 months ago
- ☆113 · Updated 5 months ago
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs ☆80 · Updated last week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024☆277Updated 2 weeks ago
- Inference of Mamba models in pure C☆186Updated last year
- Token Omission Via Attention☆124Updated 5 months ago
- This is our own implementation of 'Layer Selective Rank Reduction'☆233Updated 9 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ☆99Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters☆253Updated 8 months ago
- Low-Rank adapter extraction for fine-tuned transformers models (see the SVD sketch after this list) ☆171 · Updated 10 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated 9 months ago
- ☆181 · Updated this week
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer ☆150 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications ☆413 · Updated last month
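As referenced at the low-rank adapter extraction entry above, a common way to recover a LoRA-shaped adapter from a fine-tuned checkpoint is a truncated SVD of the weight delta. This is a minimal sketch of that general technique, not the linked repo's API; the function name and shapes are hypothetical:

```python
import torch

def extract_lora(w_base: torch.Tensor, w_ft: torch.Tensor, rank: int = 16):
    """Approximate the fine-tuning delta W_ft - W_base with a rank-r
    factorization B @ A, the same shape a LoRA adapter uses."""
    delta = (w_ft - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions; split sqrt(S) between factors.
    root_s = s[:rank].sqrt()
    b = u[:, :rank] * root_s           # (out_features, rank)
    a = root_s[:, None] * vh[:rank]    # (rank, in_features)
    return a, b

# Usage: w_ft is approximated by w_base + b @ a.
w_base = torch.randn(512, 512)
w_ft = w_base + 0.01 * (torch.randn(512, 16) @ torch.randn(16, 512))
a, b = extract_lora(w_base, w_ft, rank=16)
residual = torch.norm(w_ft - (w_base + b @ a)) / torch.norm(w_ft - w_base)
print(residual)  # near zero when the true delta is low-rank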