dvmazur / mixtral-offloading
Run Mixtral-8x7B models in Colab or on consumer desktops
☆2,308 · Updated last year
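The pitch here is fitting a 4-bit-quantized Mixtral-8x7B into consumer VRAM by offloading inactive experts to CPU RAM. The repo ships its own MoE-aware offloading engine (plus a Colab notebook), so the sketch below is not its API; it only illustrates the same quantize-and-offload idea with stock Hugging Face transformers + accelerate, using the public Mixtral checkpoint as the model ID.

```python
# Minimal sketch, NOT mixtral-offloading's own API: 4-bit NF4 quantization
# plus automatic GPU/CPU layer offloading via transformers + accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

# 4-bit NF4 weights so the eight experts per layer fit in VRAM + system RAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # accelerate spreads layers across GPU, CPU, and disk
)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

mixtral-offloading itself goes further than this naive layer-wise offloading: per its accompanying paper, it keeps an LRU cache of recently used experts on the GPU and applies mixed HQQ quantization.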
Alternatives and similar repositories for mixtral-offloading:
Users interested in mixtral-offloading are comparing it to the libraries listed below.
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆5,930 · Updated 2 weeks ago
- Training LLMs with QLoRA + FSDP ☆1,472 · Updated 5 months ago
- Official PyTorch repository for "Extreme Compression of Large Language Models via Additive Quantization" (https://arxiv.org/pdf/2401.06118.p…) ☆1,249 · Updated last week
- ☆954 · Updated 2 months ago
- Tools for merging pretrained large language models. ☆5,571 · Updated this week
- Reaching LLaMA2 Performance with 0.1M Dollars ☆981 · Updated 9 months ago
- Accelerate your Hugging Face Transformers by 7.6-9x. Native to Hugging Face and PyTorch. ☆683 · Updated 8 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,131 · Updated this week
- [ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct ☆2,016 · Updated 5 months ago
- Modeling, training, eval, and inference code for OLMo ☆5,519 · Updated this week
- Implementation of the training framework proposed in "Self-Rewarding Language Models", from Meta AI ☆1,378 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,942 · Updated last week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,542 · Updated 5 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,962 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,002 · Updated last month
- Python bindings for Transformer models implemented in C/C++ using the GGML library. ☆1,859 · Updated last year
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance the model's perception of key information, compress the prompt and KV-Cache, which ach… ☆5,042 · Updated last month
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch☆1,795Updated 3 weeks ago
- A unified evaluation framework for large language models☆2,597Updated this week
- An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.☆4,818Updated 2 weeks ago
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling☆1,662Updated 9 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads☆2,503Updated 10 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference (see the sketch after this list). Documentation: ☆2,104 · Updated 2 weeks ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,093 · Updated last year
- Inference Llama 2 in one file of pure 🔥 ☆2,110 · Updated 11 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,151 · Updated 4 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,863 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆1,808 · Updated this week
- ☆4,076 · Updated 10 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,242 · Updated last month
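Several entries above (AWQ, AutoAWQ, GPTQ, AutoGPTQ) cover weight-only quantization, the same trick mixtral-offloading leans on to shrink the experts. Below is a minimal sketch of the AutoAWQ quantize-and-save flow referenced in the AutoAWQ entry; the model and output paths are placeholders, and the API may have shifted between releases, so treat it as illustrative rather than canonical.

```python
# Illustrative AutoAWQ flow (4-bit AWQ quantization). Paths are placeholders.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mistralai/Mistral-7B-v0.1"  # placeholder: any HF causal LM
quant_path = "mistral-7b-awq"             # placeholder output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibrate activation scales on a small dataset, then pack 4-bit weights.
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized checkpoint for later inference.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The saved checkpoint can then be reloaded with `AutoAWQForCausalLM.from_quantized(quant_path)` for inference on a consumer GPU.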