Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch.
☆683 · Aug 22, 2024 · Updated last year
Alternatives and similar repositories for GPTFast
Users interested in GPTFast are comparing it to the libraries listed below.
- Training LLMs with QLoRA + FSDP ☆1,542 · Nov 9, 2024 · Updated last year
- Simple and efficient pytorch-native transformer training and inference (batched) ☆78 · Apr 2, 2024 · Updated 2 years ago
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,204 · Aug 22, 2025 · Updated 8 months ago
- Official Pytorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,318 · Feb 26, 2026 · Updated 2 months ago
- Automatically evaluate your LLMs in Google Colab ☆688 · May 7, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆7,023 · Mar 15, 2026 · Updated last month
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,407 · Apr 11, 2024 · Updated 2 years ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Apr 8, 2024 · Updated 2 years ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,511 · Mar 4, 2026 · Updated last month
- PyTorch native post-training library ☆5,739 · Apr 24, 2026 · Updated last week
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆6,877 · Apr 23, 2026 · Updated last week
- Reaching LLaMA2 Performance with 0.1M Dollars ☆988 · Jul 23, 2024 · Updated last year
- Go ahead and axolotl questions ☆11,779 · Updated this week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,914 · May 17, 2025 · Updated 11 months ago
- ☆1,034 · Apr 3, 2026 · Updated 3 weeks ago
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,418 · Nov 29, 2024 · Updated last year
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,454 · Updated this week
- Sparsity-aware deep learning inference runtime for CPUs ☆3,162 · Jun 2, 2025 · Updated 10 months ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆6,085 · Apr 8, 2026 · Updated 3 weeks ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,727 · Jun 25, 2024 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆181 · May 2, 2024 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,915 · Sep 30, 2023 · Updated 2 years ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,333 · Mar 6, 2025 · Updated last year
- Structured Outputs ☆13,741 · Apr 16, 2026 · Updated 2 weeks ago
- Large Language Model Text Generation Inference ☆10,848 · Mar 21, 2026 · Updated last month
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,954 · May 3, 2024 · Updated last year
- Large Action Model framework to develop AI Web Agents ☆6,328 · Jan 21, 2025 · Updated last year
- Large World Model -- Modeling Text and Video with Millions Context ☆7,410 · Oct 19, 2024 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- High-speed Large Language Model Serving for Local Deployment ☆9,390 · Jan 24, 2026 · Updated 3 months ago
- Run Mixtral-8x7B models in Colab or consumer desktops ☆2,329 · Apr 8, 2024 · Updated 2 years ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,199 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,587 · Apr 8, 2026 · Updated 3 weeks ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,326 · Apr 25, 2026 · Updated last week
- Train Models Contrastively in Pytorch ☆788 · Mar 26, 2025 · Updated last year
- Official inference library for Mistral models ☆10,781 · Apr 20, 2026 · Updated last week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,909 · Jan 21, 2024 · Updated 2 years ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Oct 15, 2024 · Updated last year
- Fast, flexible LLM inference ☆7,074 · Apr 15, 2026 · Updated 2 weeks ago