meta-pytorch / gpt-fast
Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
☆6,172 · Updated 4 months ago
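To make concrete the kind of loop gpt-fast streamlines, below is a minimal, generic sketch of plain-PyTorch greedy autoregressive decoding. The `TinyLM` module, its sizes, and `greedy_generate` are hypothetical stand-ins for illustration, not gpt-fast's API; gpt-fast layers optimizations such as KV caching, torch.compile, quantization, and speculative decoding on top of a loop like this.

```python
# Illustrative sketch only (not gpt-fast code): naive greedy decoding in plain PyTorch.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """A stand-in causal language model: embedding -> transformer blocks -> logits."""
    def __init__(self, vocab_size=256, d_model=64, n_head=4, n_layer=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layer)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):  # tokens: (batch, seq)
        seq_len = tokens.size(1)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        x = self.blocks(self.embed(tokens), mask=causal_mask)
        return self.lm_head(x)  # (batch, seq, vocab)

@torch.no_grad()
def greedy_generate(model, prompt_tokens, max_new_tokens=16):
    tokens = prompt_tokens
    for _ in range(max_new_tokens):
        logits = model(tokens)  # recomputes the whole prefix each step (no KV cache)
        next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=1)
    return tokens

model = TinyLM().eval()
print(greedy_generate(model, torch.randint(0, 256, (1, 8))))
```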
Alternatives and similar repositories for gpt-fast
Users interested in gpt-fast are comparing it to the libraries listed below.
- PyTorch native post-training library ☆5,639 · Updated this week
- ☆4,109 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,466 · Updated 3 months ago
- Tools for merging pretrained large language models. ☆6,647 · Updated 2 weeks ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,165 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,635 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ☆7,855 · Updated 3 weeks ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,852 · Updated last year
- A PyTorch native platform for training generative AI models ☆4,892 · Updated this week
- Modeling, training, eval, and inference code for OLMo ☆6,263 · Updated last month
- Training LLMs with QLoRA + FSDP ☆1,534 · Updated last year
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,169 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,408 · Updated 5 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,242 · Updated last year
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,818 · Updated this week
- An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm. ☆5,016 · Updated 8 months ago
- Run Mixtral-8x7B models in Colab or consumer desktops ☆2,327 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,396 · Updated 3 weeks ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,684 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,648 · Updated last year
- Go ahead and axolotl questions ☆11,005 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,805 · Updated last year
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,234 · Updated last week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,884 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,406 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,795 · Updated last week
- The official PyTorch implementation of Google's Gemma models ☆5,589 · Updated 7 months ago
- PyTorch native quantization and sparsity for training and inference ☆2,601 · Updated this week
- AllenAI's post-training codebase ☆3,488 · Updated this week
- An Open-source Toolkit for LLM Development ☆2,797 · Updated 11 months ago