Dao-AILab / flash-attention
Fast and memory-efficient exact attention
☆21,957 · Updated this week
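For orientation, here is a minimal usage sketch of the flash-attention package. It assumes an installed `flash_attn` release that exposes `flash_attn_func` (the exact keyword arguments vary across versions) and half-precision tensors on a CUDA device; treat it as an illustration rather than a definitive API reference.

```python
# Minimal sketch, assuming flash_attn 2.x exposes flash_attn_func; check your installed version.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64

# FlashAttention expects (batch, seqlen, nheads, headdim) tensors in fp16/bf16 on GPU.
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
v = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)

# Exact (not approximate) attention computed without materializing the full
# seqlen x seqlen score matrix; causal=True applies a lower-triangular mask.
out = flash_attn_func(q, k, v, causal=True)  # -> (batch, seqlen, nheads, headdim)
print(out.shape)
```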
Alternatives and similar repositories for flash-attention
Users interested in flash-attention are comparing it to the libraries listed below.
- Accessible large language models via k-bit quantization for PyTorch. ☆7,931 · Updated last week
- Ongoing research training transformer models at scale ☆15,100 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,477 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,543 · Updated last week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,311 · Updated last week
- Transformer-related optimization, including BERT, GPT ☆6,390 · Updated last year
- Train transformer language models with reinforcement learning. ☆17,206 · Updated this week
- A framework for few-shot evaluation of language models. ☆11,298 · Updated last week
- Mamba SSM architecture ☆17,104 · Updated 3 weeks ago
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,755 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆22,800 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,219 · Updated last year
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,560 · Updated this week
- Development repository for the Triton language and compiler ☆18,319 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆69,007 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆18,756 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,424 · Updated 6 months ago
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆8,898 · Updated last week
- An open source implementation of CLIP. ☆13,324 · Updated 2 months ago
- Use PEFT or full-parameter training to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3, Qwen3-MoE, DeepSeek-R1, GLM4.5, InternLM3, Llama4, ...) and 300+ MLLMs (… ☆12,453 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,402 · Updated last year
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,022 · Updated 9 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,830 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,509 · Updated this week
- Large Language Model Text Generation Inference ☆10,749 · Updated 3 weeks ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,990 · Updated last week
- Tools for merging pretrained large language models. ☆6,718 · Updated last week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,886 · Updated last year
- PyTorch-native post-training library ☆5,654 · Updated last week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,180 · Updated 5 months ago