Dao-AILab / flash-attention
Fast and memory-efficient exact attention
☆20,023 · Updated last week
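For orientation, a minimal sketch of calling the library's core flash_attn_func entry point; it assumes a CUDA GPU and fp16/bf16 tensors (which the kernel requires), and the shapes here are purely illustrative:

```python
# Minimal flash-attn usage sketch (assumes a CUDA GPU; the kernel only
# accepts fp16/bf16 inputs). Layout: (batch, seqlen, nheads, headdim).
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal (decoder-style) attention; the output has the same shape as q.
out = flash_attn_func(q, k, v, causal=True)
```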
Alternatives and similar repositories for flash-attention
Users interested in flash-attention often compare it to the libraries listed below.
- Accessible large language models via k-bit quantization for PyTorch (see the QLoRA-style sketch after this list). ☆7,659 · Updated 3 weeks ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,009 · Updated last week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,211 · Updated last week
- Ongoing research training transformer models at scale. ☆13,906 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,832 · Updated last week
- Transformer-related optimization, including BERT and GPT. ☆6,331 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models". ☆12,813 · Updated 10 months ago
- Train transformer language models with reinforcement learning. ☆15,934 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆19,094 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs. ☆14,648 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆11,880 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the serving sketch after this list). ☆60,385 · Updated last week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,199 · Updated this week
- A framework for few-shot evaluation of language models. ☆10,433 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆10,710 · Updated last year
- Development repository for the Triton language and compiler. ☆17,289 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. ☆3,318 · Updated 3 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,970 · Updated 6 months ago
- PyTorch native post-training library. ☆5,547 · Updated this week
- Large Language Model Text Generation Inference. ☆10,580 · Updated last month
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆40,461 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆23,763 · Updated last year
- An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Asy… ☆8,180 · Updated 2 weeks ago
- Tools for merging pretrained large language models. ☆6,394 · Updated last month
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,662 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities. ☆21,787 · Updated 3 months ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,037 · Updated last week
- Example models using DeepSpeed. ☆6,698 · Updated last week
- An open source implementation of CLIP. ☆12,790 · Updated last month
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,128 · Updated 2 months ago
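Several entries above are routinely combined: bitsandbytes supplies the k-bit quantization and PEFT the LoRA adapters, in the pattern popularized by QLoRA. A minimal sketch of that setup, assuming the Hugging Face transformers/peft/bitsandbytes stack; the model name, target modules, and hyperparameters are illustrative choices, not values prescribed by any of these repos:

```python
# QLoRA-style setup sketch: load a causal LM in 4-bit via bitsandbytes,
# then attach trainable LoRA adapters via PEFT. The model name and
# hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 quantization, as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 over 4-bit weights
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative; any causal LM checkpoint works
    quantization_config=bnb,
    device_map="auto",
)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```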
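On the serving side, a minimal offline-batch sketch with vLLM; the model and sampling parameters are again illustrative:

```python
# Offline batch inference sketch with vLLM (assumes a CUDA GPU and that
# the checkpoint can be fetched from the Hugging Face Hub).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model chosen only for illustration
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["FlashAttention speeds up attention by"], params)
for out in outputs:
    print(out.outputs[0].text)
```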