Dao-AILab / flash-attention
Fast and memory-efficient exact attention
☆ 16,835 · Updated this week
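For context on what the listed alternatives are compared against, here is a minimal usage sketch of flash-attention's Python interface. It assumes a CUDA GPU, half-precision tensors, and the `flash_attn_func` entry point from the `flash_attn` package (installed via `pip install flash-attn`); shapes and values are illustrative only.

```python
# Minimal sketch: exact attention via the flash_attn package.
# Assumes a CUDA device and fp16/bf16 inputs in (batch, seqlen, nheads, headdim) layout.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Computes exact (not approximate) attention without materializing the
# full (seqlen x seqlen) score matrix, which is where the memory savings come from.
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```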
Alternatives and similar repositories for flash-attention:
Users interested in flash-attention are comparing it to the libraries listed below.
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆ 18,082 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆ 6,901 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆ 8,608 · Updated this week
- Train transformer language models with reinforcement learning. ☆ 13,166 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆ 10,366 · Updated 10 months ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆ 11,699 · Updated 3 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆ 9,319 · Updated this week
- Ongoing research training transformer models at scale ☆ 12,032 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆ 21,052 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 44,418 · Updated this week
- Transformer related optimization, including BERT, GPT ☆ 6,116 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆ 22,171 · Updated 8 months ago
- SGLang is a fast serving framework for large language models and vision language models. ☆ 13,051 · Updated this week
- An open source implementation of CLIP. ☆ 11,481 · Updated last week
- Large Language Model Text Generation Inference ☆ 9,992 · Updated this week
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆ 13,504 · Updated last week
- A framework for few-shot evaluation of language models. ☆ 8,595 · Updated this week
- Development repository for the Triton language and compiler ☆ 15,146 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆ 37,834 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆ 6,848 · Updated 9 months ago
- An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm. ☆ 4,802 · Updated 3 weeks ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆ 6,056 · Updated this week
- Latest Advances on Multimodal Large Language Models ☆ 14,664 · Updated this week
- Retrieval and Retrieval-augmented LLMs ☆ 9,296 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆ 11,948 · Updated this week
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆ 10,440 · Updated 4 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆ 5,856 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆ 40,626 · Updated 4 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆ 8,380 · Updated 11 months ago
- Universal LLM Deployment Engine with ML Compilation ☆ 20,355 · Updated last week