huggingface / accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
⭐ 8,896 · Updated this week
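As a rough illustration of what that description covers, here is a minimal sketch of an Accelerate-style training step. The model, optimizer, and dataloader below are placeholder assumptions, not part of this listing; the `Accelerator`, `prepare`, and `backward` calls are the library's documented quick-start API.

```python
# Minimal sketch of a training loop with huggingface/accelerate.
# The model/optimizer/dataloader objects are hypothetical placeholders.
import torch
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")  # or "bf16" / "fp8" where supported

model = torch.nn.Linear(128, 2)                                   # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataloader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 128), torch.randint(0, 2, (64,))),
    batch_size=8,
)

# prepare() wraps the objects for the current device and distributed setup.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward() so mixed precision / DDP work
    optimizer.step()
```

Launched with the `accelerate launch` CLI (after `accelerate config` to set up options such as FSDP or DeepSpeed), the same script runs unchanged on a single GPU, multiple GPUs, or multiple nodes.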
Alternatives and similar repositories for accelerate
Users interested in accelerate are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention (⭐ 18,150 · Updated this week)
- Accessible large language models via k-bit quantization for PyTorch. (⭐ 7,192 · Updated this week)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (⭐ 18,912 · Updated this week)
- Ongoing research training transformer models at scale (⭐ 12,701 · Updated last week)
- Transformer related optimization, including BERT, GPT (⭐ 6,224 · Updated last year)
- Train transformer language models with reinforcement learning. (⭐ 14,435 · Updated this week)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (⭐ 21,461 · Updated last month)
- Hackable and optimized Transformers building blocks, supporting a composable construction.