huggingface / accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
★ 7,958 · Updated this week
Related projects
Alternatives and complementary repositories for accelerate
- Fast and memory-efficient exact attention · ★ 14,279 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. · ★ 6,299 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. · ★ 8,660 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. · ★ 16,471 · Updated this week
- Ongoing research training transformer models at scale · ★ 10,595 · Updated this week
- Transformer related optimization, including BERT, GPT · ★ 5,890 · Updated 7 months ago
- Train transformer language models with reinforcement learning. · ★ 10,086 · Updated this week
- PyTorch extensions for high performance and large scale training. · ★ 3,195 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" · ★ 10,776 · Updated 3 months ago
- An open source implementation of CLIP. · ★ 10,344 · Updated last week
- A concise but complete full-attention transformer with a set of promising experimental features from various papers · ★ 4,793 · Updated this week
- LAVIS - A One-stop Library for Language-Vision Intelligence · ★ 9,943 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities · ★ 20,194 · Updated last week
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) · ★ 8,524 · Updated this week
- A Unified Library for Parameter-Efficient and Modular Transfer Learning · ★ 2,581 · Updated 2 weeks ago
- 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX. · ★ 26,242 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs · ★ 10,059 · Updated 5 months ago
- Foundation Architecture for (M)LLMs · ★ 3,034 · Updated 7 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) · ★ 4,502 · Updated 10 months ago
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch · ★ 8,415 · Updated 2 weeks ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. · ★ 35,508 · Updated this week
- A framework for few-shot evaluation of language models. · ★ 6,990 · Updated this week
- 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools · ★ 2,576 · Updated this week
- Serve, optimize and scale PyTorch models in production · ★ 4,218 · Updated 3 weeks ago
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) · ★ 6,952 · Updated last year
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto…) · ★ 12,150 · Updated this week
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. · ★ 2,037 · Updated 2 months ago
- Geometric Computer Vision Library for Spatial AI · ★ 9,978 · Updated this week
- This repository contains demos I made with the Transformers library by HuggingFace. · ★ 9,500 · Updated 3 weeks ago