huggingface / accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
⭐ 8,178 · Updated this week
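To make the description concrete, here is a minimal sketch of the library's documented training pattern (Accelerator, prepare, and backward); the model, optimizer, and data below are placeholders, not part of the library.

```python
# Minimal sketch of the accelerate training pattern (model/data are placeholders).
import torch
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")  # "bf16" or "fp8" where supported

model = torch.nn.Linear(128, 10)  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 128), torch.randint(0, 10, (256,))
)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)

# prepare() moves everything to the right device(s) and wraps the model
# for whichever backend was configured (DDP, FSDP, DeepSpeed, ...).
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward() so scaling/sharding work
    optimizer.step()
```

A script like this would typically be started with `accelerate launch script.py`, after setting up the distributed configuration once with `accelerate config`.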
Alternatives and similar repositories for accelerate:
Users interested in accelerate are comparing it to the libraries listed below.
- flash-attention: Fast and memory-efficient exact attention ⭐ 15,064 · Updated this week
- xformers: Hackable and optimized Transformers building blocks, supporting a composable construction. ⭐ 8,910 · Updated this week
- Megatron-LM: Ongoing research training transformer models at scale ⭐ 11,109 · Updated this week
- bitsandbytes: Accessible large language models via k-bit quantization for PyTorch. ⭐ 6,522 · Updated this week
- trl: Train transformer language models with reinforcement learning. ⭐ 10,609 · Updated this week
- peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (see the LoRA sketch after this list). ⭐ 16,978 · Updated this week
- FasterTransformer: Transformer related optimization, including BERT, GPT ⭐ 5,981 · Updated 9 months ago
- fairscale: PyTorch extensions for high performance and large scale training. ⭐ 3,232 · Updated this week
- einops: Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TF and others; see the sketch after this list) ⭐ 8,655 · Updated this week
- LoRA: Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ⭐ 11,119 · Updated last month
- x-transformers: A concise but complete full-attention transformer with a set of promising experimental features from various papers ⭐ 4,985 · Updated last week
- open_clip: An open source implementation of CLIP. ⭐ 10,804 · Updated last week
- unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ⭐ 20,584 · Updated last week
- bertviz: BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ⭐ 7,075 · Updated last year
- torchscale: Foundation Architecture for (M)LLMs ⭐ 3,038 · Updated 9 months ago
- apex: A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ⭐ 8,502 · Updated last month
- RWKV-LM: RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ⭐ 13,005 · Updated last week
- sentencepiece: Unsupervised text tokenizer for Neural Network-based text generation. ⭐ 10,479 · Updated last month
- optimum: 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ⭐ 2,667 · Updated this week
- Transformers-Tutorials: This repository contains demos I made with the Transformers library by HuggingFace. ⭐ 9,799 · Updated this week
- LAVIS: LAVIS - A One-stop Library for Language-Vision Intelligence ⭐ 10,161 · Updated last month
- qlora: QLoRA: Efficient Finetuning of Quantized LLMs ⭐ 10,168 · Updated 7 months ago
- trlx: A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ⭐ 4,567 · Updated last year
- adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning ⭐ 2,631 · Updated last week
- evaluate: 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ⭐ 2,082 · Updated last week
- ⭐ 10,780 · Updated last month
- DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ⭐ 36,255 · Updated this week
- fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ⭐ 30,809 · Updated last week
- webdataset: A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. ⭐ 2,426 · Updated last month
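For the peft entry above, here is a minimal, hedged sketch of wrapping a model with a LoRA adapter using peft's documented LoraConfig/get_peft_model API; the "gpt2" base model and the "c_attn" target module are illustrative assumptions, not recommendations.

```python
# Minimal LoRA sketch with peft; model name and target modules are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed small demo model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection (assumption)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```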
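And for the einops entry, a small self-contained example of its rearrange/reduce pattern notation on PyTorch tensors; the shapes are arbitrary and chosen only to make the output sizes easy to verify.

```python
# einops turns tensor shape manipulation into a readable pattern language.
import torch
from einops import rearrange, reduce

x = torch.randn(2, 3, 32, 32)  # (batch, channels, height, width)

# Split the image into 8x8 patches and flatten each patch (ViT-style).
patches = rearrange(x, "b c (h p1) (w p2) -> b (h w) (p1 p2 c)", p1=8, p2=8)
print(patches.shape)  # torch.Size([2, 16, 192])

# Global average pooling, written as an explicit reduction.
pooled = reduce(x, "b c h w -> b c", "mean")
print(pooled.shape)  # torch.Size([2, 3])
```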