deepspeedai / DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
☆41,509 · Updated this week
Alternatives and similar repositories for DeepSpeed
Users interested in DeepSpeed are comparing it to the libraries listed below.
- Example models using DeepSpeed ☆6,779 · Updated last month
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,392 · Updated 8 months ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,587 · Updated this week
- Ongoing research training transformer models at scale ☆15,100 · Updated this week
- Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/) ☆25,762 · Updated last year
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,264 · Updated last year
- Fast and memory-efficient exact attention ☆22,113 · Updated this week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,219 · Updated last year
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,477 · Updated last week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,002 · Updated 2 weeks ago
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆155,967 · Updated last week
- Train transformer language models with reinforcement learning. ☆17,297 · Updated this week
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,501 · Updated last week
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,346 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,830 · Updated last year
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ☆41,496 · Updated last week
- Inference code for Llama models ☆59,112 · Updated last year
- Universal LLM Deployment Engine with ML Compilation ☆21,981 · Updated last week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,931 · Updated 2 weeks ago
- Instruct-tune LLaMA on consumer hardware ☆18,979 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,426 · Updated last year
- Making large AI models cheaper, faster and more accessible ☆41,336 · Updated 2 weeks ago
- The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud. ☆20,277 · Updated last week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,326 · Updated this week
- ModelScope: bring the notion of Model-as-a-Service to life. ☆8,694 · Updated 2 weeks ago
- Transformer related optimization, including BERT, GPT ☆6,392 · Updated last year
- Development repository for the Triton language and compiler ☆18,319 · Updated this week
- Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs) ☆18,966 · Updated 6 months ago
- JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf ☆24,523 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆69,622 · Updated this week