metaseq — Repo for external large-scale work
☆6,554 · Updated Apr 27, 2024
Alternatives and similar repositories for metaseq
Users interested in metaseq are comparing it to the libraries listed below.
- Ongoing research training transformer models at scale ☆16,203 · Updated this week
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆32,212 · Updated Sep 30, 2025
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,114 · Updated Jan 23, 2026
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,426 · Updated Apr 13, 2026
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,258 · Updated Jul 17, 2024
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,745 · Updated Jan 8, 2024
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆42,231 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,409 · Updated Apr 26, 2025
- Making large AI models cheaper, faster and more accessible ☆41,379 · Updated Apr 27, 2026
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Updated Dec 9, 2023
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,366 · Updated Oct 28, 2024
- ☆2,964 · Updated Apr 21, 2026
- Fast and memory-efficient exact attention ☆23,628 · Updated this week
- Transformer-related optimization, including BERT, GPT ☆6,415 · Updated Mar 27, 2024
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,868 · Updated Oct 11, 2025
- Inference code for Llama models ☆59,382 · Updated Jan 26, 2025
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,463 · Updated this week
- Train transformer language models with reinforcement learning. ☆18,282 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,442 · Updated Apr 21, 2026
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆21,052 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,658 · Updated Apr 29, 2026
- Instruct-tune LLaMA on consumer hardware ☆18,945 · Updated Jul 29, 2024
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,513 · Updated Jan 14, 2026
- Accessible large language models via k-bit quantization for PyTorch. ☆8,178 · Updated this week
- Foundation Architecture for (M)LLMs ☆3,133 · Updated Apr 11, 2024
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,658 · Updated Jul 25, 2023
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,941 · Updated Dec 7, 2024
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,503 · Updated Apr 28, 2026
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models ☆160,288 · Updated this week
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,928 · Updated Mar 14, 2024
- Model parallel transformers in JAX and Haiku ☆6,368 · Updated Jan 21, 2023
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,234 · Updated Jul 19, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,899 · Updated Jun 10, 2024
- An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,279 · Updated Feb 25, 2022
- OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically. ☆37,409 · Updated Aug 17, 2024
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,438 · Updated Mar 20, 2024
- Example models using DeepSpeed ☆6,820 · Updated Mar 30, 2026
- CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. ☆5,175 · Updated Oct 27, 2025
- A collection of libraries to optimise AI model performance ☆8,349 · Updated Jul 22, 2024