Repo for external large-scale work
☆6,542 · Updated Apr 27, 2024 (last year)
Alternatives and similar repositories for metaseq
Users that are interested in metaseq are comparing it to the libraries listed below.
- Ongoing research training transformer models at scale ☆15,744 · Updated Mar 20, 2026 (last week)
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆32,190 · Updated Sep 30, 2025 (5 months ago)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,059 · Updated Jan 23, 2026 (2 months ago)
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,400 · Updated Feb 3, 2026 (last month)
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,256 · Updated Jul 17, 2024 (last year)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,742 · Updated Jan 8, 2024 (2 years ago)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,869 · Updated Mar 18, 2026 (last week)
- PyTorch extensions for high performance and large scale training. ☆3,404 · Updated Apr 26, 2025 (11 months ago)
- Making large AI models cheaper, faster and more accessible ☆41,376 · Updated Mar 16, 2026 (last week)
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Updated Dec 9, 2023 (2 years ago)
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,379 · Updated Oct 28, 2024 (last year)
- Fast and memory-efficient exact attention ☆22,938 · Updated this week
- ☆2,956 · Updated Mar 9, 2026 (2 weeks ago)
- Transformer related optimization, including BERT, GPT ☆6,400 · Updated Mar 27, 2024 (2 years ago)
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,875 · Updated Oct 11, 2025 (5 months ago)
- Inference code for Llama models ☆59,250 · Updated Jan 26, 2025 (last year)
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,445 · Updated Jun 2, 2025 (9 months ago)
- Train transformer language models with reinforcement learning. ☆17,781 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,388 · Updated Mar 18, 2026 (last week)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,841 · Updated Mar 18, 2026 (last week)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,580 · Updated this week
- Instruct-tune LLaMA on consumer hardware ☆18,961 · Updated Jul 29, 2024 (last year)
- Accessible large language models via k-bit quantization for PyTorch. ☆8,078 · Updated this week
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,494 · Updated Jan 14, 2026 (2 months ago)
- Foundation Architecture for (M)LLMs ☆3,137 · Updated Apr 11, 2024 (last year)
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,664 · Updated Jul 25, 2023 (2 years ago)
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,930 · Updated Dec 7, 2024 (last year)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,431 · Updated Mar 5, 2026 (3 weeks ago)
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆158,424 · Updated this week
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,932 · Updated Mar 14, 2024 (2 years ago)
- Model parallel transformers in JAX and Haiku ☆6,365 · Updated Jan 21, 2023 (3 years ago)
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,219 · Updated Jul 19, 2024 (last year)
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,858 · Updated Jun 10, 2024 (last year)
- An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library. ☆8,282 · Updated Feb 25, 2022 (4 years ago)
- OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamical… ☆37,435 · Updated Aug 17, 2024 (last year)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,438 · Updated Mar 20, 2024 (2 years ago)
- Example models using DeepSpeed ☆6,807 · Updated Mar 4, 2026 (3 weeks ago)
- CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex. ☆5,172 · Updated Oct 27, 2025 (5 months ago)
- A collection of libraries to optimise AI model performance ☆8,352 · Updated Jul 22, 2024 (last year)
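Several entries in the list above (PEFT, QLoRA, Instruct-tune LLaMA on consumer hardware, the LLaMA adapter work) center on low-rank adaptation: freezing a pretrained weight matrix and training only a small low-rank update. A minimal sketch of that idea in plain NumPy, not tied to any of the listed libraries' APIs (all names and dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 16, 4, 8  # rank r is much smaller than d_in/d_out

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def lora_forward(x):
    # base path plus the low-rank update B @ A, scaled by alpha / r
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, d_in))
# because B starts at zero, the adapted layer initially matches the frozen one
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` (2 * r * d parameters per layer instead of d * d) would receive gradients, which is what lets the repos above fine-tune large models on a single consumer GPU.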