torchscale: Foundation Architecture for (M)LLMs
☆3,135 · updated Apr 11, 2024
Alternatives and similar repositories for torchscale
Users interested in torchscale are comparing it to the libraries listed below.
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (☆22,033, updated Jan 23, 2026)
- Hackable and optimized Transformers building blocks, supporting a composable construction. (☆10,353, updated Feb 20, 2026)
- Fast and memory-efficient exact attention (☆22,361, updated this week; see the attention sketch after this list)
- General technology for enabling AI capabilities w/ LLMs and MLLMs (☆4,289, updated Dec 22, 2025)
- Transformer-related optimization, including BERT and GPT (☆6,394, updated Mar 27, 2024)
- Accessible large language models via k-bit quantization for PyTorch. (☆7,997, updated this week; see the 4-bit loading sketch after this list)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (☆41,648, updated this week)
- Train transformer language models with reinforcement learning. (☆17,460, updated this week)
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… (☆4,706, updated Jan 12, 2026)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… (☆9,513, updated this week; see the training-loop sketch after this list)
- PyTorch extensions for high performance and large scale training. (☆3,400, updated Apr 26, 2025)
- Ongoing research training transformer models at scale (☆15,242, updated this week)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,187, updated Jul 11, 2024)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (☆20,678, updated this week; see the LoRA sketch after this list)
- Repo for external large-scale work (☆6,543, updated Apr 27, 2024)
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" (☆1,212, updated Oct 22, 2023)
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. (☆32,159, updated Sep 30, 2025)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆10,838, updated Jun 10, 2024)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,375, updated Feb 21, 2026)
- LAVIS - A One-stop Library for Language-Vision Intelligence (☆11,167, updated Nov 18, 2024)
- Running large language models on a single GPU for throughput-oriented scenarios. (☆9,383, updated Oct 28, 2024)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,741, updated Jan 8, 2024)
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters (☆5,936, updated Mar 14, 2024)
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) (☆9,401, updated Feb 20, 2026; see the rearrange sketch after this list)
- An open-source framework for training large multimodal models. (☆4,068, updated Aug 31, 2024)
- Mamba SSM architecture (☆17,257, updated Feb 18, 2026)
- Development repository for the Triton language and compiler (☆18,460, updated this week)
- Making large AI models cheaper, faster and more accessible (☆41,359, updated this week)
- Large Language Model Text Generation Inference (☆10,774, updated Jan 8, 2026)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,170, updated Feb 21, 2026)
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… (☆3,296, updated Feb 9, 2026)
- An open source implementation of CLIP. (☆13,397, updated Feb 20, 2026)
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. (☆6,182, updated Aug 22, 2025)
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. (☆1,698, updated this week)
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. (☆30,860, updated Feb 21, 2026)
- A PyTorch native platform for training generative AI models (☆5,084, updated this week)
- Minimalistic large language model 3D-parallelism training (☆2,569, updated Feb 19, 2026)
- Code and documentation to train Stanford's Alpaca models, and generate the data. (☆30,271, updated Jul 17, 2024)
- PyTorch native post-training library (☆5,689, updated this week)
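
A few of the entries above are easiest to compare through their core API calls; the sketches below are minimal, hedged examples under stated assumptions, not authoritative usage. First, the exact-attention entry: a minimal call sketch, assuming the flash-attn package is installed and a CUDA device with fp16 support is available.

```python
import torch
from flash_attn import flash_attn_func

# Shapes are (batch, seqlen, nheads, headdim); fp16/bf16 tensors on a CUDA device.
q = torch.randn(1, 1024, 8, 64, dtype=torch.float16, device="cuda")
k, v = torch.randn_like(q), torch.randn_like(q)

# Exact (not approximate) attention, computed without materializing the
# full seqlen x seqlen score matrix.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([1, 1024, 8, 64])
```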
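
For the k-bit quantization entry, a minimal sketch of 4-bit loading through the transformers integration, assuming both transformers and bitsandbytes are installed; the checkpoint name is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize weights to NF4 on load; matmuls still run in bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# "facebook/opt-350m" is just an illustrative small checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", quantization_config=bnb_config
)
```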
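
For the launch-and-train entry, a minimal training-loop sketch assuming accelerate is installed; the toy linear model and random data are stand-ins for a real workload.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # detects device and any distributed setup
model = torch.nn.Linear(8, 1)  # toy stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loader = DataLoader(TensorDataset(torch.randn(32, 8), torch.randn(32, 1)), batch_size=4)

# prepare() moves everything to the right device and wraps for DDP if needed.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```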
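
For the PEFT entry, a minimal LoRA sketch assuming transformers and peft are installed; gpt2 and its c_attn projection are chosen only as a small illustrative target.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # small illustrative base model
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```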
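
Finally, for the tensor-operations entry, a minimal rearrange sketch: flattening a spatial grid into a token sequence, the kind of shape manipulation einops makes explicit in the pattern string.

```python
import torch
from einops import rearrange

x = torch.randn(2, 3, 4, 4)  # (batch, channels, height, width)

# Merge the spatial axes into one sequence axis; the pattern documents the shapes.
tokens = rearrange(x, "b c h w -> b (h w) c")
print(tokens.shape)  # torch.Size([2, 16, 3])
```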