NVIDIA / Megatron-LM
Ongoing research training transformer models at scale
☆12,261 · Updated this week
Alternatives and similar repositories for Megatron-LM:
Users interested in Megatron-LM are comparing it to the libraries listed below.
- Transformer related optimization, including BERT, GPT ☆6,144 · Updated last year
- Fast and memory-efficient exact attention (sketch after this list) ☆17,192 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch (sketch after this list) ☆6,972 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… (sketch after this list) ☆8,673 · Updated this week
- PyTorch extensions for high performance and large scale training ☆3,308 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (sketch after this list) ☆18,274 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (sketch after this list) ☆38,206 · Updated this week
- Train transformer language models with reinforcement learning ☆13,559 · Updated this week
- Unsupervised text tokenizer for Neural Network-based text generation (sketch after this list) ☆10,836 · Updated last month
- Hackable and optimized Transformers building blocks, supporting a composable construction ☆9,435 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (sketch after this list) ☆11,847 · Updated 4 months ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,342 · Updated last week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,062 · Updated last month
- Development repository for the Triton language and compiler ☆15,447 · Updated this week
- A PyTorch extension: tools for easy mixed precision and distributed training in PyTorch ☆8,645 · Updated 3 weeks ago
- Example models using DeepSpeed ☆6,470 · Updated 2 weeks ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,411 · Updated 10 months ago
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production (sketch after this list) ☆9,646 · Updated 2 weeks ago
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆13,769 · Updated this week
- An easy-to-use, scalable, and high-performance RLHF framework based on Ray (PPO & GRPO & REINFORCE++ & LoRA & vLLM & RFT) ☆6,518 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,171 · Updated 2 months ago
- Repo for external large-scale work ☆6,524 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,633 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆13,564 · Updated this week
- A framework for few-shot evaluation of language models ☆8,815 · Updated this week
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ☆7,373 · Updated last year
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python ☆31,390 · Updated 3 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,970 · Updated 3 weeks ago
- PyTorch native post-training library ☆5,138 · Updated this week
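
The "Fast and memory-efficient exact attention" entry is the flash-attn package. A minimal sketch of its functional API, assuming the `flash-attn` package is installed, a CUDA device is available, and tensors are fp16; shapes are illustrative:

```python
# Minimal flash-attn sketch (assumes CUDA + fp16; shapes are illustrative).
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact attention computed without materializing the (seqlen x seqlen) score matrix.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```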
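
For the k-bit quantization entry (bitsandbytes), the common path is the 🤗 Transformers integration rather than calling the library directly. A minimal sketch, assuming `transformers`, `bitsandbytes`, and a CUDA GPU; the checkpoint name is only an example:

```python
# Minimal 4-bit loading sketch via Transformers' bitsandbytes integration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights as 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize and compute in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",                    # example checkpoint
    quantization_config=quant_config,
    device_map="auto",                      # place layers on available devices
)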
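
For the launch/train entry (🤗 Accelerate), the core pattern is wrapping an ordinary PyTorch loop with an `Accelerator`. A minimal sketch with a toy model and data:

```python
# Minimal Accelerate training loop (toy model and data as stand-ins).
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device, DDP, and mixed-precision settings
model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
data = DataLoader(TensorDataset(torch.randn(256, 16), torch.randn(256, 1)), batch_size=32)

# prepare() moves everything to the right device and wraps for distributed runs.
model, optimizer, data = accelerator.prepare(model, optimizer, data)

for x, y in data:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward() for scaled/distributed grads
    optimizer.step()
```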
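
For the 🤗 PEFT entry, the usual flow is wrapping a pretrained model with a `LoraConfig`. A minimal sketch; the base model and `target_modules` are examples and are model-specific:

```python
# Minimal PEFT LoRA sketch (base model and target_modules are examples).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example base model
config = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection (model-specific)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA adapters remain trainable
```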
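
For the DeepSpeed entry, training runs through an engine returned by `deepspeed.initialize`. A minimal sketch with an illustrative ZeRO-2 config; real multi-GPU runs are normally started with the `deepspeed` launcher, and a CUDA device is assumed:

```python
# Minimal DeepSpeed sketch (illustrative config; normally run via the
# `deepspeed` launcher on CUDA devices).
import torch
import deepspeed

model = torch.nn.Linear(16, 1)
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # shard optimizer states and gradients
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-3}},
}
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
x = torch.randn(32, 16, device=engine.device, dtype=torch.float16)
loss = engine(x).float().pow(2).mean()
engine.backward(loss)  # handles loss scaling and gradient sharding
engine.step()
```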
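
For the unsupervised text tokenizer entry (SentencePiece), a minimal train-and-encode sketch; the corpus path, model prefix, and vocabulary size are placeholders:

```python
# Minimal SentencePiece sketch (paths and vocab_size are placeholders).
import sentencepiece as spm

# Train a subword model directly from raw text, one sentence per line.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="spm_demo",  # writes spm_demo.model / spm_demo.vocab
    vocab_size=8000,
    model_type="unigram",     # or "bpe", "char", "word"
)

sp = spm.SentencePieceProcessor(model_file="spm_demo.model")
ids = sp.encode("Megatron-LM trains transformers at scale.", out_type=int)
print(sp.decode(ids))
```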
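
The loralib entry implements the paper's low-rank update, W + (α/r)·BA, around a frozen pretrained weight. A minimal plain-PyTorch sketch of the idea (not the loralib API itself):

```python
# Minimal plain-PyTorch sketch of the LoRA update (not the loralib API).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():              # freeze the pretrained layer
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # Gaussian init
        self.B = nn.Parameter(torch.zeros(out_features, r))        # B=0: no-op at init
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus trainable low-rank correction (alpha/r) * B @ A @ x.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```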
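
For the 🤗 Tokenizers entry, a minimal BPE training sketch following the library's quicktour pattern; the corpus path and vocabulary size are placeholders:

```python
# Minimal Tokenizers BPE sketch (corpus path and vocab_size are placeholders).
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=8000, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)
print(tokenizer.encode("Fast tokenizers for research and production.").tokens)
```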