NVIDIA / Megatron-LM
Ongoing research training transformer models at scale
⭐ 12,600 · Updated this week
Alternatives and similar repositories for Megatron-LM
Users interested in Megatron-LM are comparing it to the libraries listed below.
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ⭐ 8,839 · Updated this week (usage sketch below)
- Transformer related optimization, including BERT, GPT ⭐ 6,211 · Updated last year
- Fast and memory-efficient exact attention ⭐ 17,846 · Updated this week (usage sketch below)
- PyTorch extensions for high performance and large scale training. ⭐ 3,331 · Updated last month
- Accessible large language models via k-bit quantization for PyTorch. ⭐ 7,142 · Updated this week (usage sketch below)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ⭐ 18,774 · Updated last week (usage sketch below)
- Train transformer language models with reinforcement learning. ⭐ 14,193 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ⭐ 38,997 · Updated this week (usage sketch below)
- Hackable and optimized Transformers building blocks, supporting a composable construction. ⭐ 9,591 · Updated last week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ⭐ 8,686 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ⭐ 2,088 · Updated 2 months ago
- An annotated implementation of the Transformer paper. ⭐ 6,296 · Updated last year
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ⭐ 6,372 · Updated last month
- Unsupervised text tokenizer for Neural Network-based text generation. ⭐ 10,994 · Updated 2 months ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ⭐ 31,543 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ⭐ 12,090 · Updated 6 months ago
- Large Language Model Text Generation Inference ⭐ 10,236 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ⭐ 21,420 · Updated 2 weeks ago
- An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Asy… ⭐ 7,075 · Updated last week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ⭐ 2,942 · Updated this week
- PyTorch native post-training library ⭐ 5,273 · Updated this week
- A framework for few-shot evaluation of language models. ⭐ 9,326 · Updated this week
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ⭐ 7,473 · Updated 2 weeks ago
- SGLang is a fast serving framework for large language models and vision language models. ⭐ 15,276 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ⭐ 10,500 · Updated last year
- Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes. ⭐ 29,644 · Updated this week
- Example models using DeepSpeed ⭐ 6,539 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ⭐ 9,710 · Updated this week
- Training and serving large-scale neural networks with auto parallelization. ⭐ 3,136 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ⭐ 4,667 · Updated last year
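For the launcher entry at the top of the list (🤗 accelerate), a minimal sketch of the device-agnostic training loop it enables, assuming the `accelerate` and `torch` packages; the toy model and data here are hypothetical stand-ins:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device / distributed config automatically

# Hypothetical toy model and data, just to exercise the loop.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
data = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(4)]
loader = torch.utils.data.DataLoader(data, batch_size=None)

# prepare() moves everything to the right device(s) and wraps for DDP if needed.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward() for mixed precision / DDP
    optimizer.step()
```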
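For the "fast and memory-efficient exact attention" entry (flash-attention), a minimal sketch of the direct functional API, assuming the `flash-attn` package and a CUDA GPU with fp16/bf16 inputs; shapes and sizes below are illustrative:

```python
import torch
from flash_attn import flash_attn_func

# (batch, seqlen, nheads, headdim), half precision, on GPU.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Exact (not approximate) attention, computed without materializing the
# full seqlen x seqlen score matrix in GPU memory.
out = flash_attn_func(q, k, v, causal=True)  # -> (2, 1024, 8, 64)
```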
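For the "k-bit quantization" entry (bitsandbytes), a minimal sketch of loading a model in 8-bit through the 🤗 Transformers integration, assuming `bitsandbytes`, `transformers`, `accelerate`, and a CUDA GPU; the model name is just an example:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Linear layers are replaced with 8-bit quantized equivalents at load time,
# cutting weight memory roughly in half versus fp16.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",                          # example checkpoint
    quantization_config=bnb_config,
    device_map="auto",               # requires accelerate for placement
)
```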
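For the 🤗 PEFT entry, a minimal sketch of attaching a LoRA adapter so only low-rank update matrices train, assuming the `peft` and `transformers` packages; the base model and hyperparameters are illustrative:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example checkpoint
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```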
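For the DeepSpeed entry, a minimal sketch of wrapping a model in the DeepSpeed engine, assuming the `deepspeed` package and a run started via the `deepspeed` launcher; the toy model and config values are illustrative, not recommendations:

```python
import torch
import deepspeed

model = torch.nn.Linear(10, 2)  # hypothetical toy model
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 1},  # ZeRO stage 1: shard optimizer states
}

# initialize() returns (engine, optimizer, dataloader, lr_scheduler).
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
for _ in range(4):
    x = torch.randn(8, 10).to(model_engine.device)
    loss = model_engine(x).pow(2).mean()
    model_engine.backward(loss)  # engine handles loss scaling / partitioning
    model_engine.step()
```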