NVIDIA / Megatron-LM
Ongoing research training transformer models at scale
★12,032 · Updated this week
Alternatives and similar repositories for Megatron-LM:
Users who are interested in Megatron-LM are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention · ★16,835 · Updated this week (usage sketch below)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… · ★8,608 · Updated this week (usage sketch below)
- Transformer-related optimization, including BERT, GPT · ★6,116 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. · ★18,082 · Updated this week (usage sketch below)
- Train transformer language models with reinforcement learning. · ★13,166 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. · ★37,834 · Updated this week (usage sketch below)
- PyTorch extensions for high performance and large scale training. · ★3,293 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. · ★6,901 · Updated this week (usage sketch below)
- QLoRA: Efficient Finetuning of Quantized LLMs · ★10,366 · Updated 10 months ago
- Development repository for the Triton language and compiler · ★15,146 · Updated this week (usage sketch below)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities · ★21,052 · Updated last month
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" · ★11,699 · Updated 3 months ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" · ★6,325 · Updated last month
- Unsupervised text tokenizer for Neural Network-based text generation. · ★10,771 · Updated last week (usage sketch below)
- A framework for few-shot evaluation of language models. · ★8,595 · Updated this week
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) · ★7,306 · Updated last year
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training · ★21,717 · Updated 7 months ago
- Example models using DeepSpeed · ★6,414 · Updated 2 weeks ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. · ★9,319 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ★2,042 · Updated 3 weeks ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. · ★9,037 · Updated this week
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… · ★13,613 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) · ★4,621 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs · ★44,418 · Updated this week (usage sketch below)
- State-of-the-Art Text Embeddings · ★16,415 · Updated last week (usage sketch below)
- Large Language Model Text Generation Inference · ★9,992 · Updated this week
- ★2,782 · Updated this week
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries · ★7,154 · Updated last week
- Repo for external large-scale work · ★6,522 · Updated 11 months ago
- An easy-to-use LLMs quantization package with user-friendly APIs, based on GPTQ algorithm. · ★4,802 · Updated 3 weeks ago
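
Usage sketches for the entries marked "(usage sketch below)". These are minimal, hedged examples, not the libraries' only supported configurations; model names, shapes, and paths are illustrative.

The "fast and memory-efficient exact attention" entry matches flash-attention's tagline. Assuming that library, a minimal sketch of its functional API; it requires a CUDA GPU and fp16/bf16 tensors, and the shapes here are arbitrary.

```python
import torch
from flash_attn import flash_attn_func

# Arbitrary example shapes: (batch, seqlen, nheads, headdim).
q = torch.randn(2, 1024, 16, 64, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact attention, computed without materializing the seqlen x seqlen matrix.
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # torch.Size([2, 1024, 16, 64])
```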
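
The 🚀 entry matches Hugging Face Accelerate's tagline. Assuming that library, a minimal training-loop sketch; the linear model and random dataset are toy stand-ins.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device/distributed config from the launcher
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8
)

# prepare() moves and wraps everything for the current device/distributed setup.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```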
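
For the 🤗 PEFT entry, a sketch of attaching a LoRA adapter to a causal LM; the base model (gpt2) and target_modules are illustrative and model-dependent.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection; varies per model
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```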
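
For the DeepSpeed entry, a sketch of engine initialization with an inline ZeRO stage-2 config; all values are illustrative, and real runs are normally started with the deepspeed launcher.

```python
import torch
import deepspeed

model = torch.nn.Linear(10, 2)
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 2},  # partition optimizer state and gradients
}

# initialize() returns an engine that owns backward() and step().
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
# Per-step pattern: loss = engine(batch); engine.backward(loss); engine.step()
```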
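
The "k-bit quantization for PyTorch" entry matches bitsandbytes' tagline. Assuming that library, a sketch of its drop-in 8-bit Adam optimizer; CUDA is required.

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()
# Drop-in replacement for torch.optim.Adam; optimizer state is stored in 8 bits.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

loss = model(torch.randn(16, 1024, device="cuda")).pow(2).mean()
loss.backward()
optimizer.step()
```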
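
For the Triton entry, the vector-add kernel pattern from the project's tutorials, as a sketch; block size and problem size are arbitrary.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                            # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                            # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```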
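
The "unsupervised text tokenizer" entry matches SentencePiece's tagline. Assuming that library, a sketch of training and loading a small tokenizer model; corpus.txt and the vocabulary size are placeholders.

```python
import sentencepiece as spm

# Train a tiny subword model; corpus.txt is a placeholder text file.
spm.SentencePieceTrainer.train(input="corpus.txt", model_prefix="toy", vocab_size=400)

sp = spm.SentencePieceProcessor(model_file="toy.model")
print(sp.encode("Hello world", out_type=str))  # list of subword pieces
```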
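
The "high-throughput and memory-efficient inference and serving engine" entry matches vLLM's tagline. Assuming that library, a sketch of offline batch generation; the model name and sampling parameters are illustrative.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any supported model name works here
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The future of distributed training is"], params)
for out in outputs:
    print(out.outputs[0].text)
```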
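
The "State-of-the-Art Text Embeddings" entry matches Sentence-Transformers' tagline. Assuming that library, a sketch of encoding sentences and scoring cosine similarity; the model name is one common choice, not the only one.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = [
    "Megatron-LM trains transformer models at scale.",
    "DeepSpeed makes distributed training efficient.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```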