NVIDIA / Megatron-LM
Ongoing research training transformer models at scale
⭐11,837 · Updated this week
Alternatives and similar repositories for Megatron-LM:
Users interested in Megatron-LM often compare it to the libraries listed below.
- Fast and memory-efficient exact attention ⭐16,370 · Updated last week
- Transformer-related optimization, including BERT, GPT ⭐6,089 · Updated 11 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ⭐8,497 · Updated last week (see the Accelerate sketch after this list)
- Accessible large language models via k-bit quantization for PyTorch. ⭐6,818 · Updated this week
- Train transformer language models with reinforcement learning. ⭐12,591 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ⭐37,533 · Updated this week
- PyTorch extensions for high performance and large scale training. ⭐3,278 · Updated 2 months ago
- Unsupervised text tokenizer for Neural Network-based text generation. ⭐10,707 · Updated 3 weeks ago
- Development repository for the Triton language and compiler ⭐14,931 · Updated this week
- Large Language Model Text Generation Inference ⭐9,905 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain… ⭐9,762 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ⭐9,187 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ⭐17,837 · Updated this week (see the LoRA sketch after this list)
- A framework for few-shot evaluation of language models. ⭐8,337 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ⭐20,950 · Updated 2 weeks ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ⭐4,603 · Updated last year
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ⭐31,179 · Updated 2 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ⭐2,022 · Updated 3 weeks ago
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ⭐13,419 · Updated this week
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ⭐7,237 · Updated last year
- SGLang is a fast serving framework for large language models and vision language models. ⭐12,220 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ⭐2,812 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ⭐41,815 · Updated this week (see the vLLM sketch after this list)
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ⭐8,926 · Updated this week
- Example models using DeepSpeed ⭐6,377 · Updated last week
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ⭐9,509 · Updated this week
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ⭐14,058 · Updated 7 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ⭐10,322 · Updated 9 months ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ⭐11,536 · Updated 3 months ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ⭐6,302 · Updated 3 weeks ago
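A few of the entries above can be tried in a handful of lines. For the 🤗 Accelerate entry, here is a minimal sketch of wrapping a plain PyTorch training loop; the tiny model and random dataset are placeholders chosen purely for illustration:

```python
# Minimal sketch: wrapping a plain PyTorch loop with Hugging Face Accelerate.
# The Linear model and random TensorDataset are illustrative placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Accelerator(mixed_precision="fp16") would enable AMP on supported hardware.
accelerator = Accelerator()

model = torch.nn.Linear(128, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(256, 128), torch.randint(0, 2, (256,)))
loader = DataLoader(dataset, batch_size=32)

# prepare() moves everything to the right device(s) and shards the dataloader
# across processes when the script is launched with `accelerate launch`.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for inputs, labels in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```

Launched with `accelerate launch script.py`, the same script runs single-GPU, multi-GPU, or multi-node without code changes.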
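For the 🤗 PEFT and loralib entries, a hedged LoRA sketch: the base model and `target_modules` below are assumptions for illustration, and the right module names vary by architecture:

```python
# Minimal sketch: attaching LoRA adapters to a causal LM with Hugging Face PEFT.
# "gpt2" and target_modules=["c_attn"] are illustrative choices; pick modules
# that match your own model's attention projections.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # small model for illustration

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights train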
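```

And for the vLLM entry, a minimal offline batched-generation sketch; the model id is a placeholder and any causal LM that vLLM supports would do:

```python
# Minimal sketch: offline batched generation with vLLM.
# "facebook/opt-125m" is a small placeholder model for illustration.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The future of distributed training is"], params)
for out in outputs:
    print(out.outputs[0].text)  # first completion for each prompt
```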