NVIDIA / NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
☆15,022 · Updated this week
Alternatives and similar repositories for NeMo
Users interested in NeMo are comparing it to the libraries listed below.
- Megatron-LM: Ongoing research training transformer models at scale ☆12,835 · Updated this week
- fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,613 · Updated last month
- Accelerate: 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,914 · Updated this week (see the training-loop sketch after this list)
- ESPnet: End-to-End Speech Processing Toolkit ☆9,279 · Updated last week
- FlashAttention: Fast and memory-efficient exact attention ☆18,252 · Updated this week
- FasterTransformer: Transformer related optimization, including BERT, GPT ☆6,231 · Updated last year
- DeepLearningExamples: State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,381 · Updated 11 months ago
- SentencePiece: Unsupervised text tokenizer for Neural Network-based text generation. ☆11,066 · Updated last week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,443 · Updated this week
- Development repository for the Triton language and compiler ☆16,114 · Updated this week
- Large Language Model Text Generation Inference ☆10,311 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆10,953 · Updated this week
- bitsandbytes: Accessible large language models via k-bit quantization for PyTorch. ☆7,212 · Updated this week
- TRL: Train transformer language models with reinforcement learning. ☆14,513 · Updated this week
- CTranslate2: Fast inference engine for Transformer models ☆3,902 · Updated 3 months ago
- unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,486 · Updated last week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆15,055 · Updated 3 months ago (see the encode/decode sketch after this list)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆39,299 · Updated this week
- metaseq: Repo for external large-scale work ☆6,527 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆18,976 · Updated this week (see the LoRA sketch after this list)
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,550 · Updated last year
- SpeechBrain: A PyTorch-based Speech Toolkit ☆10,113 · Updated last week
- Optimum: 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆2,977 · Updated this week
- Tokenizers: 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆9,868 · Updated last week
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆146,733 · Updated this week (see the pipeline sketch after this list)
- pyannote.audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker… ☆7,854 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆15,932 · Updated this week
- lm-evaluation-harness: A framework for few-shot evaluation of language models. ☆9,464 · Updated this week
- GPT-NeoX: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,256 · Updated last week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆51,794 · Updated this week (see the offline-generation sketch after this list)
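
Minimal usage sketches for a few of the libraries above follow. First, the Accelerate training-loop pattern referenced in its entry: prepare() wraps the model, optimizer, and dataloader for whatever device and distributed configuration is active, and accelerator.backward() replaces loss.backward(). The toy linear model and random data here are illustrative assumptions, not part of the library.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # detects device, precision, and distributed setup

# Toy model and data, assumed purely for illustration
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = DataLoader(TensorDataset(torch.randn(32, 4), torch.randint(0, 2, (32,))),
                    batch_size=8)

# prepare() adapts each object to the current device/distributed configuration
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for xb, yb in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(xb), yb)
    accelerator.backward(loss)  # replaces loss.backward(); handles mixed-precision scaling
    optimizer.step()
```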
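The tiktoken entry describes a fast BPE tokeniser; a sketch of its encode/decode round trip, using the published cl100k_base encoding:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a published OpenAI BPE encoding
tokens = enc.encode("NeMo is a scalable generative AI framework.")
print(tokens)              # list of integer token ids
print(enc.decode(tokens))  # round-trips back to the original string
```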
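For 🤗 PEFT, a sketch of attaching LoRA adapters to a small causal LM. The choice of gpt2 as base model, its "c_attn" attention projection as the target module, and the r/lora_alpha values are all assumptions made for illustration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# gpt2 assumed here only as a small, widely available base model
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Attach low-rank adapters to GPT-2's fused attention projection ("c_attn")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                    lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # adapters are a small fraction of total weights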
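For 🤗 Transformers, the one-line pipeline API referenced above. With no explicit model argument the library downloads a default sentiment checkpoint, so treat the exact output as illustrative:

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default model fetched automatically
print(classifier("Distributed training just got much easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```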
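Finally, the vLLM offline-generation sketch referenced in its entry. facebook/opt-125m is assumed only as a small demo model, and running this requires a GPU supported by vLLM:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small demo model, assumed for illustration
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["High-throughput LLM serving works because"], params)
print(outputs[0].outputs[0].text)  # generated continuation for the first prompt
```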