NVIDIA-NeMo / NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
☆16,271 · Updated this week
Alternatives and similar repositories for NeMo
Users who are interested in NeMo are comparing it to the libraries listed below
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆32,020 · Updated 2 months ago
- Ongoing research training transformer models at scale ☆14,602 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,377 · Updated this week
- A PyTorch-based Speech Toolkit ☆10,915 · Updated last week
- Transformer related optimization, including BERT, GPT ☆6,365 · Updated last year
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,630 · Updated last year
- End-to-End Speech Processing Toolkit ☆9,641 · Updated this week
- Development repository for the Triton language and compiler ☆17,861 · Updated this week
- Large Language Model Text Generation Inference ☆10,709 · Updated this week
- Unsupervised text tokenizer for Neural Network-based text generation. ☆11,508 · Updated this week
- Fast and memory-efficient exact attention ☆21,067 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,388 · Updated 7 months ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,131 · Updated this week
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆10,301 · Updated 2 weeks ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,215 · Updated last week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,015 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,815 · Updated this week
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,350 · Updated last week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆16,788 · Updated 2 months ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,184 · Updated this week
- Fast inference engine for Transformer models ☆4,191 · Updated last week
- Train transformer language models with reinforcement learning. ☆16,638 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,385 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆65,334 · Updated this week
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. ☆30,561 · Updated last week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,875 · Updated 5 months ago
- Repo for external large-scale work ☆6,547 · Updated last year
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆9,312 · Updated 3 weeks ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,786 · Updated last year
- Code for the paper "Language Models are Unsupervised Multitask Learners" ☆24,472 · Updated last year