NVIDIA / NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
☆13,613 · Updated this week
Alternatives and similar repositories for NeMo:
Users interested in NeMo are comparing it to the libraries listed below.
- huggingface/accelerate: 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,608 · Updated this week (see the sketch after this list)
- huggingface/peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆18,082 · Updated this week (see the sketch after this list)
- NVIDIA/Megatron-LM: Ongoing research training transformer models at scale ☆12,032 · Updated this week
- EleutherAI/gpt-neox: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,154 · Updated last week
- facebookresearch/fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,265 · Updated 3 months ago
- Dao-AILab/flash-attention: Fast and memory-efficient exact attention ☆16,835 · Updated this week
- huggingface/tokenizers: 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆9,572 · Updated 3 weeks ago
- bitsandbytes-foundation/bitsandbytes: Accessible large language models via k-bit quantization for PyTorch. ☆6,901 · Updated this week (see the sketch after this list)
- pyannote/pyannote-audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker… ☆7,255 · Updated this week
- espnet/espnet: End-to-End Speech Processing Toolkit ☆8,972 · Updated last week
- NVIDIA/FasterTransformer: Transformer related optimization, including BERT, GPT ☆6,116 · Updated last year
- huggingface/trl: Train transformer language models with reinforcement learning. ☆13,166 · Updated this week
- triton-inference-server/server: The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆9,037 · Updated this week
- microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆37,834 · Updated this week
- artidoro/qlora: QLoRA: Efficient Finetuning of Quantized LLMs ☆10,366 · Updated 10 months ago
- salesforce/LAVIS: LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,440 · Updated 4 months ago
- microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,052 · Updated last month
- mlfoundations/open_clip: An open source implementation of CLIP. ☆11,481 · Updated last week
- gradio-app/gradio: Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ☆37,426 · Updated this week (see the sketch after this list)
- facebookresearch/fairscale: PyTorch extensions for high performance and large scale training. ☆3,293 · Updated this week
- google/sentencepiece: Unsupervised text tokenizer for Neural Network-based text generation. ☆10,771 · Updated last week
- huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆142,871 · Updated this week
- Lightning-AI/pytorch-lightning: Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes. ☆29,263 · Updated this week
- microsoft/LoRA: Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆11,699 · Updated 3 months ago
- facebookresearch/metaseq: Repo for external large-scale work ☆6,522 · Updated 11 months ago
- NVIDIA/DeepLearningExamples: State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,146 · Updated 8 months ago
- OpenNMT/CTranslate2: Fast inference engine for Transformer models ☆3,728 · Updated this week
- UKPLab/sentence-transformers: State-of-the-Art Text Embeddings ☆16,415 · Updated last week (see the sketch after this list)
- google-research/text-to-text-transfer-transformer: Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,325 · Updated last month
- openai/CLIP: CLIP (Contrastive Language-Image Pretraining), predict the most relevant text snippet given an image ☆28,419 · Updated 8 months ago (see the sketch after this list)
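Short usage sketches for a few of the libraries above follow. Each is a minimal illustration under stated assumptions (placeholder model ids, file paths, and hyperparameters), not a definitive recipe.

accelerate: a minimal training step. The toy model, optimizer, and dataloader are placeholders; `Accelerator.prepare()` and `accelerator.backward()` are the library's entry points, and the same loop runs unchanged on CPU, a single GPU, or a distributed launch.

```python
# Minimal sketch of a training step with huggingface/accelerate.
# The model, optimizer, and dataloader are toy placeholders.
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device/distributed config automatically

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
    batch_size=8,
)

# prepare() moves everything to the right device(s) and wraps for DDP/AMP.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```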
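peft: wrapping a causal LM in a LoRA adapter. `LoraConfig` and `get_peft_model` are the library's core API; the rank, alpha, and target module names below are illustrative choices for GPT-2, not recommended settings.

```python
# Sketch: add a LoRA adapter to a small causal LM with huggingface/peft.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # small placeholder model

config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```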
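bitsandbytes: its k-bit quantization is commonly driven through transformers' `BitsAndBytesConfig` rather than called directly; a sketch assuming a CUDA GPU and the transformers integration are available, with "gpt2" again a placeholder model id.

```python
# Sketch: load a causal LM in 4-bit with bitsandbytes (requires a CUDA GPU).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, popularized by QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,  # store in 4-bit, compute in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "gpt2",                          # placeholder model id
    quantization_config=bnb_config,  # weights quantized at load time
    device_map="auto",
)
```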
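gradio: the smallest possible app, wrapping one Python function in a web UI; `greet` is a made-up demo function.

```python
# Sketch: a one-function Gradio demo.
import gradio as gr

def greet(name: str) -> str:
    return f"Hello, {name}!"

# Interface maps the declared input/output types to UI components.
demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()  # serves locally; launch(share=True) creates a temporary public link
```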
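sentence-transformers: encoding sentences to dense vectors and scoring their similarity. `all-MiniLM-L6-v2` is one commonly used small checkpoint, chosen here only for illustration.

```python
# Sketch: embed sentences and compare them with sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([
    "NeMo is a framework for speech and language models.",
    "ESPnet is an end-to-end speech processing toolkit.",
])
print(embeddings.shape)                            # (2, 384) for this checkpoint
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity score
```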
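openai/CLIP: zero-shot matching of an image against candidate captions. `cat.jpg` is a placeholder path, and the package installs from the GitHub repo rather than PyPI.

```python
# Sketch: zero-shot image/text matching with openai/CLIP.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)  # placeholder image
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # higher probability on the caption that matches the image
```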