NVIDIA-NeMo / NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
☆15,732 · Updated this week
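As a quick taste of the framework itself, below is a minimal sketch of loading a pretrained NeMo ASR checkpoint and transcribing an audio file. This is an illustrative sketch, not the canonical quickstart: the `[asr]` package extra, the checkpoint name `stt_en_conformer_ctc_large`, and the exact `transcribe()` signature and return type are assumptions that vary across NeMo releases.

```python
# Minimal sketch: pretrained speech-to-text with NeMo.
# Assumes: pip install "nemo_toolkit[asr]"
import nemo.collections.asr as nemo_asr

# Download an example pretrained checkpoint (name is an assumption;
# available models differ between NeMo versions).
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="stt_en_conformer_ctc_large"
)

# Transcribe a local 16 kHz mono WAV file ("audio.wav" is a placeholder path).
# Depending on the NeMo release, this returns plain strings or Hypothesis objects.
transcripts = asr_model.transcribe(["audio.wav"])
print(transcripts[0])
```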
Alternatives and similar repositories for NeMo
Users interested in NeMo are comparing it to the libraries listed below.
- Megatron-LM: Ongoing research training transformer models at scale ☆13,602 · Updated this week
- fairseq: Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,810 · Updated 2 weeks ago
- SpeechBrain: A PyTorch-based Speech Toolkit ☆10,472 · Updated this week
- ESPnet: End-to-End Speech Processing Toolkit ☆9,464 · Updated last week
- DeepLearningExamples: State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,493 · Updated last year
- accelerate: 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,149 · Updated this week
- Triton Inference Server: An optimized cloud and edge inferencing solution. ☆9,786 · Updated last week
- JAX: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆33,486 · Updated this week
- CTranslate2: Fast inference engine for Transformer models ☆4,021 · Updated 5 months ago
- TensorRT-LLM: Provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆11,625 · Updated this week
- TRL: Train transformer language models with reinforcement learning. ☆15,601 · Updated this week
- DeepSpeed: A deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆40,125 · Updated this week
- FairScale: PyTorch extensions for high performance and large scale training. ☆3,369 · Updated 4 months ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,626 · Updated this week
- FlashAttention: Fast and memory-efficient exact attention ☆19,585 · Updated this week
- text-generation-inference: Large Language Model Text Generation Inference ☆10,515 · Updated this week
- FasterTransformer: Transformer related optimization, including BERT, GPT ☆6,305 · Updated last year
- GPT-NeoX: An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,303 · Updated last week
- pyannote-audio: Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker… ☆8,314 · Updated this week
- Hydra: A framework for elegantly configuring complex applications ☆9,749 · Updated 2 weeks ago
- lm-evaluation-harness: A framework for few-shot evaluation of language models. ☆10,161 · Updated last week
- tiktoken: A fast BPE tokeniser for use with OpenAI's models. ☆15,979 · Updated 3 weeks ago
- Optimum: 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,085 · Updated last week
- UniLM: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,731 · Updated 2 months ago
- ONNX: Open standard for machine learning interoperability ☆19,604 · Updated this week
- Flax: A neural network library for JAX that is designed for flexibility. ☆6,806 · Updated this week
- bitsandbytes: Accessible large language models via k-bit quantization for PyTorch. ☆7,584 · Updated last week
- Triton: Development repository for the Triton language and compiler ☆16,919 · Updated this week
- StreamingLLM: [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,046 · Updated last year
- PyTorch Lightning: Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes. ☆30,142 · Updated this week