NVIDIA / NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal AI, and Speech AI (Automatic Speech Recognition and Text-to-Speech).
☆14,800 · Updated this week
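To give a sense of what the framework covers on the speech side, here is a minimal sketch of loading a pretrained ASR checkpoint and transcribing an audio file. The checkpoint name and audio path are illustrative placeholders, and the exact API surface can differ between NeMo releases.

```python
# Minimal NeMo ASR sketch (assumes `pip install nemo_toolkit[asr]` and a local WAV file).
# The checkpoint name and file path below are illustrative placeholders.
import nemo.collections.asr as nemo_asr

# Download a pretrained English Conformer-CTC model from the model catalog.
asr_model = nemo_asr.models.ASRModel.from_pretrained("stt_en_conformer_ctc_large")

# Transcribe a list of audio files; one result is returned per file.
transcripts = asr_model.transcribe(["sample.wav"])
print(transcripts[0])
```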
Alternatives and similar repositories for NeMo
Users interested in NeMo are comparing it to the libraries listed below.
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,543 · Updated last week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,420 · Updated 2 weeks ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,839 · Updated this week
- Ongoing research training transformer models at scale ☆12,600 · Updated this week
- Fast and memory-efficient exact attention ☆17,846 · Updated this week
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆9,817 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆38,997 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,142 · Updated this week
- Large Language Model Text Generation Inference ☆10,236 · Updated this week
- Unsupervised text tokenizer for Neural Network-based text generation. ☆10,994 · Updated 2 months ago
- Train transformer language models with reinforcement learning. ☆14,193 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆18,774 · Updated last week
- A PyTorch-based Speech Toolkit ☆9,980 · Updated last week
- Tensor library for machine learning ☆12,697 · Updated last week
- End-to-End Speech Processing Toolkit ☆9,205 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆9,591 · Updated last week
- Fast inference engine for Transformer models ☆3,856 · Updated 2 months ago
- Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes. ☆29,644 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,573 · Updated last year
- Welcome to the Llama Cookbook! This is your go to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… ☆17,490 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,331 · Updated last month
- State-of-the-Art Text Embeddings ☆16,947 · Updated last week
- 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools ☆20,270 · Updated last week
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆22,108 · Updated 10 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,500 · Updated last year
- 💡 All-in-one open-source AI framework for semantic search, LLM orchestration and language model workflows ☆11,110 · Updated last week
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,228 · Updated last week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆16,917 · Updated this week
- Transformer related optimization, including BERT, GPT ☆6,211 · Updated last year
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ☆38,620 · Updated this week