NVIDIA / NeMo-Framework-Launcher
Provides end-to-end model development pipelines for LLMs and multimodal models that can be launched on-premises or in cloud-native environments.
☆505 · Updated last month
Alternatives and similar repositories for NeMo-Framework-Launcher
Users interested in NeMo-Framework-Launcher are comparing it to the libraries listed below.
- Scalable toolkit for efficient model alignment ☆807 · Updated this week
- Pipeline Parallelism for PyTorch ☆767 · Updated 9 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… (see the FP8 sketch after this list) ☆2,450 · Updated this week
- The Triton TensorRT-LLM Backend ☆845 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆602 · Updated 8 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆301 · Updated last week
- A tool to configure, launch and manage your machine learning experiments. ☆153 · Updated this week
- Large Context Attention ☆714 · Updated 4 months ago
- ☆261 · Updated 3 weeks ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed (see the MII sketch after this list). ☆2,014 · Updated 2 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference (see the estimate sketch after this list) ☆424 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (see the packing sketch after this list) ☆831 · Updated 9 months ago
- Distributed trainer for LLMs ☆575 · Updated last year
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆478 · Updated this week
- Serving multiple LoRA-finetuned LLMs as one (see the multi-LoRA sketch after this list) ☆1,062 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,391 · Updated last year
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆335 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆815 · Updated 3 weeks ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (see the verification sketch after this list) ☆1,249 · Updated 3 months ago
- Fast Inference Solutions for BLOOM ☆564 · Updated 7 months ago
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆186 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (see the tensor-parallel sketch after this list) ☆2,081 · Updated 2 months ago
- Scalable data pre-processing and curation toolkit for LLMs ☆930 · Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆127 · Updated last month
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (see the PyTriton sketch after this list). ☆796 · Updated 3 months ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supports DeepSeek FP8/FP4 (see the gating sketch after this list) ☆829 · Updated this week
- ☆411 · Updated last year
- Minimalistic large language model 3D-parallelism training (see the rank-layout sketch after this list) ☆1,898 · Updated this week
- Common source, scripts and utilities for creating Triton backends. ☆324 · Updated 3 weeks ago
- NVIDIA Inference Xfer Library (NIXL) ☆365 · Updated this week
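
The FP8 sketch referenced in the Transformer Engine item above. It follows the pattern from the library's quickstart; the exact recipe arguments and module paths are best-effort from memory and may vary across versions.

```python
# Minimal Transformer Engine FP8 sketch (quickstart-style; version-dependent).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe: HYBRID uses E4M3 forward, E5M2 backward.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

model = te.Linear(4096, 4096, bias=True).cuda()
inp = torch.randn(16, 4096, device="cuda")

# Matmuls inside this context run in FP8 on supported GPUs (Hopper or newer).
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)
out.sum().backward()
```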
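The MII sketch referenced above, mirroring the pipeline entry point shown in the DeepSpeed-MII README; the model name is only an example and the call signature may differ by release.

```python
# Non-persistent MII pipeline: load a model and generate in-process.
import mii

pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")  # example model, not prescribed
responses = pipe(["DeepSpeed is", "Low latency serving means"], max_new_tokens=64)
for r in responses:
    print(r)
```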
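The estimate sketch referenced in the latency-and-memory item above. These are generic back-of-envelope formulas of the kind such analyzers automate, not the tool's API; all constants are illustrative.

```python
# Rough transformer sizing: weights, KV cache, and a bandwidth-bound
# lower bound on decode latency (own formulas, illustrative only).
def estimate(n_layers=32, d_model=4096, n_kv_heads=32, head_dim=128,
             vocab=32000, bytes_per_param=2, seq_len=4096, batch=1,
             mem_bw_gbs=2000):
    params = 12 * n_layers * d_model**2 + vocab * d_model   # dense blocks + embeddings
    weight_gb = params * bytes_per_param / 1e9
    kv_gb = (2 * n_layers * n_kv_heads * head_dim           # K and V per token
             * bytes_per_param * seq_len * batch) / 1e9
    # Each decoded token reads every weight once: latency >= bytes / bandwidth.
    ms_per_token = weight_gb / mem_bw_gbs * 1e3
    return weight_gb, kv_gb, ms_per_token

print(estimate())  # ~13 GB weights, ~2.1 GB KV at 4k context, ~6.6 ms/token
```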
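The packing sketch referenced in the FP16xINT4 item above. It illustrates the W4A16 idea rather than the kernel itself: two 4-bit weights fit in one byte, so weight traffic drops roughly 4x versus FP16, which is where the near-ideal speedup at small batch sizes comes from.

```python
# Pack/unpack 4-bit weights into bytes (illustration, not the CUDA kernel).
import numpy as np

def pack_int4(q):
    """Pack an even-length array of values in [0, 15], two per byte."""
    q = np.asarray(q, dtype=np.uint8)
    return q[0::2] | (q[1::2] << 4)

def unpack_int4(p):
    out = np.empty(p.size * 2, dtype=np.uint8)
    out[0::2] = p & 0x0F
    out[1::2] = p >> 4
    return out

w = np.random.randint(0, 16, size=1024)
assert np.array_equal(unpack_int4(pack_int4(w)), w)
print(w.size * 2, "bytes as FP16 ->", pack_int4(w).size, "bytes as INT4")  # 2048 -> 512
```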
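The multi-LoRA sketch referenced above, showing the math such servers exploit (own illustration, not the project's API): every request shares one base matmul, and each adapter adds only a cheap low-rank correction, so many finetunes can be served from one copy of the base weights.

```python
# One shared base weight, many low-rank adapters: y = x @ W + s * (x @ A) @ B.
import torch

d, r = 512, 8
W = torch.randn(d, d)                       # frozen base weight, shared by all requests
adapters = {name: (torch.randn(d, r) * 0.01, torch.randn(r, d) * 0.01)
            for name in ("adapter_a", "adapter_b")}   # hypothetical adapter names

def forward(x, name, scale=1.0):
    A, B = adapters[name]
    return x @ W + scale * (x @ A) @ B      # base matmul + low-rank update

x = torch.randn(4, d)
ya, yb = forward(x, "adapter_a"), forward(x, "adapter_b")
print((ya - yb).abs().max())                # different outputs, same base W in memory
```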
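The tensor-parallel sketch referenced in the Megatron-style item above: a single-process illustration of a column-parallel linear layer, where each rank holds one column shard of the weight and the concatenated shard outputs reproduce the unsharded result.

```python
# Column-parallel linear layer, emulated on one process.
import torch

tp = 4                                      # tensor-parallel degree
x = torch.randn(2, 8)                       # activations, replicated on every rank
W = torch.randn(8, 16)                      # full weight, kept here for checking only

shards = W.chunk(tp, dim=1)                 # each "rank" stores one column shard
y_parallel = torch.cat([x @ s for s in shards], dim=1)  # stands in for an all-gather

assert torch.allclose(y_parallel, x @ W, atol=1e-5)
```

A row-parallel layer is the mirror image: the weight is split along the input dimension and the partial outputs are summed (an all-reduce) instead of concatenated.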
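The verification sketch referenced in the lookahead-decoding item above. The trick the paper builds on is that several guessed future tokens can be checked against the model's own greedy choices in one pass; this toy version uses a deterministic stand-in "model" and is my own illustration, not the paper's algorithm.

```python
# Guess-and-verify: accept the longest prefix of guesses matching greedy decoding.
def greedy_next(seq):                       # stand-in for an LLM forward pass
    return (seq[-1] * 31 + 7) % 100         # deterministic toy "model"

def verify(seq, guesses):
    accepted = []
    for g in guesses:
        target = greedy_next(seq + accepted)
        if g != target:
            accepted.append(target)         # keep the corrected token, then stop
            break
        accepted.append(g)                  # guess confirmed, keep going
    return accepted

seq = [42]
guesses = [greedy_next(seq), 99, 1]         # first guess correct, rest wrong
print(verify(seq, guesses))                 # emits 2 tokens from one verification step
```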
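The PyTriton sketch referenced above, following the bind-and-serve pattern from its README; the class and decorator names are from memory and may differ slightly between versions.

```python
# Serve a plain Python function through Triton without writing a backend.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton

@batch
def infer_fn(INPUT_1):
    # Receives a batched numpy array; returns outputs keyed by name.
    return {"OUTPUT_1": INPUT_1 * 2}

with Triton() as triton:
    triton.bind(
        model_name="doubler",
        infer_func=infer_fn,
        inputs=[Tensor(name="INPUT_1", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="OUTPUT_1", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=128),
    )
    triton.serve()  # blocks; clients use the standard Triton HTTP/gRPC endpoints
```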
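The gating sketch referenced in the Tutel item above: a minimal top-k router of the kind MoE libraries optimize (own sketch, not Tutel's API). Each token is sent to its k highest-scoring experts and the kept probabilities are renormalized.

```python
# Top-k expert routing for a mixture-of-experts layer.
import torch
import torch.nn.functional as F

def topk_route(x, gate_w, k=2):
    logits = x @ gate_w                          # (tokens, n_experts)
    probs = F.softmax(logits, dim=-1)
    weights, experts = probs.topk(k, dim=-1)     # per-token expert choices
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize kept mass
    return weights, experts

x = torch.randn(16, 64)                          # 16 tokens, hidden size 64
gate_w = torch.randn(64, 8)                      # router weights for 8 experts
w, e = topk_route(x, gate_w)
print(e[0], w[0])                                # e.g. experts [3, 5], weights sum to 1
```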
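The rank-layout sketch referenced in the 3D-parallelism item above: the arithmetic such trainers use to map a global rank onto (data, pipeline, tensor) coordinates. Keeping tensor-parallel peers innermost so they share the fastest interconnect is a common convention and an assumption here, not necessarily the project's exact layout.

```python
# Decompose a global rank into (data, pipeline, tensor) parallel coordinates.
def coords(rank, tp=2, pp=4, dp=2):
    assert rank < tp * pp * dp
    return rank // (tp * pp), (rank // tp) % pp, rank % tp   # (dp, pp, tp) indices

for r in range(16):                       # world size = 2 * 4 * 2 = 16
    print(r, coords(r))                   # tp peers are adjacent ranks: (0,1), (2,3), ...
```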