NVIDIA / NeMo-Framework-Launcher
Provides end-to-end model development pipelines for LLMs and multimodal models that can be launched on-prem or in cloud-native environments.
⭐ 509 · Updated 7 months ago
Alternatives and similar repositories for NeMo-Framework-Launcher
Users interested in NeMo-Framework-Launcher compare it to the libraries listed below.
- A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… · ⭐ 320 · Updated 2 months ago
- The Triton TensorRT-LLM Backend · ⭐ 910 · Updated last week
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) · ⭐ 201 · Updated this week
- Pipeline Parallelism for PyTorch · ⭐ 784 · Updated last year
- Scalable toolkit for efficient model alignment · ⭐ 847 · Updated 2 months ago
- Microsoft Automatic Mixed Precision Library (see the MS-AMP sketch after this list) · ⭐ 628 · Updated this week
- Large Context Attention · ⭐ 753 · Updated last month
- Fast Inference Solutions for BLOOM · ⭐ 564 · Updated last year
- A tool to configure, launch and manage your machine learning experiments. · ⭐ 209 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server (see the Triton client sketch after this list) · ⭐ 805 · Updated 3 weeks ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ⭐ 1,307 · Updated 9 months ago
- Serving multiple LoRA-finetuned LLMs as one (see the batched-LoRA sketch after this list) · ⭐ 1,122 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the vLLM-style sketch after this list) · ⭐ 85 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs · ⭐ 921 · Updated last month
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. · ⭐ 958 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed (see the MII pipeline sketch after this list). · ⭐ 2,080 · Updated 5 months ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator · ⭐ 169 · Updated 2 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (see the FP8 autocast sketch after this list) · ⭐ 2,971 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ⭐ 267 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ⭐ 710 · Updated last year
- GPTQ inference Triton kernel · ⭐ 316 · Updated 2 years ago
- Easy and Efficient Quantization for Transformers · ⭐ 203 · Updated 5 months ago
- Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of the Triton Inference Serv… · ⭐ 500 · Updated this week
- An open collection of methodologies to help with successful training of large language models. · ⭐ 541 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2 · ⭐ 1,426 · Updated last year
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (see the PyTriton sketch after this list). · ⭐ 830 · Updated 3 months ago
- Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. · ⭐ 216 · Updated last week
- Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… · ⭐ 271 · Updated last week
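The Microsoft Automatic Mixed Precision Library entry (MS-AMP) wraps an existing model and optimizer for FP8 training. A minimal sketch of the initialize pattern shown in its README, assuming MS-AMP is installed on a CUDA machine; the layer sizes and the `opt_level` choice here are illustrative only:

```python
# Sketch: wrapping a model and optimizer with MS-AMP mixed precision.
# Per the MS-AMP README, msamp.initialize returns FP8-enabled wrappers.
import torch
import msamp

model = torch.nn.Linear(1024, 1024).cuda()       # toy model, illustrative
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# "O2" is one of the documented optimization levels (weights + optimizer state).
model, optimizer = msamp.initialize(model, optimizer, opt_level="O2")

x = torch.randn(8, 1024, device="cuda")
loss = model(x).float().pow(2).mean()            # dummy loss for the sketch
loss.backward()
optimizer.step()
```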
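The Triton Inference Server tutorials entry collects end-to-end serving examples; the common denominator in those examples is a client call like the one below, made with the `tritonclient` package. The model name and tensor names (`my_model`, `INPUT0`, `OUTPUT0`) are hypothetical placeholders for whatever a given server actually exposes:

```python
# Sketch: minimal Triton Inference Server HTTP client call.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request tensor; shape and dtype must match the model config.
data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```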
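The "serving multiple LoRA-finetuned LLMs as one" entry describes S-LoRA-style serving: many adapters share one base model, and each request's low-rank delta is gathered and applied inside a single batched forward pass. This is a conceptual sketch of that idea only, not the repo's actual kernels; all names and shapes are my own illustration, and the usual alpha/r scaling is omitted for brevity:

```python
# Conceptual sketch: one batched forward serving several LoRA adapters.
import torch

def batched_lora_linear(x, W, A, B, adapter_ids):
    """x: (batch, d_in); W: (d_in, d_out);
    A: (n_adapters, d_in, r); B: (n_adapters, r, d_out);
    adapter_ids: (batch,) adapter index used by each request."""
    base = x @ W                                          # shared base projection
    Ax = torch.einsum("bi,bir->br", x, A[adapter_ids])    # per-request x @ A_i
    delta = torch.einsum("br,bro->bo", Ax, B[adapter_ids])  # (x @ A_i) @ B_i
    return base + delta

# Toy usage: 3 adapters of rank 8, 4 requests mixing them in one batch.
x = torch.randn(4, 32)
W = torch.randn(32, 64)
A = torch.randn(3, 32, 8)
B = torch.randn(3, 8, 64)
ids = torch.tensor([0, 2, 1, 0])
print(batched_lora_linear(x, W, A, B, ids).shape)  # torch.Size([4, 64])
```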
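Two entries carry vLLM's tagline ("a high-throughput and memory-efficient inference and serving engine for LLMs"). Assuming they refer to vLLM or a compatible fork, offline batched generation looks like this; the model id is only an example:

```python
# Sketch: offline batched generation with vLLM (or a vLLM-compatible fork).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any HF model id; example only
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The key to high-throughput LLM serving is"], params)
for out in outputs:
    print(out.outputs[0].text)
```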
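For the DeepSpeed-MII entry, the project's pipeline API is a thin wrapper around a persistent inference engine. A minimal sketch, assuming a recent MII release; the model id is an example:

```python
# Sketch: low-latency local inference with DeepSpeed-MII's pipeline API.
import mii

pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")  # example model id
responses = pipe(["DeepSpeed is"], max_new_tokens=64)
print(responses[0].generated_text)
```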
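The Transformer Engine entry (description truncated mid-word above) provides drop-in PyTorch modules whose matmuls run in FP8 under an autocast context. A minimal sketch, assuming an FP8-capable GPU such as Hopper; the shapes and recipe settings are illustrative:

```python
# Sketch: FP8 matmul via Transformer Engine modules and fp8_autocast.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

layer = te.Linear(768, 768, bias=True).cuda()        # drop-in nn.Linear replacement
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

x = torch.randn(16, 768, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)                                     # forward runs in FP8
y.sum().backward()                                   # backward also uses the recipe
```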
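Finally, the PyTriton entry: it binds a plain Python function to a Triton endpoint without hand-writing a model repository. A minimal sketch following PyTriton's documented bind/serve pattern; the model name, tensor names, and toy compute are illustrative:

```python
# Sketch: serving a plain Python function through Triton with PyTriton.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton

@batch
def infer_fn(INPUT_1):
    # Inputs arrive as batched numpy arrays keyed by tensor name.
    return {"OUTPUT_1": INPUT_1 * 2.0}  # toy compute; a real model goes here

with Triton() as triton:
    triton.bind(
        model_name="Doubler",
        infer_func=infer_fn,
        inputs=[Tensor(name="INPUT_1", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="OUTPUT_1", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=64),
    )
    triton.serve()  # blocks, exposing standard Triton HTTP/gRPC endpoints
```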