NVIDIA / NeMo-Framework-Launcher
Provides end-to-end model development pipelines for LLMs and multimodal models that can be launched on-prem or in cloud-native environments.
☆508 · Updated 6 months ago
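The launcher is driven by Hydra-style config overrides. Below is a minimal sketch of kicking off a training stage, assuming the repo's main.py entry point; the stage list, model config name (gpt3/5b), and paths are illustrative placeholders rather than verified values.

```python
# Minimal sketch of launching a training stage with NeMo-Framework-Launcher.
# Assumes the launcher's Hydra-based entry point (main.py); the config name
# (gpt3/5b) and paths below are illustrative placeholders, not verified values.
import subprocess

subprocess.run(
    [
        "python3", "main.py",
        "stages=[training]",            # which pipeline stages to run
        "training=gpt3/5b",             # hypothetical model config
        "launcher_scripts_path=/path/to/launcher_scripts",
        "base_results_dir=/path/to/results",
    ],
    check=True,
)
```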
Alternatives and similar repositories for NeMo-Framework-Launcher
Users interested in NeMo-Framework-Launcher are comparing it to the libraries listed below.
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆318 · Updated last month
- The Triton TensorRT-LLM Backend ☆903 · Updated this week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆199 · Updated this week
- Scalable toolkit for efficient model alignment ☆841 · Updated 2 weeks ago
- Pipeline Parallelism for PyTorch ☆780 · Updated last year
- Microsoft Automatic Mixed Precision Library (see the mixed-precision sketch after this list) ☆626 · Updated last year
- ☆302 · Updated this week
- Large Context Attention ☆746 · Updated 2 weeks ago
- A tool to configure, launch and manage your machine learning experiments. ☆198 · Updated this week
- ☆413 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the vLLM-style serving sketch after this list) ☆266 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,288 · Updated 7 months ago
- Fast Inference Solutions for BLOOM ☆565 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆909 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆916 · Updated last year
- Serving multiple LoRA fine-tuned LLMs as one (see the multi-adapter LoRA sketch after this list) ☆1,106 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… (see the FP8 sketch after this list) ☆2,834 · Updated this week
- ☆121 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆83 · Updated this week
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,837 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Easy and Efficient Quantization for Transformers ☆202 · Updated 4 months ago
- GPTQ inference Triton kernel ☆311 · Updated 2 years ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆385 · Updated 4 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆460 · Updated 6 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆495 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,422 · Updated last year
- Zero Bubble Pipeline Parallelism ☆433 · Updated 5 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,070 · Updated 3 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆704 · Updated last year
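For context on the mixed-precision entry above: MS-AMP targets FP8, but it builds on the same autocast-plus-loss-scaling pattern as stock PyTorch AMP. A minimal sketch of that standard pattern (this is the plain PyTorch API, not MS-AMP's own interface):

```python
# Standard PyTorch automatic mixed precision (AMP) training step.
# Stock torch.cuda.amp, shown for context; FP8 libraries like MS-AMP
# build on the same autocast / loss-scaling idea.
import torch

model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()   # rescales loss to avoid FP16 underflow

x = torch.randn(8, 512, device="cuda")
with torch.cuda.amp.autocast(dtype=torch.float16):  # forward runs in FP16
    loss = model(x).float().pow(2).mean()

scaler.scale(loss).backward()  # backward on the scaled loss
scaler.step(optimizer)         # unscales gradients, then steps
scaler.update()                # adjusts the scale factor for the next step
```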
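Two of the entries above carry vLLM's project description ("a high-throughput and memory-efficient inference and serving engine for LLMs"). vLLM's offline Python API is compact; a minimal sketch, with a placeholder model name:

```python
# Offline batched generation with vLLM's Python API.
# The model name is a placeholder; any HF-format causal LM works.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder model
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The key idea behind PagedAttention is"], params)
for out in outputs:
    print(out.outputs[0].text)
```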
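The multi-LoRA serving entry batches requests for many adapters against one shared base model. A dedicated engine fuses adapters into batched kernels; the sketch below only illustrates the adapter-switching idea using Hugging Face PEFT (adapter paths are hypothetical placeholders):

```python
# The multi-adapter idea behind LoRA-serving engines, sketched with HF PEFT.
# A real serving engine batches across adapters within one forward pass;
# PEFT only switches the active adapter between calls.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
model = PeftModel.from_pretrained(base, "path/to/adapter_a", adapter_name="a")
model.load_adapter("path/to/adapter_b", adapter_name="b")

model.set_adapter("a")  # route the next request through adapter "a"
# ... generate for tenant A ...
model.set_adapter("b")  # switch to adapter "b" for the next tenant
```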
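Finally, for the Transformer Engine entry: FP8 execution is scoped with an autocast context around TE's drop-in modules. A minimal sketch, assuming a Hopper/Ada-class GPU and TE's documented fp8_autocast/DelayedScaling API (layer sizes are arbitrary):

```python
# FP8 forward pass with NVIDIA Transformer Engine (Hopper/Ada GPUs).
# te.Linear and te.fp8_autocast are TE's documented building blocks;
# the layer sizes here are arbitrary.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)
layer = te.Linear(768, 768, bias=True).cuda()
x = torch.randn(16, 768, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):  # FP8 GEMMs inside
    y = layer(x)
```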