triton-inference-server / server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
☆9,157 · Updated this week
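For orientation, a minimal client-side sketch of querying a running Triton server over HTTP with the `tritonclient` Python package is shown below. The model name (`resnet50`) and tensor names (`input__0`, `output__0`) are placeholders that depend on the model repository actually being served.

```python
# Minimal sketch: send an inference request to a running Triton server
# over HTTP using the tritonclient package. The model and tensor names
# below are assumptions; adjust them to the deployed model's config.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy FP32 input batch matching the model's expected shape.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("input__0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

out = httpclient.InferRequestedOutput("output__0")
result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(result.as_numpy("output__0").shape)
```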
Alternatives and similar repositories for server:
Users interested in server are comparing it to the libraries listed below.
- Transformer related optimization, including BERT, GPT ☆6,147 · Updated last year
- Ongoing research training transformer models at scale ☆12,261 · Updated this week
- Serve, optimize and scale PyTorch models in production ☆4,317 · Updated this week
- Development repository for the Triton language and compiler ☆15,447 · Updated this week
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,543 · Updated this week
- Fast and memory-efficient exact attention ☆17,192 · Updated last week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,742 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,972 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,673 · Updated this week
- Open standard for machine learning interoperability ☆18,895 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,309 · Updated last week
- A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Auto… ☆13,859 · Updated this week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,645 · Updated 3 weeks ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆18,274 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆2,874 · Updated last week
- The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more! ☆7,669 · Updated last week
- Train transformer language models with reinforcement learning. ☆13,559 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆9,435 · Updated last week
- An easy-to-use PyTorch to TensorRT converter ☆4,729 · Updated 8 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,393 · Updated this week
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆8,892 · Updated last week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆16,455 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,381 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,630 · Updated last month
- A library for efficient similarity search and clustering of dense vectors. ☆34,661 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server ☆692 · Updated 3 weeks ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆38,206 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆13,976 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,065 · Updated 2 months ago
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆12,259 · Updated this week