triton-inference-server / server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
☆9,443 · Updated this week
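For context, the server exposes standard HTTP/gRPC inference endpoints, and the official `tritonclient` Python package can be used to send requests to it. Below is a minimal client-side sketch, assuming a model named `resnet50` is already loaded and that its configuration uses tensor names `INPUT0` and `OUTPUT0`; these names are placeholders, not values taken from this listing.

```python
# Minimal sketch of a Triton HTTP inference request using the tritonclient
# Python package. Model and tensor names are placeholders; they must match
# the deployed model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the input tensor (shape and dtype must match the model config).
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Request a specific output tensor and run inference.
out = httpclient.InferRequestedOutput("OUTPUT0")
result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0").shape)
```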
Alternatives and similar repositories for server
Users interested in server are comparing it to the libraries listed below.
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source compone… ☆11,828 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,795 · Updated this week
- Serve, optimize and scale PyTorch models in production ☆4,339 · Updated this week
- Transformer related optimization, including BERT, GPT ☆6,231 · Updated last year
- Development repository for the Triton language and compiler ☆16,114 · Updated this week
- A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep lear… ☆5,457 · Updated this week
- An easy to use PyTorch to TensorRT converter ☆4,773 · Updated 10 months ago
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆10,953 · Updated this week
- Ongoing research training transformer models at scale ☆12,835 · Updated this week
- ONNX-TensorRT: TensorRT backend for ONNX ☆3,103 · Updated 3 weeks ago
- Simplify your onnx model ☆4,114 · Updated 10 months ago
- Fast and memory-efficient exact attention ☆18,252 · Updated this week
- Open deep learning compiler stack for cpu, gpu and specialized accelerators ☆12,435 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,914 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆15,932 · Updated this week
- Standardized Serverless ML Inference Platform on Kubernetes ☆4,327 · Updated this week
- AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,655 · Updated 3 months ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆2,977 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,337 · Updated 2 months ago
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,716 · Updated this week
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator ☆17,136 · Updated this week
- Optimized primitives for collective multi-GPU communication ☆3,848 · Updated 3 weeks ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,548 · Updated this week
- Large Language Model Text Generation Inference ☆10,311 · Updated this week
- OpenVINO™ is an open source toolkit for optimizing and deploying AI inference ☆8,559 · Updated this week
- Tutorials for creating and using ONNX models ☆3,565 · Updated 11 months ago
- Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. ☆14,531 · Updated last week
- Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX ☆2,441 · Updated last week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,341 · Updated this week
- Open standard for machine learning interoperability (see the export sketch after this list) ☆19,234 · Updated this week
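Several of the entries above (ONNX, tf2onnx, ONNX-TensorRT, onnx-simplifier) revolve around converting models to ONNX, which is also a common route for serving models through ONNX-capable backends. The sketch below is an illustrative PyTorch-to-ONNX export only; the model choice (a torchvision ResNet-18), the output path, and the tensor names are assumptions for the example, not details taken from any repository listed here.

```python
# Illustrative sketch: export a PyTorch model to ONNX so it can be consumed
# by an ONNX-capable runtime. Model, file path, and tensor names are
# placeholder assumptions for this example.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["INPUT0"],
    output_names=["OUTPUT0"],
    dynamic_axes={"INPUT0": {0: "batch"}, "OUTPUT0": {0: "batch"}},
)
```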