mosecorg / mosec
A high-performance ML model serving framework that offers dynamic batching and CPU/GPU pipelines to make full use of your hardware
☆855 · Updated this week
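As context for the list below, here is a minimal sketch of how the dynamic batching mentioned above surfaces in mosec's Python API. It assumes mosec's documented `Server`/`Worker` interface; the `Inference` class, worker count, and batching parameters are illustrative, not taken from this page:

```python
from mosec import Server, Worker


class Inference(Worker):
    # With max_batch_size > 1, mosec groups concurrent requests and hands
    # `forward` a list of decoded inputs; it must return a list of results
    # of the same length.
    def forward(self, data: list) -> list:
        return [{"echo": d} for d in data]  # stand-in for real model inference


if __name__ == "__main__":
    server = Server()
    # Two worker processes; batches of up to 8 requests, waiting at most
    # 10 ms for a batch to fill (the dynamic batching noted above).
    server.append_worker(Inference, num=2, max_batch_size=8, max_wait_time=10)
    server.run()
```

By default the server listens on port 8000 and accepts `POST /inference` with a JSON body.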
Alternatives and similar repositories for mosec
Users interested in mosec are comparing it to the libraries listed below.
- RayLLM - LLMs on Ray (Archived). Read the README for more info. ☆1,263 · Updated 5 months ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆817 · Updated 3 weeks ago
- ☆412 · Updated last year
- Serving multiple LoRA fine-tuned LLMs as one ☆1,086 · Updated last year
- A high-performance inference system for large language models, designed for production environments. ☆463 · Updated last month
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,052 · Updated 2 months ago
- Large-scale model inference. ☆632 · Updated last year
- Model Deployment at Scale on Kubernetes 🦄️ ☆821 · Updated last year
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,852 · Updated last year
- Efficient AI Inference & Serving ☆477 · Updated last year
- The Triton TensorRT-LLM Backend ☆887 · Updated last week
- A Survey of AI startups ☆401 · Updated 2 years ago
- ⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel Pl… ☆2,169 · Updated 10 months ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ☆635 · Updated 3 weeks ago
- Autoscale LLM inference (vLLM, SGLang, LMDeploy) on Kubernetes (and others) ☆273 · Updated last year
- 🏕️ Reproducible development environment ☆2,136 · Updated this week
- LLMPerf is a library for validating and benchmarking LLMs ☆1,000 · Updated 8 months ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,687 · Updated 10 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,583 · Updated last year
- Triton Model Analyzer is a CLI tool that helps you better understand the compute and memory requirements of the Triton Inference Serv… ☆485 · Updated last month
- Bagua Speeds up PyTorch ☆883 · Updated last year
- A Python vector database you just need - no more, no less. ☆631 · Updated last year
- ☆293 · Updated last month
- Fast Inference Solutions for BLOOM ☆565 · Updated 10 months ago
- Inference Llama 2 in one file of pure 🔥 ☆2,117 · Updated last year
- ☆1,030 · Updated last year
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆209 · Updated 4 months ago
- ☆504 · Updated 4 months ago
- Comparison of Language Model Inference Engines ☆229 · Updated 8 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,062 · Updated last year