☆47 · Jun 27, 2024 · Updated last year
Alternatives and similar repositories for melange-release
Users that are interested in melange-release are comparing it to the libraries listed below.
- ☆13 · Feb 22, 2023 · Updated 3 years ago
- ☆12 · Oct 16, 2022 · Updated 3 years ago
- Stateful LLM Serving ☆99 · Mar 11, 2025 · Updated last year
- LLM Serving Performance Evaluation Harness ☆85 · Feb 25, 2025 · Updated last year
- A simple SQL parser based on Apache Calcite. ☆14 · Updated this week
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Feb 22, 2024 · Updated 2 years ago
- ☆68 · Nov 4, 2024 · Updated last year
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Nov 21, 2024 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). ☆807 · Apr 6, 2025 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆956 · Mar 29, 2026 · Updated last month
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆37 · May 6, 2024 · Updated 2 years ago
- ☆17 · May 10, 2024 · Updated last year
- A low-latency & high-throughput serving engine for LLMs ☆496 · Jan 8, 2026 · Updated 4 months ago
- ☆19 · Jan 10, 2023 · Updated 3 years ago
- Framework-Agnostic RL Environments for LLM Fine-Tuning ☆44 · May 1, 2026 · Updated last week
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆85 · Oct 15, 2025 · Updated 6 months ago
- Efficient and easy multi-instance LLM serving ☆547 · Mar 12, 2026 · Updated last month
- ☆158 · Oct 9, 2024 · Updated last year
- ☆13 · Jun 29, 2024 · Updated last year
- Accurate, large-scale, and extensible simulator for LLM inference systems ☆595 · Jul 25, 2025 · Updated 9 months ago
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … ☆50 · Jun 1, 2024 · Updated last year
- A benchmark suite for evaluating FaaS schedulers. ☆23 · Nov 5, 2022 · Updated 3 years ago
- NeuroSpector: Dataflow and Mapping Optimizer for Deep Neural Network Accelerators ☆21 · Mar 20, 2025 · Updated last year
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆92 · May 23, 2023 · Updated 2 years ago
- ☆133 · Nov 11, 2024 · Updated last year
- Spatialyze: A Geospatial Video Analytic System with Spatial-Aware Optimizations ☆11 · Mar 3, 2025 · Updated last year
- PyTorch library for cost-effective, fast, and easy serving of MoE models. ☆303 · Updated this week
- CausIL is an approach to estimate the causal graph for a cloud microservice system, where the nodes are the service-specific metrics whil… ☆13 · Jul 3, 2023 · Updated 2 years ago
- ☆21 · Jun 9, 2025 · Updated 11 months ago
- A parallel VAE that avoids OOM for high-resolution image generation ☆91 · Apr 21, 2026 · Updated 2 weeks ago
- Artifacts for our SIGCOMM '23 paper Ditto ☆15 · Oct 17, 2023 · Updated 2 years ago
- A repository for personal notes and annotated papers from daily research. ☆193 · Apr 13, 2026 · Updated 3 weeks ago
- ☆22 · Dec 11, 2024 · Updated last year
- High-Speed Stateful Packet Processor for Programmable Switches ☆13 · Dec 18, 2022 · Updated 3 years ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆146 · Dec 4, 2024 · Updated last year
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆32 · Nov 16, 2024 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆482 · May 30, 2025 · Updated 11 months ago
- LLM serving cluster simulator ☆150 · Apr 25, 2024 · Updated 2 years ago
- [OSDI '24] Serving LLM-based Applications Efficiently with Semantic Variable ☆214 · Sep 21, 2024 · Updated last year