☆47 · Jun 27, 2024 · Updated last year
Alternatives and similar repositories for melange-release
Users interested in melange-release are comparing it to the libraries listed below.
- ☆12 · Oct 16, 2022 · Updated 3 years ago
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" · ☆37 · Aug 29, 2025 · Updated 7 months ago
- Stateful LLM Serving · ☆97 · Mar 11, 2025 · Updated last year
- LLM Serving Performance Evaluation Harness · ☆83 · Feb 25, 2025 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances · ☆134 · Feb 22, 2024 · Updated 2 years ago
- ☆67 · Nov 4, 2024 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs) · ☆792 · Apr 6, 2025 · Updated 11 months ago
- A throughput-oriented high-performance serving framework for LLMs · ☆950 · Oct 29, 2025 · Updated 5 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters · ☆35 · May 6, 2024 · Updated last year
- An Open-Source SCAlable Interface for ISA Extensions for RISC-V Processors. New Version: · ☆17 · Feb 29, 2024 · Updated 2 years ago
- ☆17 · May 10, 2024 · Updated last year
- A low-latency & high-throughput serving engine for LLMs · ☆486 · Jan 8, 2026 · Updated 2 months ago
- ☆19 · Jan 10, 2023 · Updated 3 years ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" · ☆81 · Oct 15, 2025 · Updated 5 months ago
- Efficient and easy multi-instance LLM serving · ☆536 · Mar 12, 2026 · Updated 2 weeks ago
- ☆154 · Oct 9, 2024 · Updated last year
- Using Fourier interpolation to merge large language models · ☆11 · Jan 6, 2026 · Updated 2 months ago
- Visualization synthesis · ☆15 · May 12, 2021 · Updated 4 years ago
- ☆12 · Jun 29, 2024 · Updated last year
- A large-scale simulation framework for LLM inference · ☆564 · Jul 25, 2025 · Updated 8 months ago
- A RISC-V ISA-based 32-bit processor written in HLS · ☆16 · Nov 7, 2019 · Updated 6 years ago
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … · ☆49 · Jun 1, 2024 · Updated last year
- A benchmark suite for evaluating FaaS schedulers · ☆23 · Nov 5, 2022 · Updated 3 years ago
- NeuroSpector: Dataflow and Mapping Optimizer for Deep Neural Network Accelerators · ☆21 · Mar 20, 2025 · Updated last year
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" · ☆92 · May 23, 2023 · Updated 2 years ago
- ☆131 · Nov 11, 2024 · Updated last year
- Spatialyze: A Geospatial Video Analytic System with Spatial-Aware Optimizations · ☆11 · Mar 3, 2025 · Updated last year
- PyTorch library for cost-effective, fast, and easy serving of MoE models · ☆289 · Updated this week
- CausIL is an approach to estimating the causal graph for a cloud microservice system, where the nodes are the service-specific metrics whil… · ☆13 · Jul 3, 2023 · Updated 2 years ago
- ☆20 · Jun 9, 2025 · Updated 9 months ago
- A parallel VAE that avoids OOM for high-resolution image generation · ☆89 · Mar 12, 2026 · Updated 2 weeks ago
- ☆11 · Aug 7, 2023 · Updated 2 years ago
- Artifacts for our SIGCOMM'23 paper Ditto · ☆15 · Oct 17, 2023 · Updated 2 years ago
- An agent for CUDA compute-communication kernel co-design · ☆33 · Updated this week
- High-Speed Stateful Packet Processor for Programmable Switches · ☆14 · Dec 18, 2022 · Updated 3 years ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆145 · Dec 4, 2024 · Updated last year
- LLM serving cluster simulator · ☆140 · Apr 25, 2024 · Updated last year
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation · ☆32 · Nov 16, 2024 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆466 · May 30, 2025 · Updated 9 months ago