Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention]
☆56, updated Mar 5, 2025
Alternatives and similar repositories for marconi
Users interested in marconi are comparing it to the libraries listed below.
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup (☆36, updated Jan 9, 2023)
- (☆25, updated Apr 13, 2025)
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] (☆24, updated Nov 21, 2024)
- Expressive, Easy to Build, and High-Performance Application Networks (☆18, updated Jul 1, 2025)
- Estimate MFU for DeepSeekV3 (☆26, updated Jan 5, 2025)
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … (☆320, updated Jun 10, 2025)
- NEO is an LLM inference engine built to alleviate the GPU memory crisis via CPU offloading (☆90, updated Jun 16, 2025)
- AQUATOPE: QoS-and-Uncertainty-Aware Resource Management for Multi-Stage Serverless Workflows [ASPLOS '23] (☆24, updated Mar 13, 2024)
- High-performance inference engine for diffusion models (☆107, updated Sep 5, 2025)
- (☆35, updated Nov 28, 2024)
- (☆17, updated May 10, 2024)
- Simple PyTorch profiler that combines DeepSpeed Flops Profiler and TorchInfo (☆12, updated Feb 12, 2023)
- [ICLR 2025] Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives (☆52, updated Oct 19, 2025)
- Hydra adds resilience and high availability to remote memory solutions. (☆33, updated Feb 22, 2022)
- (☆18, updated Jan 27, 2025)
- Compression for Foundation Models (☆35, updated Jul 21, 2025)
- (☆89, updated Apr 2, 2022)
- Implementation of Read-Log-Update in Rust (☆11, updated Jan 8, 2020)
- [ICLR '25] "Understanding Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing" by Peihao Wang, Ruisi Cai, Yue… (☆17, updated Mar 21, 2025)
- A sparse attention kernel supporting mixed sparse patterns (☆480, updated Jan 18, 2026)
- An auxiliary project analyzing the characteristics of KV in DiT attention. (☆33, updated Nov 29, 2024)
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving (☆75, updated Sep 15, 2025)
- (☆12, updated Apr 26, 2025)
- Training hybrid models for dummies. (☆29, updated Nov 1, 2025)
- (☆11, updated May 19, 2025)
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention (☆52, updated Aug 6, 2025)
- SIMD-enabled column imprints (☆11, updated Feb 12, 2018)
- A low-latency & high-throughput serving engine for LLMs (☆484, updated Jan 8, 2026)
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] (☆67, updated Oct 2, 2025)
- (☆35, updated Jun 22, 2024)
- Harmonic-NAS: Hardware-Aware Multimodal Neural Architecture Search on Resource-constrained Devices (ACML 2023) (☆16, updated May 7, 2024)
- (☆13, updated Jan 7, 2025)
- APEX+ is an LLM Serving Simulator (☆44, updated Jun 16, 2025)
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLMs (☆76, updated Aug 12, 2025)
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS '25] (☆12, updated Nov 8, 2024)
- Enhanced sound event localization and detection in real 360-degree audio-visual soundscapes (DCASE task3 format) (☆13, updated Mar 21, 2025)
- Empowering LLM Agents for Real-World Computer System Optimization (☆17, updated Sep 10, 2025)
- 🕹 Implementation for the Compiling Engineering course (Spring 2020) at Peking University, adapted from the UCLA CS 132 Project. (☆10, updated Jun 21, 2020)
- Artifacts of the EuroSys '24 paper "Exploring Performance and Cost Optimization with ASIC-Based CXL Memory" (☆31, updated Feb 21, 2024)