Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention]
☆56 · Mar 5, 2025 · Updated last year
Alternatives and similar repositories for marconi
Users interested in marconi are comparing it to the libraries listed below.
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆36 · Jan 9, 2023 · Updated 3 years ago
- ☆25 · Apr 13, 2025 · Updated last year
- Democratizing AlphaFold3: a PyTorch reimplementation to accelerate protein structure prediction ☆21 · May 24, 2025 · Updated 10 months ago
- ☆34 · Dec 19, 2025 · Updated 3 months ago
- Estimate MFU for DeepSeekV3 ☆26 · Jan 5, 2025 · Updated last year
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆320 · Jun 10, 2025 · Updated 10 months ago
- NEO is an LLM inference engine built to alleviate the GPU memory crisis by CPU offloading ☆91 · Jun 16, 2025 · Updated 9 months ago
- AQUATOPE: QoS-and-Uncertainty-Aware Resource Management for Multi-Stage Serverless Workflows (ASPLOS '23) ☆24 · Mar 13, 2024 · Updated 2 years ago
- High-performance inference engine for diffusion models ☆107 · Sep 5, 2025 · Updated 7 months ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆46 · Nov 24, 2022 · Updated 3 years ago
- ☆17 · May 10, 2024 · Updated last year
- Simple PyTorch profiler that combines DeepSpeed Flops Profiler and TorchInfo ☆11 · Feb 12, 2023 · Updated 3 years ago
- Hydra adds resilience and high availability to remote memory solutions ☆33 · Feb 22, 2022 · Updated 4 years ago
- [ICLR 2025] Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives ☆52 · Oct 19, 2025 · Updated 5 months ago
- Compression for Foundation Models ☆35 · Jul 21, 2025 · Updated 8 months ago
- ☆89 · Apr 2, 2022 · Updated 4 years ago
- Implementation of Read-Log-Update in Rust ☆11 · Jan 8, 2020 · Updated 6 years ago
- [ICLR '25] "Understanding Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing" by Peihao Wang, Ruisi Cai, Yue… ☆17 · Mar 21, 2025 · Updated last year
- A sparse attention kernel supporting mixed sparse patterns ☆495 · Jan 18, 2026 · Updated 2 months ago
- An auxiliary project analyzing the characteristics of KV in DiT Attention ☆34 · Nov 29, 2024 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆952 · Mar 29, 2026 · Updated 2 weeks ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆77 · Sep 15, 2025 · Updated 6 months ago
- ☆12 · Apr 26, 2025 · Updated 11 months ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Aug 6, 2025 · Updated 8 months ago
- ☆11 · May 19, 2025 · Updated 10 months ago
- A low-latency and high-throughput serving engine for LLMs ☆490 · Jan 8, 2026 · Updated 3 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆67 · Oct 2, 2025 · Updated 6 months ago
- ☆34 · Jun 22, 2024 · Updated last year
- Harmonic-NAS: Hardware-Aware Multimodal Neural Architecture Search on Resource-Constrained Devices (ACML 2023) ☆16 · May 7, 2024 · Updated last year
- A Program-Behavior-Guided Far Memory System ☆36 · Oct 26, 2023 · Updated 2 years ago
- ☆13 · Jan 7, 2025 · Updated last year
- APEX+ is an LLM Serving Simulator ☆44 · Jun 16, 2025 · Updated 9 months ago
- Skeleton code for the new 6.858 final project: an encrypted and authenticated file system ☆24 · Apr 20, 2022 · Updated 3 years ago
- Fast matrix multiplication implementation in the C programming language. This matrix multiplication algorithm is similar to what Numpy uses t… ☆41 · Jun 6, 2021 · Updated 4 years ago
- Summary of the specs of commonly used GPUs for training and inference of LLMs ☆77 · Aug 12, 2025 · Updated 8 months ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS '25] ☆12 · Nov 8, 2024 · Updated last year
- Enhanced sound event localization and detection in real 360-degree audio-visual soundscapes (DCASE task 3 format) ☆14 · Mar 21, 2025 · Updated last year
- Empowering LLM Agents for Real-World Computer System Optimization ☆17 · Sep 10, 2025 · Updated 7 months ago
- 🕹 Implementation for the course Compiling Engineering (Spring 2020) at Peking University, adapted from the UCLA CS 132 project ☆10 · Jun 21, 2020 · Updated 5 years ago