Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention]
☆56 · Updated Mar 5, 2025 (last year)
Alternatives and similar repositories for marconi
Users interested in marconi are comparing it to the repositories listed below.
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆36 · Updated Jan 9, 2023 (3 years ago)
- ☆25 · Updated Apr 13, 2025 (last year)
- Democratizing AlphaFold3: a PyTorch reimplementation to accelerate protein structure prediction ☆21 · Updated May 24, 2025 (11 months ago)
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Updated Nov 21, 2024 (last year)
- Expressive, Easy to Build, and High-Performance Application Networks ☆19 · Updated Jul 1, 2025 (10 months ago)
- Estimate MFU for DeepSeekV3 ☆26 · Updated Jan 5, 2025 (last year)
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆328 · Updated Jun 10, 2025 (10 months ago)
- NEO is an LLM inference engine built to alleviate the GPU memory crisis via CPU offloading ☆94 · Updated Jun 16, 2025 (10 months ago)
- ☆40 · Updated Dec 19, 2025 (4 months ago)
- AQUATOPE: QoS-and-Uncertainty-Aware Resource Management for Multi-Stage Serverless Workflows (ASPLOS '23) ☆24 · Updated Mar 13, 2024 (2 years ago)
- High-performance inference engine for diffusion models ☆107 · Updated Sep 5, 2025 (8 months ago)
- ☆39 · Updated Nov 28, 2024 (last year)
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆46 · Updated Nov 24, 2022 (3 years ago)
- Simple PyTorch profiler that combines DeepSpeed Flops Profiler and TorchInfo ☆11 · Updated Feb 12, 2023 (3 years ago)
- Hydra adds resilience and high availability to remote memory solutions. ☆33 · Updated Feb 22, 2022 (4 years ago)
- [ICLR 2025] Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives ☆54 · Updated Oct 19, 2025 (6 months ago)
- ☆18 · Updated Jan 27, 2025 (last year)
- Compression for Foundation Models ☆35 · Updated Jul 21, 2025 (9 months ago)
- ☆91 · Updated Apr 2, 2022 (4 years ago)
- [ICLR '25] "Understanding Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing" by Peihao Wang, Ruisi Cai, Yue… ☆17 · Updated Mar 21, 2025 (last year)
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆34 · Updated Nov 29, 2024 (last year)
- A sparse attention kernel supporting mixed sparse patterns ☆503 · Updated Jan 18, 2026 (3 months ago)
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆83 · Updated Sep 15, 2025 (7 months ago)
- A throughput-oriented high-performance serving framework for LLMs ☆956 · Updated Mar 29, 2026 (last month)
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆53 · Updated Aug 6, 2025 (8 months ago)
- Training hybrid models for dummies. ☆29 · Updated Nov 1, 2025 (6 months ago)
- ☆11 · Updated May 19, 2025 (11 months ago)
- ☆21 · Updated Jan 23, 2026 (3 months ago)
- A low-latency & high-throughput serving engine for LLMs ☆496 · Updated Jan 8, 2026 (3 months ago)
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆67 · Updated Oct 2, 2025 (7 months ago)
- An optimizer custom node for ComfyUI that ensures each queue execution starts in an optimal state by clearing unused VRAM and unnecessar… ☆19 · Updated Jul 18, 2025 (9 months ago)
- Reticle evaluation (PLDI 2021) ☆12 · Updated Apr 12, 2021 (5 years ago)
- ☆13 · Updated Jan 7, 2025 (last year)
- ☆24 · Updated Oct 9, 2025 (6 months ago)
- APEX+ is an LLM Serving Simulator ☆45 · Updated Jun 16, 2025 (10 months ago)
- Skeleton code for new 6.858 final project: an encrypted and authenticated file system ☆24 · Updated Apr 20, 2022 (4 years ago)
- Fast Matrix Multiplication Implementation in the C programming language. This matrix multiplication algorithm is similar to what Numpy uses t… ☆41 · Updated Jun 6, 2021 (4 years ago)
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLMs ☆77 · Updated Aug 12, 2025 (8 months ago)
- Empowering LLM Agents for Real-World Computer System Optimization ☆18 · Updated Sep 10, 2025 (7 months ago)