SotaroKaneda / MLCarbon
End-to-end carbon footprint modeling tool
☆49 · Updated 4 months ago
Alternatives and similar repositories for MLCarbon
Users interested in MLCarbon are comparing it to the repositories listed below.
- How much energy do GenAI models consume? ☆47 · Updated 4 months ago
- ACT: An Architectural Carbon Modeling Tool for Designing Sustainable Computer Systems ☆43 · Updated 2 months ago
- ☆47 · Updated last year
- LLM Serving Performance Evaluation Harness ☆79 · Updated 7 months ago
- Measure and optimize the energy consumption of your AI applications! ☆296 · Updated this week
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation ☆30 · Updated 10 months ago
- Carbon Explorer helps evaluate solutions for making datacenters operate on renewable energy. ☆82 · Updated 11 months ago
- A curated list of awesome Green AI resources and tools to assess and reduce the environmental impacts of using and deploying AI. ☆87 · Updated last week
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆128 · Updated 10 months ago
- ☆32 · Updated 7 months ago
- ☆43 · Updated 5 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆59 · Updated 11 months ago
- Compression for Foundation Models ☆35 · Updated 2 months ago
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆47 · Updated 2 months ago
- Efficient LLM Inference Acceleration using Prompting ☆50 · Updated 11 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆90 · Updated 2 years ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆168 · Updated last year
- Code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆142 · Updated last year
- [OSDI '24] Serving LLM-based Applications Efficiently with Semantic Variable ☆184 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆129 · Updated last year
- ☆20 · Updated 2 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated 10 months ago
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs" ☆16 · Updated last year
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆26 · Updated last year
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆238 · Updated 9 months ago
- [CoLM '25] Official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆146 · Updated 3 months ago
- ☆19 · Updated 2 years ago
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆99 · Updated last week
- A resilient distributed training framework ☆95 · Updated last year
- [NeurIPS 2025] Simple extension on top of vLLM to help you speed up reasoning models without training ☆196 · Updated 4 months ago