ml-energy / leaderboard
How much energy do GenAI models consume?
☆45 · Updated 2 months ago
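As a quick illustration of what this kind of measurement involves (this is not the leaderboard's own code): on NVIDIA GPUs, a cumulative energy counter can be read through NVML before and after a model call. A minimal sketch, assuming the nvidia-ml-py package and a Volta-or-newer GPU; the measure_energy_mj helper name is illustrative:

```python
import pynvml  # NVML bindings from the nvidia-ml-py package

def measure_energy_mj(fn, gpu_index: int = 0) -> float:
    """Run fn() and return the GPU energy it consumed, in millijoules."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    # Cumulative energy (mJ) since driver load; supported on Volta or newer.
    start = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
    fn()  # e.g. a model.generate(...) call
    end = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
    pynvml.nvmlShutdown()
    return float(end - start)
```

In practice, dedicated energy-measurement tools like the first entry below wrap this kind of counter reading with GPU synchronization and per-window accounting.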
Alternatives and similar repositories for leaderboard
Users interested in leaderboard are comparing it to the libraries listed below.
- Measure and optimize the energy consumption of your AI applications! ☆274 · Updated this week
- A resilient distributed training framework ☆95 · Updated last year
- End-to-end carbon footprint modeling tool ☆43 · Updated last month
- ☆25 · Updated last year
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆89 · Updated 2 years ago
- LLM Serving Performance Evaluation Harness ☆79 · Updated 4 months ago
- Dynamic resource changes for multi-dimensional parallelism training ☆26 · Updated 8 months ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- Official repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆34 · Updated last week
- ☆32 · Updated last year
- ☆112 · Updated 9 months ago
- ACT: An Architectural Carbon Modeling Tool for Designing Sustainable Computer Systems ☆40 · Updated 2 months ago
- ☆47 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆40 · Updated 2 months ago
- ☆64 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆123 · Updated last year
- Modular and structured prompt caching for low-latency LLM inference ☆97 · Updated 8 months ago
- A minimal implementation of vLLM. ☆49 · Updated 11 months ago
- Code for the MLSys 2024 paper "SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models" ☆18 · Updated last year
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated 7 months ago
- ☆9 · Updated 11 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆50 · Updated 8 months ago
- ☆38 · Updated 10 months ago
- LLM checkpointing for DeepSpeed/Megatron ☆19 · Updated this week
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆121 · Updated 7 months ago
- Carbon Explorer helps evaluate solutions for making datacenters operate on renewable energy. ☆81 · Updated 8 months ago
- Compression for Foundation Models ☆33 · Updated 3 months ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆66 · Updated 7 months ago
- ☆94 · Updated 3 years ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆165 · Updated last year