autoiac-project / iac-eval
[NeurIPS 24] IaC-Eval: A Code Generation Benchmark for Cloud Infrastructure-as-Code programs
☆29 · Updated 9 months ago
Alternatives and similar repositories for iac-eval
Users interested in iac-eval are comparing it to the repositories listed below.
- Zodiac: Unearthing Semantic Checks for Cloud Infrastructure-as-Code Programs (SOSP 2024) ☆13 · Updated 9 months ago
- A holistic framework to enable the design, development, and evaluation of autonomous AIOps agents. ☆12 · Updated 3 months ago
- ☁️ Benchmarking LLMs for Cloud Config Generation (a large-model benchmark for cloud scenarios) ☆34 · Updated 10 months ago
- Simulator for the datacenter, including power, cooling, servers, and other components ☆16 · Updated 7 months ago
- Easy, Fast, and Scalable Multimodal AI ☆18 · Updated this week
- Serverless LLM Serving for Everyone. ☆537 · Updated last week
- A Framework for Automated Validation of Deep Learning Training Tasks ☆49 · Updated this week
- Predict the performance of LLM inference services ☆19 · Updated 4 months ago
- Federated Transformer (NeurIPS 24): a framework to enhance the performance of multi-party Vertical Federated Learning involving fuzzy ide… ☆38 · Updated 9 months ago
- Burstable Cloud Scheduler ☆14 · Updated last year
- How much energy do GenAI models consume? ☆47 · Updated 4 months ago
- r2e: turn any GitHub repository into a programming agent environment ☆129 · Updated 4 months ago
- Codebase for Autothrottle (NSDI 2024) ☆48 · Updated last year
- The official repository of the ICCV 2025 paper "CATP-LLM: Empowering Large Language Models for Cost-Aware Tool Planning". ☆12 · Updated last month
- This is the repo for remote direct memory introspection. ☆22 · Updated 2 years ago
- [ASPLOS'25] Towards End-to-End Optimization of LLM-based Applications with Ayo ☆37 · Updated last month
- End-to-end carbon footprint modeling tool ☆47 · Updated 3 months ago
- ☆181 · Updated last month
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆181 · Updated 11 months ago
- Cloud incidents/failures related work. ☆19 · Updated 8 months ago
- ☆100 · Updated last year
- ☆270 · Updated last month
- ☆47 · Updated last year
- A series of work towards achieving ACV. ☆20 · Updated last month
- Course information for CS598: Topics in LLM Agents (Spring '25), taught by Prof. Jiaxuan You (jiaxuan@illinois.edu). ☆33 · Updated 4 months ago
- µBench is a tool for benchmarking cloud/edge computing platforms that run microservice applications. The tool creates dummy microservice … ☆70 · Updated 3 months ago
- LLM Serving Performance Evaluation Harness ☆79 · Updated 6 months ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated 9 months ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM Agents, https://www.arxiv.org/pdf/2503.019… ☆157 · Updated this week
- MetaOpt: Towards efficient heuristic design with quantifiable and confident performance ☆19 · Updated last week