OttoKaaij / Ticket-To-Sustainability
☆11 · Updated 2 years ago
Alternatives and similar repositories for Ticket-To-Sustainability
Users interested in Ticket-To-Sustainability are comparing it to the repositories listed below.
- End-to-end carbon footprint modeling tool ☆45 · Updated 2 months ago
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Updated 4 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆52 · Updated last week
- Run SWE-bench evaluations remotely ☆37 · Updated last week
- r2e: turn any GitHub repository into a programming agent environment ☆129 · Updated 3 months ago
- ☆22 · Updated last month
- AI Energy Score: Initiative to establish comparable energy efficiency ratings for AI models. ☆30 · Updated 4 months ago
- ☆67 · Updated last year
- A data model and a viewer for carbon footprint scenarios. ☆24 · Updated this week
- Open-source repository for the OOPSLA'24 paper "CYCLE: Learning to Self-Refine Code Generation" ☆10 · Updated last year
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆45 · Updated 6 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆76 · Updated last year
- For our ACL 2025 paper "Can Language Models Replace Programmers? RepoCod Says ‘Not Yet’", by Shanchao Liang, Yiran Hu, Nan Jiang, and L… ☆22 · Updated this week
- Training and Benchmarking LLMs for Code Preference. ☆34 · Updated 8 months ago
- ☆35 · Updated last month
- Experiments to assess SPADE on different LLM pipelines. ☆17 · Updated last year
- Repo2Run is an LLM-based agent that automates environment configuration by generating error-free Dockerfiles for Python repositories. ☆41 · Updated 5 months ago
- How much energy do GenAI models consume? ☆46 · Updated 2 months ago
- Small, simple agent task environments for training and evaluation ☆18 · Updated 9 months ago
- TDD-Bench-Verified is a new benchmark for generating test cases for test-driven development (TDD) ☆21 · Updated last week
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆76 · Updated last year
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ☆64 · Updated 11 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆47 · Updated last year
- A browser extension (currently Chrome only) that estimates the environmental impact of your AI interactions. ☆13 · Updated last month
- RepoQA: Evaluating Long-Context Code Understanding ☆113 · Updated 9 months ago
- Tracking instruction-tuned LLM openness. Paper: Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Track… ☆119 · Updated 5 months ago
- ☆18 · Updated this week
- ☆100 · Updated 2 months ago
- ☆28 · Updated 3 weeks ago
- A curated list of awesome Green AI resources and tools to assess and reduce the environmental impacts of using and deploying AI. ☆82 · Updated 4 months ago