amazon-science / CodeSage
CodeSage: Code Representation Learning At Scale (ICLR 2024)
☆111 · Updated 9 months ago
Alternatives and similar repositories for CodeSage
Users interested in CodeSage are comparing it to the repositories listed below.
- ☆96 · Updated 10 months ago
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ☆64 · Updated 11 months ago
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆114 · Updated 3 weeks ago
- RepoQA: Evaluating Long-Context Code Understanding ☆113 · Updated 9 months ago
- CodeMind is a generic framework for evaluating inductive code reasoning of LLMs. It is equipped with a static analysis component that ena… ☆39 · Updated 3 months ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆73 · Updated 11 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated 10 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆197 · Updated 4 months ago
- ☆99 · Updated last month
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 7 months ago
- ☆66 · Updated last year
- r2e: turn any GitHub repository into a programming agent environment ☆129 · Updated 3 months ago
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆114 · Updated 10 months ago
- ☆118 · Updated 11 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- Fine-tune SantaCoder for Code/Text Generation. ☆192 · Updated 2 years ago
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆131 · Updated last year
- ☆35 · Updated last month
- ☆108 · Updated 2 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆91 · Updated 2 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆45 · Updated 11 months ago
- Beating the GAIA benchmark with Transformers Agents. 🚀 ☆131 · Updated 5 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆175 · Updated 4 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆59 · Updated last year
- Train your own SOTA deductive reasoning model ☆103 · Updated 4 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆169 · Updated 11 months ago
- ☆41 · Updated 6 months ago