structuredllm / syncode
Efficient and general syntactical decoding for Large Language Models
☆309 · Updated 3 weeks ago
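SynCode's tagline refers to grammar-constrained (syntax-aware) decoding: at each generation step, tokens that the target grammar cannot accept next are masked out of the model's logits before sampling. The sketch below illustrates that general idea with a plain Hugging Face `LogitsProcessor` and a deliberately trivial "grammar" (digits and whitespace only). It is not SynCode's actual API or implementation; the `AllowedTokenProcessor` class and the allowed-token set are hypothetical stand-ins for the CFG-driven mask that libraries like SynCode compute incrementally from a parser state.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class AllowedTokenProcessor(LogitsProcessor):
    """Toy stand-in for a grammar mask: token ids in `allowed_ids`
    (the ones a parser would accept next) keep their original scores;
    every other token is set to -inf so it can never be generated."""
    def __init__(self, allowed_ids):
        self.allowed_ids = torch.tensor(sorted(allowed_ids))

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed_ids] = 0.0
        return scores + mask

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Trivial "grammar": the continuation may only contain digits and whitespace.
allowed = [i for i in range(len(tok))
           if tok.decode([i]).strip().isdigit() or tok.decode([i]).isspace()]

out = model.generate(
    **tok("2 + 2 =", return_tensors="pt"),
    max_new_tokens=5,
    do_sample=False,
    logits_processor=LogitsProcessorList([AllowedTokenProcessor(allowed)]),
)
print(tok.decode(out[0]))
```

A real CFG-constrained decoder recomputes the allowed-token set at every step from the partially generated text, which is where the efficiency work in projects like this one comes in.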
Alternatives and similar repositories for syncode
Users interested in syncode are comparing it to the libraries listed below.
- 🤗 A specialized library for integrating context-free grammars (CFG) in EBNF with the Hugging Face Transformers · ☆130 · Updated 8 months ago
- A curated list of papers related to constrained decoding of LLM, along with their relevant code and resources. · ☆310 · Updated 2 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 · ☆182 · Updated last year
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment · ☆138 · Updated 8 months ago
- EvoEval: Evolving Coding Benchmarks via LLM · ☆80 · Updated last year
- ☆78 · Updated last year
- Code and Data artifact for NeurIPS 2023 paper - "Monitor-Guided Decoding of Code LMs with Static Analysis of Repository Context". `multis… · ☆277 · Updated last year
- A multi-programming language benchmark for LLMs · ☆289 · Updated last month
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation · ☆323 · Updated 10 months ago
- LDB: A Large Language Model Debugger via Verifying Runtime Execution Step by Step (ACL'24) · ☆573 · Updated last year
- Iterate on LLM-based structured generation forward and backward · ☆22 · Updated 9 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. · ☆231 · Updated 2 weeks ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation · ☆163 · Updated last year
- [FORGE 2025] Graph-based method for end-to-end code completion with context awareness on repository · ☆71 · Updated last year
- CodeBERTScore: an automatic metric for code generation, based on BERTScore · ☆206 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ☆166 · Updated 4 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph · ☆240 · Updated 8 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions · ☆48 · Updated 3 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test-generation · ☆64 · Updated last week
- ☆127 · Updated 2 years ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". · ☆261 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding · ☆127 · Updated last year
- ☆112 · Updated last year
- ☆128 · Updated 6 months ago
- [NeurIPS '25] Challenging Software Optimization Tasks for Evaluating SWE-Agents · ☆60 · Updated last week
- CodeSage: Code Representation Learning At Scale (ICLR 2024) · ☆114 · Updated last year
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving · ☆299 · Updated last week
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI · ☆463 · Updated 2 months ago
- Reproduction Package for the paper "Type-Constrained Code Generation with Language Models" [PLDI 2025] · ☆80 · Updated 6 months ago
- Benchmark ClassEval for class-level code generation. · ☆146 · Updated last year