structuredllm / itergen
Iterate on LLM-based structured generation forward and backward
☆21 · Updated 6 months ago
Alternatives and similar repositories for itergen
Users interested in itergen are comparing it to the libraries listed below.
- Efficient and general syntactical decoding for Large Language Models ☆295 · Updated 2 weeks ago
- General-purpose program synthesiser ☆48 · Updated 11 months ago
- CodeMind is a generic framework for evaluating inductive code reasoning of LLMs. It is equipped with a static analysis component that ena… ☆39 · Updated 5 months ago
- LeanUniverse: A Library for Consistent and Scalable Lean4 Dataset Management ☆71 · Updated 8 months ago
- ☆71 · Updated last year
- Training and Benchmarking LLMs for Code Preference. ☆36 · Updated 10 months ago
- [FORGE 2025] Graph-based method for end-to-end code completion with context awareness on repository ☆66 · Updated last year
- Clover: Closed-Loop Verifiable Code Generation ☆35 · Updated 4 months ago
- ☆53 · Updated 7 months ago
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆79 · Updated last year
- 🤗 A specialized library for integrating context-free grammars (CFG) in EBNF with Hugging Face Transformers ☆124 · Updated 5 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆76 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆117 · Updated 11 months ago
- ☆111 · Updated last year
- [FSE 2024] Towards AI-Assisted Synthesis of Verified Dafny Methods ☆53 · Updated last year
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated last year
- SatLM: SATisfiability-Aided Language Models using Declarative Prompting (NeurIPS 2023) ☆50 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆153 · Updated 11 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆56 · Updated 3 weeks ago
- Thorn in a HaizeStack test for evaluating long-context adversarial robustness. ☆26 · Updated last year
- Training language models to make programs faster ☆92 · Updated last year
- TDD-Bench-Verified is a new benchmark for generating test cases for test-driven development (TDD) ☆24 · Updated 3 weeks ago
- ☆21 · Updated 3 years ago
- A certifier for bias in LLMs ☆23 · Updated 5 months ago
- This is the artifact for the paper “Are Machine Learning Cloud APIs Used Correctly? (#421)” in ICSE 2021 ☆16 · Updated 4 years ago
- [ICML 2021] Break-It-Fix-It: Unsupervised Learning for Program Repair ☆119 · Updated 2 years ago
- PLUR (Programming-Language Understanding and Repair) is a collection of source code datasets suitable for graph-based machine learning. W… ☆87 · Updated 3 years ago
- ☆28 · Updated this week
- For our ACL 2025 paper “Can Language Models Replace Programmers? RepoCod Says ‘Not Yet’”, by Shanchao Liang, Yiran Hu, Nan Jiang, and L… ☆22 · Updated last month
- COPRA: An in-COntext PRoof Agent which uses LLMs like GPTs to prove theorems in formal languages. ☆66 · Updated this week