sun-wendy / DafnyBench
DafnyBench: A Benchmark for Formal Software Verification
☆52 · Updated last year
Alternatives and similar repositories for DafnyBench
Users interested in DafnyBench are comparing it to the repositories listed below.
- [FSE-2024] Towards AI-Assisted Synthesis of Verified Dafny Methods ☆54 · Updated last year
- ☆39 · Updated 5 months ago
- [COLM 2024] A Survey on Deep Learning for Theorem Proving ☆212 · Updated 7 months ago
- AlphaVerus: Formally Verified Code Generation through Self-Improving Translation and Treefinement ☆23 · Updated 7 months ago
- Code for the paper LEGO-Prover: Neural Theorem Proving with Growing Libraries ☆67 · Updated last year
- ☆15 · Updated last year
- The official implementation of "Self-play LLM Theorem Provers with Iterative Conjecturing and Proving" ☆116 · Updated 9 months ago
- Clover: Closed-Loop Verifiable Code Generation ☆39 · Updated 7 months ago
- An inequality benchmark for theorem proving ☆21 · Updated 7 months ago
- ☆140 · Updated 4 months ago
- ☆67 · Updated 2 months ago
- SatLM: SATisfiability-Aided Language Models using Declarative Prompting (NeurIPS 2023) ☆51 · Updated last year
- https://albertqjiang.github.io/Portal-to-ISAbelle/ ☆56 · Updated 2 years ago
- ☆71 · Updated 2 years ago
- An updated version of miniF2F with lots of fixes and informal statements / solutions ☆97 · Updated last year
- [ICLR'25 Spotlight] Rethinking and improving autoformalization: towards a faithful metric and a Dependency Retrieval-based approach ☆24 · Updated 7 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆163 · Updated last year
- The official code release for Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization ☆34 · Updated 10 months ago
- Collection of resources for research concerning Machine Learning and Formal Methods ☆93 · Updated 4 years ago
- NeqLIPS: a powerful Olympiad-level inequality prover ☆39 · Updated 4 months ago
- ☆74 · Updated this week
- A minimal language for Isabelle/HOL, designed for easing machine learning ☆24 · Updated this week
- An evaluation benchmark for undergraduate competition math in Lean4, Isabelle, Coq, and natural language ☆193 · Updated this week
- Neural theorem proving evaluation via the Lean REPL ☆23 · Updated 5 months ago
- ☆224 · Updated 9 months ago
- COPRA: An in-COntext PRoof Agent which uses LLMs like GPTs to prove theorems in formal languages ☆69 · Updated last month
- The official repository for the paper Multilingual Mathematical Autoformalization ☆38 · Updated last year
- Retrieval-Augmented Theorem Provers for Lean ☆315 · Updated 11 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆85 · Updated last year
- ☆64 · Updated this week