IPRC-DIP / ANPL
☆22 · Updated 2 years ago
Alternatives and similar repositories for ANPL
Users interested in ANPL are comparing it to the repositories listed below.
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Updated 9 months ago
- Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents ☆132 · Updated last year
- Code for the paper: CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models ☆31 · Updated 10 months ago
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆74 · Updated last year
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆43 · Updated 2 years ago
- ☆42 · Updated last year
- Scripts for downloading and pre-processing the `proof-pile`, a high-quality dataset of mathematical text and code. ☆22 · Updated 3 years ago
- The official repository for all the code of TheoremLlama ☆47 · Updated 6 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- ☆33 · Updated this week
- ☆28 · Updated 2 months ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆140 · Updated 9 months ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Multi-Granularity LLM Debugger [ICSE 2026] ☆95 · Updated 7 months ago
- LILO: Library Induction with Language Observations ☆90 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- ☆44 · Updated last year
- ☆55 · Updated last year
- ☆27 · Updated last year
- Training and benchmarking LLMs for code preference. ☆37 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆57 · Updated last year
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆96 · Updated 8 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · Updated 4 months ago
- ☆41 · Updated last year
- ☆76 · Updated last month
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- ☆80 · Updated 10 months ago
- ☆41 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆165 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆63 · Updated last year