alphadl / OOP-eval
The first Object-Oriented Programming (OOP) Evaluation Benchmark for LLMs
☆27 · Updated last year
Alternatives and similar repositories for OOP-eval
Users that are interested in OOP-eval are comparing it to the libraries listed below
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆64 · Updated last year
- ☆130 · Updated last year
- ☆14 · Updated last year
- [ICLR 2022] Official repository for "Knowledge Removal in Sampling-based Bayesian Inference" ☆18 · Updated 3 years ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆85 · Updated last year
- [ACL 2024] <Large Language Models for Automated Open-domain Scientific Hypotheses Discovery>. It has also received the best poster award … ☆42 · Updated last year
- ☆14 · Updated 2 years ago
- [ICML 2024] Code for the paper "Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases" ☆38 · Updated last year
- [ACL 2024] The project of Symbol-LLM ☆59 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆120 · Updated 8 months ago
- [ICML 2024] Adaptive decoding balances the diversity and coherence of open-ended text generation. ☆19 · Updated last year
- Training and Benchmarking LLMs for Code Preference. ☆37 · Updated last year
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated 2 years ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- ☆17 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆29 · Updated last year
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆66 · Updated 2 years ago
- Code release for "SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers" [NeurIPS D&B, 2024] ☆71 · Updated last year
- [NeurIPS'24 LanGame workshop] On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆41 · Updated 6 months ago
- PyTorch codes for the paper "An Empirical Study of Multimodal Model Merging" ☆37 · Updated 2 years ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 · Updated last year
- MathFusion: Enhancing Mathematical Problem-solving of LLM through Instruction Fusion (ACL 2025) ☆35 · Updated 6 months ago
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆49 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆114 · Updated 6 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Updated 10 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆50 · Updated last year
- MMSci: A Multimodal Multi-Discipline Dataset for PhD-Level Scientific Comprehension ☆51 · Updated last year
- ☆39 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year