alphadl / OOP-eval
The first Object-Oriented Programming (OOP) Evaluation Benchmark for LLMs
★27 · Updated last year
Alternatives and similar repositories for OOP-eval
Users interested in OOP-eval are comparing it to the repositories listed below.
- [ICLR 2022] Official repository for "Knowledge Removal in Sampling-based Bayesian Inference" · ★18 · Updated 3 years ago
- Enhanced GRPO with more verifiable rewards and real-time evaluators · ★37 · Updated 2 weeks ago
- FusionBench: A Comprehensive Benchmark/Toolkit of Deep Model Fusion · ★203 · Updated this week
- Evaluate the Quality of Critique · ★36 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (ACL 2024 SRW, Oral) · ★64 · Updated last year
- Interpretable Contrastive Monte Carlo Tree Search Reasoning · ★51 · Updated last year
- [NeurIPS'24 LanGame Workshop] On the Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability · ★42 · Updated 7 months ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" · ★109 · Updated 2 years ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning · ★29 · Updated last year
- ★130 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models · ★60 · Updated last year
- ★25 · Updated last year
- The official repo for "Towards Uncertainty-Aware Language Agent" · ★30 · Updated last year
- Code for "Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal" (ACL 2024) · ★16 · Updated last year
- [ICML 2024] Official repository for "EXO: Towards Efficient Exact Optimization of Language Model Alignment" · ★57 · Updated last year
- ★26 · Updated 11 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations · ★85 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning · ★120 · Updated 9 months ago
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling · ★53 · Updated 8 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI · ★107 · Updated 11 months ago
- ★103 · Updated 2 years ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" · ★69 · Updated 2 years ago
- ★14 · Updated last year
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" · ★60 · Updated 2 years ago
- Exploration of automated dataset selection approaches at large scale · ★52 · Updated 11 months ago
- [ACL 2024] The project of Symbol-LLM · ★59 · Updated last year
- Self-Supervised Alignment with Mutual Information · ★20 · Updated last year
- Codebase for Inference-Time Policy Adapters · ★25 · Updated 2 years ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) · ★47 · Updated last year
- Benchmarking Benchmark Leakage in Large Language Models · ★58 · Updated last year