Yifan-Song793 / GoodBadGreedy
The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism
☆30 · Updated last year
Alternatives and similar repositories for GoodBadGreedy
Users interested in GoodBadGreedy are comparing it to the libraries listed below.
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated 2 years ago
- The repository for the project "Fine-tuning Large Language Models with Sequential Instructions"; the code base comes from open-instruct and LA… ☆30 · Updated last year
- ☆58 · Updated last year
- ☆30 · Updated 11 months ago
- ☆14 · Updated 2 years ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆63 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" ☆22 · Updated 8 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 · Updated 2 years ago
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆33 · Updated last year
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 11 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 6 months ago
- PyTorch implementation of experiments in the paper Aligning Language Models with Human Preferences via a Bayesian Approach ☆32 · Updated 2 years ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 5 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated last month
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆52 · Updated last year
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆84 · Updated last year
- ☆51 · Updated last year
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- The rule-based evaluation subset and code implementation of Omni-MATH ☆25 · Updated 11 months ago
- ☆41 · Updated 2 years ago
- This is the implementation of LeCo ☆31 · Updated 10 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- ☆76 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆80 · Updated 2 years ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆28 · Updated last year