lfy79001 / S3Eval
A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models
☆32 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for S3Eval
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆20 · Updated 8 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆45 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆36 · Updated 8 months ago
- The code and data for the paper JiuZhang3.0 ☆35 · Updated 5 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆70 · Updated 9 months ago
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" ☆30 · Updated 3 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆32 · Updated 9 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆25 · Updated 3 months ago
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆34 · Updated last month
- Code for the Findings of EMNLP 2023 paper "Coarse-to-Fine Dual Encoders are Better Frame Identification Learners" ☆12 · Updated last year
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆26 · Updated 4 months ago
- Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆22 · Updated last month
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆61 · Updated 3 weeks ago
- Evaluating Mathematical Reasoning Beyond Accuracy ☆37 · Updated 7 months ago
- Trending projects & awesome papers on data-centric LLM studies ☆32 · Updated this week
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆45 · Updated 7 months ago
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- Evaluate the Quality of Critique ☆35 · Updated 5 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆26 · Updated 3 months ago
- Benchmarking Benchmark Leakage in Large Language Models ☆44 · Updated 5 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆29 · Updated 2 months ago
- This is the implementation of LeCo ☆27 · Updated 3 months ago
- This is the repo for the paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆43 · Updated last week