jordddan / GameEvalLinks
Using conversational games to evaluate powerful LLMs
☆18 · Updated 2 years ago
Alternatives and similar repositories for GameEval
Users interested in GameEval are comparing it to the libraries listed below.
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 6 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆36 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆89 · Updated last year
- Sotopia-RL: Reward Design for Social Intelligence ☆45 · Updated 3 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆108 · Updated 9 months ago
- Code for ICML 25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆48 · Updated 5 months ago
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- ☆53 · Updated 10 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP'24) ☆27 · Updated 2 months ago
- ☆23 · Updated last year
- ☆46 · Updated 6 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆64 · Updated last year
- SpyGame: An interactive multi-agent framework to evaluate intelligence with large language models :D ☆15 · Updated 2 years ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- Feeling confused about super alignment? Here is a reading list ☆43 · Updated last year
- ☆16 · Updated last year
- A paper list of multilingual pre-trained models (continually updated). ☆24 · Updated last year
- ☆31 · Updated last year
- ☆70 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 6 months ago
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆49 · Updated 2 years ago
- ☆60 · Updated last year
- Introducing Filtered Direct Preference Optimization (fDPO) that enhances language model alignment with human preferences by discarding lo… ☆16 · Updated last year
- [NeurIPS 2025 Spotlight] Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning ☆143 · Updated 2 months ago
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Updated 2 years ago
- ☆35 · Updated last year
- PreAct: Prediction Enhances Agent's Planning Ability (COLING 2025) ☆30 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆28 · Updated last year
- MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation ☆28 · Updated last year