allenai / WildBench
Benchmarking LLMs with Challenging Tasks from Real Users
☆215 Updated 3 months ago
Alternatives and similar repositories for WildBench:
Users interested in WildBench are comparing it to the repositories listed below.
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆135 Updated 3 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆172 Updated 3 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆231 Updated 3 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆204 Updated 9 months ago
- Reproducible, flexible LLM evaluations ☆160 Updated 2 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" ☆100 Updated 7 months ago
- A simple unified framework for evaluating LLMs ☆197 Updated 2 weeks ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆194 Updated last week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆451 Updated 11 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆153 Updated 2 months ago
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆178 Updated 7 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆128 Updated 3 months ago
- Self-Alignment with Principle-Following Reward Models ☆154 Updated 11 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆172 Updated 6 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆145 Updated 2 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆229 Updated last year
- RewardBench: the first evaluation tool for reward models. ☆505 Updated this week
- Evaluating LLMs with fewer examples ☆145 Updated 10 months ago
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆216 Updated this week
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆103 Updated last week
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆296 Updated last year
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆111 Updated 3 months ago
- Reformatted Alignment ☆114 Updated 4 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆157 Updated this week
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 Updated 5 months ago