Cohere-Labs-Community / m-rewardbench
Official Code for M-RᴇᴡᴀʀᴅBᴇɴᴄʜ: Evaluating Reward Models in Multilingual Settings (ACL 2025 Main)
☆28 · Updated this week
Alternatives and similar repositories for m-rewardbench
Users interested in m-rewardbench are comparing it to the libraries listed below.
- ☆22 · Updated last year
- Repository for "Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators" · ☆12 · Updated last month
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training · ☆21 · Updated 9 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax · ☆72 · Updated 9 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆44 · Updated last year
- Minimum Bayes Risk Decoding for Hugging Face Transformers · ☆58 · Updated 11 months ago
- Language models scale reliably with over-training and on downstream tasks · ☆97 · Updated last year
- ☆72 · Updated last year
- FeedbackQA: Improving Question Answering Post-Deployment with Interactive Feedback · ☆12 · Updated 2 years ago
- ☆28 · Updated last year
- 👻 Code and benchmark for our EMNLP 2023 paper "FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions" · ☆55 · Updated 11 months ago
- ☆38 · Updated last year
- A library for efficient patching and automatic circuit discovery · ☆64 · Updated 3 weeks ago
- ☆83 · Updated 3 months ago
- [arXiv preprint] Official Repository for "Evaluating Language Models as Synthetic Data Generators" · ☆33 · Updated 5 months ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" · ☆38 · Updated 2 years ago
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models · ☆51 · Updated 3 weeks ago
- Code for Zero-Shot Tokenizer Transfer · ☆127 · Updated 4 months ago