Cohere-Labs-Community / m-rewardbench
Official Code for M-RᴇᴡᴀʀᴅBᴇɴᴄʜ: Evaluating Reward Models in Multilingual Settings (ACL 2025 Main)
☆30 · Updated last month
Alternatives and similar repositories for m-rewardbench
Users interested in m-rewardbench are comparing it to the libraries listed below.
- ☆22 · Updated last year
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆75 · Updated 10 months ago
- ☆11 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated 10 months ago
- ☆28 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆38 · Updated 2 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆70 · Updated last year
- Repository for "Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators" ☆12 · Updated 3 months ago
- ☆51 · Updated last year
- ☆38 · Updated last year
- Evaluation pipeline for the BabyLM Challenge 2023. ☆76 · Updated last year
- ☆62 · Updated 2 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated last year
- ☆35 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- ☆74 · Updated last year
- ☆95 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆55 · Updated last month
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆93 · Updated 3 years ago
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" in EMNLP 2023. ☆41 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated last year
- 👻 Code and benchmark for our EMNLP 2023 paper - "FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions" ☆55 · Updated last year
- ☆34 · Updated last year
- ☆72 · Updated 2 years ago
- Measuring the Mixing of Contextual Information in the Transformer ☆30 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆78 · Updated last year
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆37 · Updated 10 months ago
- ☆86 · Updated 2 years ago
- ☆48 · Updated last month