nlp-uoregon / mlmm-evaluation
Multilingual Large Language Models Evaluation Benchmark
☆118 · Updated 7 months ago
Alternatives and similar repositories for mlmm-evaluation:
Users interested in mlmm-evaluation are comparing it to the libraries listed below.
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆94 · Updated last year
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆213 · Updated 4 months ago
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) ☆100 · Updated 11 months ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆89 · Updated last year
- Code for the Multilingual Evaluation of Generative AI paper (EMNLP 2023) ☆68 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆126 · Updated last year
- [ACL 2024] LangBridge: Multilingual Reasoning Without Multilingual Supervision ☆87 · Updated 4 months ago
- A Multilingual Replicable Instruction-Following Model ☆93 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆160 · Updated 3 months ago
- ☆68 · Updated 3 months ago
- ☆128 · Updated 2 months ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't…" ☆109 · Updated 8 months ago
- Repository for the EMNLP 2022 paper "Towards a Unified Multi-Dimensional Evaluator for Text Generation" ☆198 · Updated last year
- ☆174 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆94 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆39 · Updated last month
- A framework for few-shot evaluation of autoregressive language models. ☆103 · Updated last year
- Tools for evaluating the performance of MT metrics on data from recent WMT metrics shared tasks. ☆101 · Updated last week
- GEMBA - GPT Estimation Metric Based Assessment ☆113 · Updated 7 months ago
- BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆91 · Updated last month
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆136 · Updated 4 months ago
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆83 · Updated 7 months ago
- Code for Zero-Shot Tokenizer Transfer ☆125 · Updated 2 months ago
- Codebase, data and models for the SummaC paper in TACL ☆89 · Updated last month
- Github repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023)☆58Updated last year
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions.☆179Updated 2 years ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers".☆71Updated last year
- NAACL 2024: SeaEval for Multilingual Foundation Models: From Cross-Lingual Alignment to Cultural Reasoning☆24Updated 3 weeks ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al.☆162Updated last year
- ACL2023 - AlignScore, a metric for factual consistency evaluation.☆124Updated last year