nlp-uoregon / mlmm-evaluation
Multilingual Large Language Models Evaluation Benchmark
☆107 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for mlmm-evaluation
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) ☆96 · Updated 7 months ago
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆91 · Updated last year
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆190 · Updated this week
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆122 · Updated 8 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆115 · Updated last month
- A Multilingual Replicable Instruction-Following Model ☆94 · Updated last year
- ☆167 · Updated last year
- A Survey on Data Selection for Language Models ☆182 · Updated last month
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆160 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆124 · Updated 3 weeks ago
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆86 · Updated last year
- A package to generate summaries of long-form text and evaluate the coherence of these summaries. Official package for our ICLR 2024 paper… ☆107 · Updated last month
- [ACL 2023] AlignScore, a metric for factual consistency evaluation ☆111 · Updated 8 months ago
- [ACL 2024] LangBridge: Multilingual Reasoning Without Multilingual Supervision ☆81 · Updated 3 weeks ago
- Code for the Multilingual Eval of Generative AI paper published at EMNLP 2023 ☆65 · Updated 8 months ago
- Codebase, data and models for the SummaC paper in TACL ☆85 · Updated 11 months ago
- Repository for the EMNLP 2022 paper "Towards a Unified Multi-Dimensional Evaluator for Text Generation" ☆193 · Updated 9 months ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆177 · Updated 2 years ago
- Github repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023)☆54Updated 10 months ago
- A framework for few-shot evaluation of autoregressive language models. ☆101 · Updated last year
- ☆122 · Updated 2 months ago
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆78 · Updated 3 months ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆83 · Updated 4 months ago
- ☆47 · Updated 2 months ago
- ☆67 · Updated 9 months ago
- Code, datasets, and checkpoints for the paper "Improving Passage Retrieval with Zero-Shot Question Generation" (EMNLP 2022) ☆96 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆195 · Updated 2 weeks ago
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆292 · Updated 6 months ago
- Resources for cultural NLP research ☆67 · Updated this week
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation. ☆74 · Updated 10 months ago