qcri / LLMeBench
Benchmarking Large Language Models
☆104 · Updated 6 months ago
Alternatives and similar repositories for LLMeBench
Users interested in LLMeBench are comparing it to the repositories listed below.
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆136 · Updated last year
- Resources for cultural NLP research ☆113 · Updated 3 months ago
- Code for the Multilingual Evaluation of Generative AI paper published at EMNLP 2023 ☆71 · Updated last year
- A Multilingual Replicable Instruction-Following Model ☆95 · Updated 2 years ago
- ☆43 · Updated 11 months ago
- What's In My Big Data (WIMBD): a toolkit for analyzing large text datasets ☆225 · Updated last year
- Dataset from the paper "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering" (COLING 2022) ☆117 · Updated 3 years ago
- A multi-purpose toolkit for table-to-text generation: web interface, Python bindings, CLI commands ☆57 · Updated last year
- Finetune mistral-7b-instruct for sentence embeddings ☆88 · Updated last year
- ☆80 · Updated last year
- A curated list of research papers and resources on cultural LLMs ☆52 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆215 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods ☆155 · Updated last year
- Token-level Reference-free Hallucination Detection ☆97 · Updated 2 years ago
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) ☆107 · Updated last year
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆97 · Updated 2 years ago
- AuditNLG: Auditing Generative AI Language Modeling for Trustworthiness ☆101 · Updated 11 months ago
- Repository for research in the field of Responsible NLP at Meta ☆204 · Updated 7 months ago
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆87 · Updated last year
- A framework for few-shot evaluation of autoregressive language models ☆105 · Updated 2 years ago
- Pretraining Efficiently on S2ORC! ☆178 · Updated last year
- Multilingual Large Language Models Evaluation Benchmark ☆133 · Updated last year
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation" ☆82 · Updated 2 weeks ago
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) ☆62 · Updated 3 years ago
- Tools for managing datasets for governance and training ☆87 · Updated 3 weeks ago
- Code, datasets, and models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆56 · Updated 2 years ago
- ☆43 · Updated last year
- ☆145 · Updated 11 months ago
- ☆43 · Updated 2 years ago
- We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in … ☆54 · Updated 2 years ago