qcri / LLMeBench
Benchmarking Large Language Models
☆99 · Updated 3 weeks ago
Alternatives and similar repositories for LLMeBench
Users interested in LLMeBench are comparing it to the libraries listed below.
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆131 · Updated last year
- Resources for cultural NLP research ☆98 · Updated 2 months ago
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆86 · Updated 11 months ago
- AuditNLG: Auditing Generative AI Language Modeling for Trustworthiness ☆101 · Updated 5 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆191 · Updated 7 months ago
- Code for Multilingual Eval of Generative AI paper published at EMNLP 2023 ☆70 · Updated last year
- Dataset from the paper "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering" (COLING 2022) ☆114 · Updated 2 years ago
- A Multilingual Replicable Instruction-Following Model ☆94 · Updated 2 years ago
- ☆41 · Updated 5 months ago
- ☆78 · Updated 9 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated 9 months ago
- A curated list of research papers and resources on Cultural LLM. ☆45 · Updated 9 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- What's In My Big Data (WIMBD): a toolkit for analyzing large text datasets ☆221 · Updated 8 months ago
- Finetune mistral-7b-instruct for sentence embeddings ☆85 · Updated last year
- Code and data accompanying the paper "TRUE: Re-evaluating Factual Consistency Evaluation". ☆81 · Updated 3 weeks ago
- Token-level Reference-free Hallucination Detection ☆94 · Updated last year
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆45 · Updated last year
- Code, datasets, models for the paper "Automatic Evaluation of Attribution by Large Language Models" ☆56 · Updated 2 years ago
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) ☆103 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆105 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆97 · Updated last year
- A multi-purpose toolkit for table-to-text generation: web interface, Python bindings, CLI commands. ☆55 · Updated last year
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆53 · Updated 7 months ago
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning ☆46 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods. ☆150 · Updated last year
- ☆52 · Updated last year
- Multilingual Large Language Models Evaluation Benchmark ☆127 · Updated 10 months ago
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation. ☆102 · Updated last year