nlpyang / geval
Code for paper "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment"
☆395 · Updated last year
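For context, G-Eval prompts a strong LLM (e.g. GPT-4) with evaluation criteria and chain-of-thought instructions, then combines the probabilities the model assigns to each candidate score into a probability-weighted score. Below is a minimal, hypothetical sketch of that weighted-scoring step; the function name, the 1-5 score range, and the example probabilities are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of G-Eval-style probability-weighted scoring.
# The 1-5 range and the example probabilities are assumptions for illustration.

def weighted_score(score_probs: dict[int, float]) -> float:
    """Combine the probabilities a judge LLM assigns to each score token
    (e.g. tokens "1".."5") into a single fine-grained score:
    sum_i p(s_i) * s_i, normalized in case the probabilities don't sum to 1."""
    total = sum(score_probs.values())
    return sum(score * prob for score, prob in score_probs.items()) / total

if __name__ == "__main__":
    # Example distribution over coherence scores returned by the judge model.
    probs = {1: 0.05, 2: 0.10, 3: 0.20, 4: 0.40, 5: 0.25}
    print(f"Weighted score: {weighted_score(probs):.2f}")  # 3.70
```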
Alternatives and similar repositories for geval
Users interested in geval are comparing it to the libraries listed below.
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆578 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆212 · Updated 11 months ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆530 · Updated last year
- Source Code of Paper "GPTScore: Evaluate as You Desire" ☆257 · Updated 2 years ago
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆406 · Updated 7 months ago
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆501 · Updated last year
- [ICLR 2024 & NeurIPS 2023 WS] An Evaluator LM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specifically d… ☆308 · Updated 2 years ago
- ☆294 · Updated last year
- RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Langua… ☆402 · Updated 6 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆364 · Updated last year
- This is a repository for sharing papers in the field of persona-based conversational AI. The related source code for each paper is linked… ☆168 · Updated last year
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆379 · Updated this week
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆250 · Updated 2 years ago
- ☆157 · Updated last month
- Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning ☆763 · Updated 2 years ago
- Multilingual Large Language Models Evaluation Benchmark ☆133 · Updated last year
- Benchmarking library for RAG ☆248 · Updated last month
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆524 · Updated 10 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆282 · Updated 2 years ago
- ACL 2023 - AlignScore, a metric for factual consistency evaluation. ☆144 · Updated last year
- Comprehensive benchmark for RAG ☆242 · Updated 5 months ago
- Multilingual/multidomain question generation datasets, models, and python library for question generation. ☆366 · Updated last year
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆218 · Updated last year
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆550 · Updated last week
- DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI ☆517 · Updated 10 months ago
- Calculate perplexity on a text with pre-trained language models. Supports MLM (e.g. DeBERTa), causal LM (e.g. GPT-3), and encoder-decoder … ☆162 · Updated 5 months ago (a generic perplexity sketch follows after this list)
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆384 · Updated 2 years ago
- Generative Representational Instruction Tuning ☆679 · Updated 5 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆530 · Updated last year
- ☆189 · Updated 4 months ago
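As referenced in the perplexity entry above, the sketch below shows the standard way to compute perplexity for a causal language model with Hugging Face transformers: exponentiate the mean per-token negative log-likelihood. This is a generic illustration of the definition, not the listed library's own API; the gpt2 checkpoint and the sample sentence are arbitrary choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM checkpoint works; gpt2 is chosen only because it is small.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels == input_ids, the model's `loss` is the mean token-level
    # cross-entropy, i.e. the average negative log-likelihood.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

perplexity = torch.exp(loss).item()
print(f"Perplexity: {perplexity:.2f}")
```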