nlpyang / geval
Code for paper "G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment"
☆342 · Updated last year
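One detail of the G-Eval paper worth noting: instead of taking the judge model's single sampled rating, it forms the final score as a probability-weighted sum over the candidate rating tokens. A minimal sketch of that aggregation step, using hypothetical probabilities (not the repository's actual API):

```python
def geval_score(rating_probs):
    """Probability-weighted rating, in the spirit of G-Eval's scoring.

    rating_probs: hypothetical mapping of rating -> probability, taken
    from the judge model's output-token distribution.
    """
    total = sum(rating_probs.values())  # normalize in case probs don't sum to 1
    return sum(r * p for r, p in rating_probs.items()) / total

# Hypothetical distribution over a 1-5 coherence scale
print(geval_score({1: 0.0, 2: 0.1, 3: 0.2, 4: 0.5, 5: 0.2}))  # ~3.8
```

The weighted sum yields a continuous score, which correlates better with human judgments than a single discrete rating.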
Alternatives and similar repositories for geval
Users interested in geval are comparing it to the libraries listed below.
- Source code for the paper "GPTScore: Evaluate as You Desire" ☆249 · Updated 2 years ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆524 · Updated 11 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆180 · Updated 5 months ago
- Repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models ☆474 · Updated last year
- A package to evaluate the factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆350 · Updated last month
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆263 · Updated last year
- [ICLR 2024 & NeurIPS 2023 WS] An evaluator LM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specifically d… ☆300 · Updated last year
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆486 · Updated 7 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆344 · Updated last year
- [ACL 2023] AlignScore, a metric for factual consistency evaluation ☆129 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆493 · Updated 11 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆241 · Updated last year
- BARTScore: Evaluating Generated Text as Text Generation ☆350 · Updated 2 years ago
- Evaluate your LLM's responses with Prometheus and GPT-4 💯 ☆948 · Updated last month
- A Survey of Attributions for Large Language Models ☆202 · Updated 9 months ago
- Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning ☆737 · Updated 2 years ago
- Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models" ☆239 · Updated last year
- ☆281 · Updated last year
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆248 · Updated 11 months ago
- A curated list of human preference datasets for LLM fine-tuning, RLHF, and evaluation ☆360 · Updated last year
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆362 · Updated this week
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆526 · Updated 4 months ago
- ☆109 · Updated 2 months ago
- ☆175 · Updated 2 years ago
- Code and model release for the paper "Task-aware Retrieval with Instructions" by Asai et al. ☆162 · Updated last year
- TruthfulQA: Measuring How Models Imitate Human Falsehoods ☆734 · Updated 4 months ago
- Automated Evaluation of RAG Systems ☆596 · Updated 2 months ago
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆216 · Updated last year
- Benchmarking library for RAG ☆205 · Updated last week
- Calculate perplexity on a text with pre-trained language models. Supports MLM (e.g. DeBERTa), recurrent LM (e.g. GPT3), and encoder-decoder … ☆156 · Updated 8 months ago
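The last entry computes perplexity, which reduces to the exponential of the negative mean token log-probability. A minimal stand-alone sketch of the formula itself (not that repository's API):

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp(-(1/N) * sum of per-token log-probabilities)
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model assigning uniform probability 1/4 to every token has
# perplexity 4, regardless of sequence length.
print(perplexity([math.log(0.25)] * 10))  # ~4.0
```

In practice the per-token log-probabilities come from a pretrained LM's output; lower perplexity means the model finds the text more predictable.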