eth-sri / ChatProtect
This is the code for the paper "Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation".
☆36 · Updated last year
Alternatives and similar repositories for ChatProtect
Users that are interested in ChatProtect are comparing it to the libraries listed below
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆72 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting". ☆108 · Updated 2 months ago
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆113 · Updated last year
- Code and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref…" ☆63 · Updated 5 months ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated 10 months ago
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning". ☆162 · Updated last year
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias". ☆152 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆112 · Updated last month
- Official implementation of InstructZero, the first framework to optimize bad prompts of ChatGPT (API LLMs) and obtain good prompts… ☆195 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 8 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆115 · Updated 10 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆97 · Updated last year
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆57 · Updated last year
- ☆134 · Updated last year
- Code for "In-context Vectors: Making In-Context Learning More Effective and Controllable Through Latent Space Steering" ☆182 · Updated 5 months ago
- Code/data for MARG (multi-agent review generation) ☆47 · Updated 8 months ago
- Benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆30 · Updated 11 months ago
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆73 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) ☆47 · Updated 6 months ago
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated 10 months ago
- [ACL 2024] "Large Language Models for Automated Open-domain Scientific Hypotheses Discovery". It also received the best poster award… ☆42 · Updated 9 months ago
- Official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- A package to generate summaries of long-form text and evaluate their coherence. Official package for our ICLR 2024 paper… ☆124 · Updated 10 months ago
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆54 · Updated 3 weeks ago
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆26 · Updated 7 months ago