felipemaiapolo / prompteval
Efficient multi-prompt evaluation of LLMs
☆24 · Updated last year
Alternatives and similar repositories for prompteval
Users interested in prompteval are comparing it to the repositories listed below.
- Discovering Data-driven Hypotheses in the Wild ☆120 · Updated 6 months ago
- Optimize Any User-defined Compound AI Systems ☆63 · Updated 3 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- ☆52 · Updated 8 months ago
- Code for Language-Interfaced FineTuning for Non-Language Machine Learning Tasks ☆133 · Updated last year
- Official repo for "SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency" ☆37 · Updated 10 months ago
- Repo for constructing a comprehensive and rigorous evaluation framework for LLM calibration ☆13 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al., COLM 2024) ☆48 · Updated 10 months ago
- Using Explanations as a Tool for Advanced LLMs ☆69 · Updated last year
- Codebase for the paper "The Remarkable Robustness of LLMs: Stages of Inference?" ☆19 · Updated 6 months ago
- Data and code for the Corr2Cause paper (ICLR 2024) ☆111 · Updated last year
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding ☆44 · Updated 8 months ago
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- Interpreting the latent-space representations of attention-head outputs for LLMs ☆34 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆29 · Updated 9 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆151 · Updated 5 months ago
- Data, code, and models for contextual noncompliance ☆24 · Updated last year
- Conformal Language Modeling ☆32 · Updated last year
- Interpretable and efficient predictors using pre-trained language models; scikit-learn compatible ☆44 · Updated last month
- ☆32 · Updated last year
- Run safety benchmarks against AI models and view detailed reports showing how well they performed ☆112 · Updated this week
- ☆19 · Updated 4 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆101 · Updated last year
- A mechanistic approach for understanding and detecting factual errors of large language models ☆49 · Updated last year
- ☆43 · Updated 10 months ago
- Codebase accompanying the "Summary of a Haystack" paper ☆79 · Updated last year
- Foundation Models for Data Tasks ☆110 · Updated 2 years ago
- PASTA: Post-hoc Attention Steering for LLMs ☆130 · Updated last year