h2oai / h2o-LLM-eval
Large language model evaluation framework with an Elo leaderboard and A/B testing
☆52 · Updated 10 months ago
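The leaderboard described above ranks models from pairwise A/B judgments using Elo ratings. A minimal sketch of the standard Elo update follows; the function names and K-factor are illustrative assumptions, not h2o-LLM-eval's actual API:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 32.0) -> tuple[float, float]:
    """Return updated (rating_a, rating_b) after one A/B comparison.

    score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie.
    k (the K-factor) controls how far one result moves a rating.
    """
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: both models start at 1000; A wins one head-to-head comparison.
a, b = elo_update(1000.0, 1000.0, 1.0)
print(a, b)  # 1016.0 984.0
```

Because the winner's gain equals the loser's loss, the total rating mass is conserved, which keeps leaderboard positions comparable as more A/B votes accumulate.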
Alternatives and similar repositories for h2o-LLM-eval
Users interested in h2o-LLM-eval are comparing it with the repositories listed below.
- A set of utilities for running few-shot prompting experiments on large language models ☆122 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated 11 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆97 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆115 · Updated 11 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 8 months ago
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆55 · Updated last month
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆132 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆114 · Updated this week
- ☆85 · Updated 2 years ago
- ☆42 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated 10 months ago
- AuditNLG: Auditing Generative AI Language Modeling for Trustworthiness ☆102 · Updated 7 months ago
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 ☆100 · Updated 3 weeks ago
- Code accompanying "How I learned to start worrying about prompt formatting". ☆109 · Updated 2 months ago
- Code and dataset for "Learning to Solve Complex Tasks by Talking to Agents" ☆24 · Updated 3 years ago
- Reward-model framework for LLM RLHF ☆61 · Updated 2 years ago
- ☆127 · Updated 10 months ago
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆153 · Updated last year
- Code and data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering" ☆86 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods. ☆151 · Updated last year
- [EMNLP 2024] A retrieval benchmark for scientific literature search ☆94 · Updated 8 months ago
- Evaluating tool-augmented LLMs in conversational settings ☆86 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆186 · Updated last month
- Repository for the paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆204 · Updated 8 months ago
- Official code repo for "Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic Representations" ☆83 · Updated last year
- Repo for the paper "Shepherd: A Critic for Language Model Generation" ☆219 · Updated 2 years ago
- Small and efficient mathematical reasoning LLMs ☆71 · Updated last year
- Benchmark baseline for retrieval QA applications ☆115 · Updated last year
- ☆62 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) ☆80 · Updated last year