olly-styles / WorkBench
WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting.
☆53 · Updated last year
Alternatives and similar repositories for WorkBench
Users interested in WorkBench are comparing it to the repositories listed below
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated 3 weeks ago
- Complex Function Calling Benchmark. ☆147 · Updated 9 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆144 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Evaluating LLMs with fewer examples ☆167 · Updated last year
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆219 · Updated 5 months ago
- This project studies the performance and robustness of language models and task-adaptation methods. ☆154 · Updated last year
- ☆129 · Updated last year
- This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆205 · Updated 10 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆252 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆244 · Updated last year
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆217 · Updated 2 years ago
- Official repository for "Scaling Retrieval-Based Langauge Models with a Trillion-Token Datastore".☆218Updated last week
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆135 · Updated last year
- ☆224 · Updated last week
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆164 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting". ☆110 · Updated 5 months ago
- [NeurIPS 2023] This is the code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias". ☆155 · Updated 2 years ago
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆149 · Updated last year
- Retrieval Augmented Generation Generalized Evaluation Dataset ☆57 · Updated 3 months ago
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆206 · Updated 4 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- ☆239 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- A set of utilities for running few-shot prompting experiments on large language models ☆126 · Updated 2 years ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆282 · Updated 2 years ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆207 · Updated 11 months ago
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆217 · Updated last year
- ☆173 · Updated 2 years ago
- The first dense retrieval model that can be prompted like an LM ☆89 · Updated 6 months ago