olly-styles / WorkBench
WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting.
☆54 · Updated last year
Alternatives and similar repositories for WorkBench
Users interested in WorkBench are comparing it to the repositories listed below.
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆117 · Updated last month
- Complex Function Calling Benchmark. ☆149 · Updated 10 months ago
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- ☆129 · Updated last year
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- ☆226 · Updated last month
- Code accompanying "How I learned to start worrying about prompt formatting". ☆112 · Updated 6 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆165 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆220 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters ☆277 · Updated last year
- ☆43 · Updated last year
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆156 · Updated 2 years ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆218 · Updated 5 months ago
- This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆206 · Updated 11 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆219 · Updated 2 years ago
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆191 · Updated 3 months ago
- ☆124 · Updated 9 months ago
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- ☆157 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆218 · Updated last year
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆151 · Updated last year
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆209 · Updated 5 months ago
- A set of utilities for running few-shot prompting experiments on large language models ☆126 · Updated 2 years ago
- ☆81 · Updated 3 weeks ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆142 · Updated last month
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 9 months ago