night-chen / ToolQA
ToolQA is a dataset for evaluating the ability of LLMs to answer challenging questions with external tools. It offers two difficulty levels (easy/hard) across eight real-life scenarios.
☆277 · Updated 2 years ago
Alternatives and similar repositories for ToolQA
Users interested in ToolQA are comparing it to the repositories listed below.
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆498 · Updated 11 months ago
- Data and Code for Program of Thoughts [TMLR 2023] ☆286 · Updated last year
- Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models" ☆245 · Updated last year
- Generative Judge for Evaluating Alignment ☆246 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆247 · Updated last year
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆374 · Updated 2 weeks ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings [NeurIPS 2023, oral] ☆263 · Updated last year
- [Preprint] Learning to Filter Context for Retrieval-Augmented Generation ☆195 · Updated last year
- ToolBench, an evaluation suite for LLM tool-manipulation capabilities ☆161 · Updated last year
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆153 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆360 · Updated last year
- Datasets for Instruction Tuning of Large Language Models ☆255 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆198 · Updated 9 months ago
- Repository for the paper "Shepherd: A Critic for Language Model Generation" ☆218 · Updated 2 years ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆139 · Updated 4 months ago
- [IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection ☆89 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆312 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆281 · Updated last year
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning" ☆163 · Updated last year
- A curated list of human preference datasets for LLM fine-tuning, RLHF, and evaluation ☆378 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models) ☆353 · Updated last year
- Code for the paper "ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate" ☆301 · Updated 11 months ago
- Repository for the paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning" ☆204 · Updated 9 months ago
- [NeurIPS 2023] Codebase for the paper "Guiding Large Language Models with Directional Stimulus Prompting" ☆113 · Updated 2 years ago
- Repository for HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models ☆512 · Updated last year