night-chen / ToolQA
ToolQA is a new dataset for evaluating the capabilities of LLMs in answering challenging questions with external tools. It offers two difficulty levels (easy/hard) across eight real-life scenarios.
☆254 · Updated last year
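Below is a minimal sketch of how one might load and inspect ToolQA questions for a single scenario and difficulty level. The directory layout (`data/questions/<level>/`), file naming, and the JSONL field names (`qid`, `question`, `answer`) are assumptions made for illustration; check the ToolQA repository for the actual data format.

```python
# Minimal sketch: load ToolQA questions for one scenario and difficulty level.
# NOTE: paths and field names below are assumptions for illustration only;
# consult the ToolQA repository for the real data layout.
import json
from pathlib import Path

def load_questions(root: str, level: str = "easy", scenario: str = "flight"):
    """Read a JSONL question file for the given difficulty level and scenario."""
    path = Path(root) / "data" / "questions" / level / f"{scenario}-{level}.jsonl"
    questions = []
    with path.open() as f:
        for line in f:
            # Each line is assumed to be one JSON object,
            # e.g. {"qid": ..., "question": ..., "answer": ...}
            questions.append(json.loads(line))
    return questions

if __name__ == "__main__":
    qs = load_questions("ToolQA", level="easy", scenario="flight")
    print(f"Loaded {len(qs)} questions")
    print(qs[0]["question"], "->", qs[0]["answer"])
```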
Alternatives and similar repositories for ToolQA:
Users interested in ToolQA are comparing it to the repositories listed below.
- ☆275 · Updated last year
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆477 · Updated 5 months ago
- Repository for Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions, ACL23 ☆192 · Updated 9 months ago
- Generative Judge for Evaluating Alignment ☆230 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ☆247 · Updated last year
- Data and Code for Program of Thoughts (TMLR 2023) ☆263 · Updated 9 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆259 · Updated 10 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆334 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆156 · Updated 3 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆137 · Updated 8 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆294 · Updated 6 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆536 · Updated 3 months ago
- Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models" ☆235 · Updated 10 months ago
- FireAct: Toward Language Agent Fine-tuning ☆271 · Updated last year
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆444 · Updated last year
- ☆271 · Updated 2 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆232 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆331 · Updated last year
- YuLan-IR: Information Retrieval Boosted LMs ☆218 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆469 · Updated last month
- Source Code of Paper "GPTScore: Evaluate as You Desire" ☆242 · Updated 2 years ago
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆151 · Updated last year
- Codes for our paper "ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate" ☆263 · Updated 4 months ago
- Evaluating LLMs' multi-round chatting capability via assessing conversations generated by two LLM instances. ☆145 · Updated last year
- [Preprint] Learning to Filter Context for Retrieval-Augmented Generation ☆190 · Updated 11 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆149 · Updated last year
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆243 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆288 · Updated 9 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆468 · Updated 8 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆136 · Updated 4 months ago