TIGER-AI-Lab / Program-of-Thoughts
Data and Code for Program of Thoughts [TMLR 2023]
☆282 · Updated last year
Alternatives and similar repositories for Program-of-Thoughts
Users interested in Program-of-Thoughts are comparing it to the repositories listed below.
- Generative Judge for Evaluating Alignment ☆244 · Updated last year
- ☆286 · Updated last year
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆274 · Updated 2 years ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆139 · Updated 3 months ago
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆127 · Updated last year
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆264 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆158 · Updated last year
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆267 · Updated 11 months ago
- Repository for Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions, ACL23 ☆225 · Updated last year
- Paper list on reasoning in NLP ☆191 · Updated 4 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆115 · Updated last year
- Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models" ☆244 · Updated last year
- [NeurIPS 2023] Codebase for the paper: "Guiding Large Language Models with Directional Stimulus Prompting" ☆112 · Updated 2 years ago
- ☆279 · Updated 7 months ago
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆92 · Updated last year
- ☆237 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆129 · Updated last year
- ☆140 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆347 · Updated last year
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆162 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆507 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆246 · Updated last year
- Awesome LLM Self-Consistency: a curated list of Self-consistency in Large Language Models ☆107 · Updated last month
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆495 · Updated 10 months ago
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆49 · Updated 2 years ago
- Implementation of the paper: "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆71 · Updated last year
- Datasets for Instruction Tuning of Large Language Models ☆255 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆146 · Updated 9 months ago
- Collection of papers for scalable automated alignment. ☆93 · Updated 10 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆506 · Updated 7 months ago