thunlp / ToolLearningPapersLinks
☆916 · Updated last year
Alternatives and similar repositories for ToolLearningPapers
Users interested in ToolLearningPapers are comparing it to the repositories listed below.
- ☆922 · Updated last year
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆992 · Updated 8 months ago
- ☆770 · Updated last year
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,071 · Updated 3 months ago
- Paper collection on building and evaluating language model agents via executable language grounding ☆363 · Updated last year
- Papers related to LLM agents published at top conferences ☆320 · Updated 9 months ago
- [NIPS2023] RRHF & Wombat ☆808 · Updated 2 years ago
- Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs… ☆601 · Updated last month
- [ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future ☆486 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,473 · Updated 2 years ago
- Paper List for In-context Learning 🌷 ☆873 · Updated last year
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,039 · Updated last year
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆284 · Updated 2 years ago
- This is the repository for the Tool Learning survey. ☆474 · Updated 5 months ago
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆507 · Updated last year
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,413 · Updated last year
- ☆552 · Updated last year
- Aligning Large Language Models with Human: A Survey ☆742 · Updated 2 years ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,573 · Updated last month
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,590 · Updated 7 months ago
- A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,098 · Updated 2 years ago
- Reading list of instruction tuning. A trend starting from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆767 · Updated 2 years ago
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,611 · Updated 2 years ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆580 · Updated last year
- An Awesome Collection for LLM Survey ☆383 · Updated 7 months ago
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback. ☆563 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆544 · Updated last year
- LongBench v2 and LongBench (ACL 2025 & 2024) ☆1,074 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,766 · Updated last year
- [TMLR] Cumulative Reasoning With Large Language Models (https://arxiv.org/abs/2308.04371) ☆308 · Updated 5 months ago