facebookresearch / Shepherd
This is the repo for the paper "Shepherd: A Critic for Language Model Generation".
☆220 · Updated 2 years ago
Alternatives and similar repositories for Shepherd
Users interested in Shepherd are comparing it to the libraries listed below.
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆252 · Updated 2 years ago
- Reverse Instructions to generate instruction tuning data with corpus examples ☆216 · Updated last year
- ☆277 · Updated 2 years ago
- Code for the arXiv 2023 paper "Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback" ☆208 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 3 months ago
- ☆159 · Updated 2 years ago
- Official codebase for "SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation" ☆228 · Updated 2 years ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆283 · Updated 2 years ago
- ☆180 · Updated 2 years ago
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆117 · Updated 2 years ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year
- ☆313 · Updated last year
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning". ☆165 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆301 · Updated 10 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆167 · Updated last year
- Scaling Data-Constrained Language Models ☆342 · Updated 5 months ago
- Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models" ☆248 · Updated last year
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆180 · Updated last year
- ☆249 · Updated 3 years ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Updated last year
- Open Instruction Generalist, an assistant trained on massive synthetic instructions to perform many millions of tasks ☆210 · Updated last year
- ☆98 · Updated 2 years ago
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias". ☆156 · Updated 2 years ago
- ☆129 · Updated last year
- ☆185 · Updated 10 months ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 5 months ago
- Datasets for Instruction Tuning of Large Language Models ☆260 · Updated 2 years ago
- FireAct: Toward Language Agent Fine-tuning ☆287 · Updated 2 years ago