mingkaid / rl-prompt
Accompanying repo for the RLPrompt paper
☆349 · Updated last year
Alternatives and similar repositories for rl-prompt
Users interested in rl-prompt are comparing it to the libraries listed below.
- ☆280 · Updated 8 months ago
- MEND: Fast Model Editing at Scale ☆249 · Updated 2 years ago
- Prod Env ☆429 · Updated last year
- ☆176 · Updated last year
- Source Code of Paper "GPTScore: Evaluate as You Desire" ☆255 · Updated 2 years ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆504 · Updated last year
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆167 · Updated 4 years ago
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆378 · Updated last year
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆163 · Updated last year
- Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models" ☆245 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆545 · Updated 7 months ago
- This repository contains a collection of papers and resources on Reasoning in Large Language Models. ☆564 · Updated last year
- ☆240 · Updated 2 years ago
- ICML'2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP'2022: BBTv2: Towards a Gradient-Free Future with Large Language Model… ☆270 · Updated 2 years ago
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆377 · Updated 5 months ago
- Simple next-token-prediction for RLHF ☆227 · Updated last year
- Data and code for the ICLR 2023 paper "Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning" ☆156 · Updated last year
- ☆350 · Updated 4 years ago
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated 2 years ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆291 · Updated 6 months ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆573 · Updated last year
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi ☆270 · Updated 2 years ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆359 · Updated last year
- Few-shot Learning of GPT-3 ☆355 · Updated last year
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆209 · Updated 2 years ago
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task. ☆148 · Updated 10 months ago
- Multi-agent Social Simulation + Efficient, Effective, and Stable alternative of RLHF. Code for the paper "Training Socially Aligned Langu… ☆353 · Updated 2 years ago
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback. ☆546 · Updated 10 months ago
- ☆159 · Updated 2 years ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆511 · Updated last year