princeton-nlp / InstructEval
[NAACL 2024 Findings] A suite for the systematic evaluation of instruction selection methods.
☆23 · Updated last year
Alternatives and similar repositories for InstructEval:
Users interested in InstructEval are comparing it to the repositories listed below.
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆33 · Updated last year
- The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agents" ☆23 · Updated 10 months ago
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆48 · Updated last year
- ☆93 · Updated last year
- Code for the paper "On Diversified Preferences of Large Language Model Alignment" ☆15 · Updated 5 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆77 · Updated 5 months ago
- [ICLR 2024] COLLIE: Systematic Construction of Constrained Text Generation Tasks ☆52 · Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆53 · Updated 10 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 3 weeks ago
- Evaluate the Quality of Critique ☆35 · Updated 7 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆64 · Updated last year
- Accompanying code for "Boosted Prompt Ensembles for Large Language Models" ☆29 · Updated last year
- ☆33 · Updated 9 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆41 · Updated 5 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆31 · Updated 8 months ago
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- Benchmarking Benchmark Leakage in Large Language Models ☆47 · Updated 7 months ago
- ☆15 · Updated 5 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆43 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? ☆26 · Updated 7 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆51 · Updated 9 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆66 · Updated 6 months ago
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆23 · Updated last month
- Critique-out-Loud Reward Models ☆47 · Updated 3 months ago
- A trainable user simulator ☆32 · Updated 4 months ago
- ☆14 · Updated 10 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆18 · Updated last month
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated 10 months ago
- Learning adapter weights from task descriptions ☆15 · Updated last year