princeton-nlp / InstructEval
[NAACL 2024 Findings] A suite for the systematic evaluation of instruction selection methods.
☆22 · Updated last year
Alternatives and similar repositories for InstructEval:
Users interested in InstructEval are comparing it to the libraries listed below.
- Code for the EMNLP'24 paper "On Diversified Preferences of Large Language Model Alignment" ☆15 · Updated 6 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆34 · Updated last year
- ☆33 · Updated 10 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated 11 months ago
- Code for the paper "Toward Optimal LLM Alignments Using Two-Player Games" ☆16 · Updated 7 months ago
- ☆93 · Updated last year
- [ICML 2024] Official repository for "EXO: Towards Efficient Exact Optimization of Language Model Alignment" ☆50 · Updated 8 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Evaluate the Quality of Critique ☆35 · Updated 8 months ago
- ☆13 · Updated 11 months ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆53 · Updated 5 months ago
- The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agen…" ☆23 · Updated 11 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆42 · Updated 2 months ago
- GenRM-CoT: Data release for verification rationales ☆46 · Updated 3 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆47 · Updated last week
- Lightweight tool to identify data contamination in LLM evaluation ☆46 · Updated 11 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆66 · Updated last year
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆29 · Updated 8 months ago
- Learning adapter weights from task descriptions ☆15 · Updated last year
- ☆27 · Updated 11 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆38 · Updated 6 months ago
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆37 · Updated 4 months ago
- AbstainQA, ACL 2024 ☆25 · Updated 4 months ago
- Code for the 2024 arXiv publication "Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Mo…" ☆23 · Updated 7 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- [ACL 2023 Findings] What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated last year
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 · Updated 8 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆21 · Updated 2 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆83 · Updated last year