Re-Align / just-eval
A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs.
☆79 · Updated 11 months ago
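The core idea behind a tool like this is a single judge-model call that scores an answer along several aspects and gives a short reason for each score. The snippet below is a minimal sketch of that idea only, assuming the official `openai` Python client (v1+); the aspect names, prompt, and `judge` helper are illustrative and are not just-eval's actual API.

```python
# Hypothetical sketch of multi-aspect, GPT-based evaluation in the spirit of just-eval.
# This is NOT the library's own API; it only illustrates the judging pattern.
import json
from openai import OpenAI  # assumes the official openai>=1.0 client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASPECTS = ["helpfulness", "clarity", "factuality", "depth"]  # illustrative aspect names

JUDGE_PROMPT = """You are evaluating an AI assistant's answer.

Question:
{question}

Answer:
{answer}

For each aspect in {aspects}, give a score from 1 to 5 and a one-sentence reason.
Respond with a JSON object mapping each aspect to {{"score": int, "reason": str}}."""


def judge(question: str, answer: str, model: str = "gpt-4o-mini") -> dict:
    """Score one (question, answer) pair on every aspect with a single judge call."""
    prompt = JUDGE_PROMPT.format(question=question, answer=answer, aspects=ASPECTS)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        response_format={"type": "json_object"},  # ask the model for parseable JSON
    )
    return json.loads(resp.choices[0].message.content)


if __name__ == "__main__":
    scores = judge("What causes tides?", "Mostly the Moon's gravity, plus the Sun's.")
    print(json.dumps(scores, indent=2))
```

Keeping both the numeric score and the one-sentence reason per aspect is what makes this style of evaluation interpretable rather than a single opaque rating.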
Alternatives and similar repositories for just-eval:
Users interested in just-eval are comparing it to the repositories listed below.
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated 11 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated 10 months ago
- ☆48 · Updated 10 months ago
- This is the official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" ☆90 · Updated last month
- Reformatted Alignment ☆113 · Updated 3 months ago
- InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆64 · Updated 2 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't…" ☆106 · Updated 6 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆129 · Updated 2 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆41 · Updated last month
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" ☆97 · Updated 6 months ago
- Self-Alignment with Principle-Following Reward Models ☆150 · Updated 10 months ago
- Code for the ICLR 2024 paper "CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets" ☆50 · Updated 7 months ago
- ☆56 · Updated 4 months ago
- ☆64 · Updated 11 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆88 · Updated 3 months ago
- FuseAI Project ☆76 · Updated last month
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆58 · Updated 2 months ago
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 3 months ago
- Sotopia-π: Interactive Learning of Socially Intelligent Language Agents (ACL 2024) ☆55 · Updated 8 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆120 · Updated 6 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆40 · Updated 3 weeks ago
- Code implementation of synthetic continued pretraining ☆79 · Updated last week
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 3 weeks ago
- Lightweight tool to identify data contamination in LLM evaluation ☆45 · Updated 10 months ago
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆70 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 3 months ago
- [ACL'24] Code and data of the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆53 · Updated 10 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆77 · Updated 5 months ago
- ☆93 · Updated 3 months ago