☆51 · Updated Mar 2, 2024
Alternatives and similar repositories for OPO
Users interested in OPO are comparing it to the libraries listed below.
- Safety-J: Evaluating Safety with Critique ☆16 · Updated Jul 28, 2024
- ☆78 · Updated May 22, 2024
- Evaluate the Quality of Critique ☆36 · Updated Jun 1, 2024
- ☆13 · Updated Jul 14, 2024
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated Aug 15, 2024
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" ☆43 · Updated Jul 19, 2024
- Code for "Mitigating Unhelpfulness in Emotional Support Conversations with Multifaceted AI Feedback" (ACL 2024 Findings) ☆16 · Updated Jul 2, 2024
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated Feb 15, 2024
- ☆58 · Updated Sep 2, 2024
- Generative Judge for Evaluating Alignment ☆248 · Updated Jan 18, 2024
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Updated Mar 6, 2025
- Collection of RLxLM experiments using minimal code ☆14 · Updated Feb 17, 2025
- Official implementation of "Extending LLMs’ Context Window with 100 Samples" ☆81 · Updated Jan 18, 2024
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆78 · Updated Oct 9, 2025
- ☆25 · Updated May 16, 2024
- ☆27 · Updated Mar 27, 2025
- Source code for "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated Feb 29, 2024
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated Jan 17, 2024
- SOTA open-source math LLM ☆335 · Updated Dec 12, 2023
- Modified beam search with periodic restarts ☆12 · Updated Sep 12, 2024
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated Mar 30, 2024
- Code for the ICLR 2024 paper "How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions" ☆71 · Updated Jun 19, 2024
- Self-Supervised Alignment with Mutual Information ☆20 · Updated May 24, 2024
- Explore what LLMs are really learning during SFT ☆28 · Updated Mar 30, 2024
- FacTool: Factuality Detection in Generative AI ☆918 · Updated Aug 19, 2024
- ☆21 · Updated Aug 19, 2024
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Updated Jul 9, 2024
- PyTorch implementation of experiments in the paper "Aligning Language Models with Human Preferences via a Bayesian Approach" ☆32 · Updated Nov 6, 2023
- I-SHEEP: Iterative Self-enHancEmEnt Paradigm of LLMs through Self-Instruct and Self-Assessment ☆17 · Updated Jan 16, 2025
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆88 · Updated May 9, 2025
- ☆124 · Updated Feb 3, 2025
- [ICLR 2026] Efficient Agent Training for Computer Use ☆139 · Updated Sep 5, 2025
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆90 · Updated Jan 29, 2024
- OpenResearcher, an advanced scientific research assistant ☆498 · Updated Oct 10, 2024
- Trending projects & awesome papers on data-centric LLM studies ☆40 · Updated May 20, 2025
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆53 · Updated Jun 24, 2024
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated May 23, 2024
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated Feb 22, 2024
- Codebase for Inference-Time Policy Adapters ☆25 · Updated Nov 3, 2023