☆50 · updated Mar 2, 2024
Alternatives and similar repositories for OPO
Users that are interested in OPO are comparing it to the libraries listed below.
- Safety-J: Evaluating Safety with Critique (☆16 · updated Jul 28, 2024)
- Evaluate the Quality of Critique (☆37 · updated Jun 1, 2024)
- ☆13 · updated Jul 14, 2024
- BeHonest: Benchmarking Honesty in Large Language Models (☆35 · updated Aug 15, 2024)
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" (☆44 · updated Jul 19, 2024)
- Scalable Meta-Evaluation of LLMs as Evaluators (☆43 · updated Feb 15, 2024)
- ☆59 · updated Sep 2, 2024
- Generative Judge for Evaluating Alignment (☆249 · updated Jan 18, 2024)
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI (☆106 · updated Mar 6, 2025)
- Collections of RLxLM experiments using minimal code (☆14 · updated Feb 17, 2025)
- Official implementation for "Extending LLMs' Context Window with 100 Samples" (☆82 · updated Jan 18, 2024)
- ☆16 · updated Mar 22, 2025
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy (☆78 · updated Oct 9, 2025)
- ☆25 · updated May 16, 2024
- ☆28 · updated Mar 27, 2025
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" (☆58 · updated Feb 29, 2024)
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" (☆48 · updated Jan 17, 2024)
- SOTA open-source math LLM (☆335 · updated Dec 12, 2023)
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) (☆62 · updated Mar 30, 2024)
- Self-Supervised Alignment with Mutual Information (☆20 · updated May 24, 2024)
- Explore what LLMs are really learning during SFT (☆28 · updated Mar 30, 2024)
- FacTool: Factuality Detection in Generative AI (☆926 · updated Aug 19, 2024)
- ☆21 · updated Aug 19, 2024
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization (☆29 · updated Jul 9, 2024)
- PyTorch implementation of experiments in the paper "Aligning Language Models with Human Preferences via a Bayesian Approach" (☆32 · updated Nov 6, 2023)
- Source code for "Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts" (☆17 · updated Sep 2, 2024)
- I-SHEEP: Iterative Self-enHancEmEnt Paradigm of LLMs through Self-Instruct and Self-Assessment (☆17 · updated Jan 16, 2025)
- ☆129 · updated Feb 3, 2025
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs (☆90 · updated Jan 29, 2024)
- "Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" b… (☆45 · updated Mar 18, 2024)
- Trending projects & awesome papers about data-centric LLM studies (☆40 · updated May 20, 2025)
- OpenResearcher, an advanced Scientific Research Assistant (☆504 · updated Oct 10, 2024)
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L…" (☆53 · updated Jun 24, 2024)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆97 · updated May 23, 2024)
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" (☆17 · updated Feb 22, 2024)
- Codebase for Inference-Time Policy Adapters (☆25 · updated Nov 3, 2023)
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking" (☆48 · updated Jul 12, 2024)
- [NDSS'25] Official implementation of safety misalignment (☆19 · updated Jan 8, 2025)
- ☆17 · updated Feb 22, 2024