☆98 · May 30, 2023 · Updated 2 years ago
Alternatives and similar repositories for reward-modeling
Users that are interested in reward-modeling are comparing it to the libraries listed below.
- A repository for transformer critique learning and generation ☆89 · Dec 7, 2023 · Updated 2 years ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆91 · Nov 23, 2022 · Updated 3 years ago
- ☆35 · Jan 29, 2023 · Updated 3 years ago
- ☆14 · Aug 15, 2024 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · May 14, 2023 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,745 · Jan 8, 2024 · Updated 2 years ago
- RL algorithm: Advantage-induced policy alignment ☆66 · Aug 11, 2023 · Updated 2 years ago
- [ICLR 2024] This is the official implementation for the paper: "Beyond imitation: Leveraging fine-grained quality signals for alignment" ☆10 · May 5, 2024 · Updated 2 years ago
- Implementation of ChatGPT RLHF (Reinforcement Learning with Human Feedback) on any generation model in Hugging Face's transformers (blommz-…) ☆564 · Apr 23, 2026 · Updated 2 weeks ago
- ZYN: Zero-Shot Reward Models with Yes-No Questions ☆35 · Aug 15, 2023 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ☆2,387 · Mar 1, 2024 · Updated 2 years ago
- ☆158 · Mar 18, 2023 · Updated 3 years ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆181 · Feb 13, 2024 · Updated 2 years ago
- ☆19 · Jan 11, 2024 · Updated 2 years ago
- Here we collect trick questions and failed tasks for open source LLMs to improve them. ☆32 · Apr 20, 2023 · Updated 3 years ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP 2024) ☆27 · Oct 3, 2025 · Updated 7 months ago
- A Benchmark Dataset for Multimodal Scientific Fact Checking ☆27 · Sep 17, 2024 · Updated last year
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,424 · Mar 3, 2024 · Updated 2 years ago
- ☆12 · Jan 17, 2025 · Updated last year
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Jun 26, 2021 · Updated 4 years ago
- Code for the paper "Fine-Tuning Language Models from Human Preferences" ☆1,389 · Jul 25, 2023 · Updated 2 years ago
- Simple next-token prediction for RLHF ☆229 · Sep 30, 2023 · Updated 2 years ago
- One-stop shop for all things carp ☆59 · Sep 9, 2022 · Updated 3 years ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,840 · Jun 17, 2025 · Updated 10 months ago
- Open-source implementation of InstructGPT (not finished) ☆31 · Apr 13, 2023 · Updated 3 years ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆211 · Jul 31, 2023 · Updated 2 years ago
- An original implementation of the paper "CREPE: Open-Domain Question Answering with False Presuppositions" ☆16 · Nov 5, 2024 · Updated last year
- Pretraining summarization models using a corpus of nonsense ☆13 · Sep 28, 2021 · Updated 4 years ago
- A Data Source for Reasoning Embodied Agents ☆19 · Sep 18, 2023 · Updated 2 years ago
- PyTorch implementation of OpenAI's Procgen PPO baseline, built from scratch. ☆14 · May 17, 2024 · Updated last year
- Repository for "Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators" ☆12 · Mar 25, 2025 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆170 · Sep 18, 2025 · Updated 7 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Jan 12, 2024 · Updated 2 years ago
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆174 · Apr 7, 2023 · Updated 3 years ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pretraining extends the model's context limit ☆63 · Jun 21, 2023 · Updated 2 years ago
- Fine-tuning llama2 using a chain-of-thought approach ☆10 · Nov 18, 2023 · Updated 2 years ago
- Platform- and API-agnostic library for powering chatbots ☆23 · Feb 27, 2023 · Updated 3 years ago
- Self-contained PyTorch implementation of a Sinkhorn-based router, for mixture of experts or otherwise ☆40 · Aug 29, 2024 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆307 · Mar 1, 2023 · Updated 3 years ago