☆253 · Dec 21, 2022 · Updated 3 years ago
Alternatives and similar repositories for ConstitutionalHarmlessnessPaper
Users that are interested in ConstitutionalHarmlessnessPaper are comparing it to the libraries listed below.
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,832 · Jun 17, 2025 · Updated 9 months ago
- Hypercorn is an ASGI and WSGI server based on Hyper libraries and inspired by Gunicorn. ☆14 · Jan 12, 2026 · Updated 2 months ago
- ☆336 · Jul 2, 2024 · Updated last year
- ☆75 · Nov 3, 2023 · Updated 2 years ago
- ☆158 · Mar 18, 2023 · Updated 3 years ago
- All-in-one repository for Fine-tuning & Pretraining (Large) Language Models ☆15 · Mar 8, 2023 · Updated 3 years ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,591 · Nov 24, 2025 · Updated 4 months ago
- Development repository for the Triton language and compiler ☆21 · Sep 17, 2025 · Updated 6 months ago
- [ICLR 2024] COLLIE: Systematic Construction of Constrained Text Generation Tasks ☆60 · Aug 2, 2023 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆229 · Sep 30, 2023 · Updated 2 years ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆842 · Jul 1, 2024 · Updated last year
- ☆12 · Oct 23, 2022 · Updated 3 years ago
- MetricEval: A framework that conceptualizes and operationalizes four main components of metric evaluation, in terms of reliability and va… ☆12 · Nov 6, 2023 · Updated 2 years ago
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆108 · Mar 8, 2024 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆170 · Sep 18, 2025 · Updated 6 months ago
- ☆13 · Mar 25, 2022 · Updated 3 years ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,961 · Aug 9, 2025 · Updated 7 months ago
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆29 · Dec 19, 2023 · Updated 2 years ago
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆48 · Dec 23, 2024 · Updated last year
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Apr 26, 2023 · Updated 2 years ago
- PyTorch implementation of experiments in the paper "Aligning Language Models with Human Preferences via a Bayesian Approach" ☆32 · Nov 6, 2023 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ☆2,383 · Mar 1, 2024 · Updated 2 years ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,587 · Mar 27, 2023 · Updated 2 years ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Aug 18, 2023 · Updated 2 years ago
- Expanding natural instructions ☆1,037 · Dec 11, 2023 · Updated 2 years ago
- ☆284 · Jan 6, 2025 · Updated last year
- Data and code for the paper "The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems" ☆21 · Jul 18, 2023 · Updated 2 years ago
- Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Langu… ☆355 · Jun 18, 2023 · Updated 2 years ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,106 · Jun 1, 2023 · Updated 2 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆464 · Nov 5, 2022 · Updated 3 years ago
- Dromedary: towards helpful, ethical, and reliable LLMs ☆1,144 · Sep 18, 2025 · Updated 6 months ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆180 · Feb 13, 2024 · Updated 2 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,742 · Jan 8, 2024 · Updated 2 years ago
- Tasks for describing differences between text distributions. ☆17 · Aug 9, 2024 · Updated last year
- Explore what LLMs are really learning during SFT ☆28 · Mar 30, 2024 · Updated last year
- ☆26 · Sep 5, 2024 · Updated last year
- [EACL 2023] CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification ☆42 · Apr 29, 2023 · Updated 2 years ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Jun 19, 2024 · Updated last year
- Repository for the paper "On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark" ☆24 · Aug 13, 2022 · Updated 3 years ago