Model Alignment is a Python library from the PAIR team that enables users to create model prompts through user feedback rather than manual prompt writing and editing. The technique uses constitutional principles to align prompts with users' desired values.
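As a rough illustration of the feedback-to-principles idea (this is a hypothetical sketch, not the model-alignment API: `feedback_to_principle` and `align_prompt` are invented names, and a real system would use an LLM to generalize feedback into principles):

```python
# Hypothetical sketch of feedback-driven prompt refinement: each piece of
# user feedback is turned into a constitutional-style principle and folded
# into the prompt, instead of the user hand-editing the prompt text.

def feedback_to_principle(feedback: str) -> str:
    # Assumption: a real implementation would call an LLM to rewrite raw
    # feedback into a general rule; here we simply phrase it as one.
    return f"- {feedback.strip().capitalize()}."

def align_prompt(base_prompt: str, feedback_items: list[str]) -> str:
    # Accumulate principles derived from feedback beneath the base prompt.
    principles = [feedback_to_principle(f) for f in feedback_items]
    if not principles:
        return base_prompt
    return base_prompt + "\n\nPrinciples:\n" + "\n".join(principles)

prompt = align_prompt(
    "Summarize the user's document.",
    ["avoid jargon", "keep summaries under 100 words"],
)
print(prompt)
```

Each round of feedback grows the principle list, so the prompt converges toward the user's values without the user ever editing prompt text directly.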
Alternatives and similar repositories for model-alignment
Users interested in model-alignment are comparing it to the libraries listed below.
- Platform for running online research experiments on human + LLM group dynamics.
- [AAAI'23] FinalMLP: An Enhanced Two-Stream MLP Model for CTR Prediction https://arxiv.org/abs/2304.00902