tianjunz / TEMPERA
☆42 · Updated last year
Alternatives and similar repositories for TEMPERA:
Users interested in TEMPERA are comparing it to the repositories listed below.
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆42 · Updated 6 months ago
- [NeurIPS 2024] Official code of "$\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$" ☆39 · Updated 3 months ago
- Official repo for "Towards Uncertainty-Aware Language Agent" ☆24 · Updated 6 months ago
- Directional Preference Alignment ☆56 · Updated 4 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆15 · Updated last month
- ☆34 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆66 · Updated last year
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆108 · Updated last year
- ☆25 · Updated last year
- ☆12 · Updated 2 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆27 · Updated 10 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆34 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆52 · Updated 4 months ago
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 3 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆19 · Updated 3 weeks ago
- Lightweight Adapting for Black-Box Large Language Models ☆19 · Updated last year
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆55 · Updated last year
- Implementation of the methods described in our paper "Explicit Planning Helps Language Models in Logical Reasoning" ☆22 · Updated last year
- ☆25 · Updated 8 months ago
- Self-Supervised Alignment with Mutual Information ☆16 · Updated 8 months ago
- ☆15 · Updated 5 months ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆40 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆115 · Updated 5 months ago
- A Survey on the Honesty of Large Language Models ☆53 · Updated 2 months ago
- Official code for the paper "Prompt Injection: Parameterization of Fixed Inputs" ☆32 · Updated 5 months ago
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs ☆34 · Updated last year
- Mosaic IT: Enhancing Instruction Tuning with Data Mosaics ☆17 · Updated last week
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆62 · Updated 3 months ago
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆37 · Updated 4 months ago
- ☆21 · Updated 7 months ago