jamqd / Group-Preference-Optimization
☆21 · Updated last year
Alternatives and similar repositories for Group-Preference-Optimization
Users interested in Group-Preference-Optimization are comparing it to the libraries listed below.
- ☆46 · Updated last year
- ☆179 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆84 · Updated 8 months ago
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆26 · Updated last year
- ☆29 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆78 · Updated 5 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆86 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆47 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆86 · Updated 7 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆67 · Updated 11 months ago
- Source code and data for ADEPT: A DEbiasing PrompT Framework (AAAI-23) ☆15 · Updated 11 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆28 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆121 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆118 · Updated last year
- Code for the Representation Bending paper ☆13 · Updated 4 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- ☆13 · Updated 4 months ago
- Official code for the paper "Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation" ☆20 · Updated last year
- ☆47 · Updated last year
- ☆30 · Updated 8 months ago
- ☆41 · Updated last year
- ☆57 · Updated 2 years ago
- AbstainQA (ACL 2024) ☆28 · Updated last year
- ☆101 · Updated 2 years ago
- ☆63 · Updated 8 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆34 · Updated 9 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆75 · Updated last year
- ☆53 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆92 · Updated last year