deeplearning-wisc / args
☆37 · Updated last year
Alternatives and similar repositories for args:
Users interested in args are comparing it to the repositories listed below.
- ☆50 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆71 · Updated 3 weeks ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations ☆22 · Updated 9 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆19 · Updated 5 months ago
- ☆93 · Updated last year
- Official implementation of Rewarded Soups ☆56 · Updated last year
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆15 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆119 · Updated 6 months ago
- Directional Preference Alignment ☆56 · Updated 6 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆52 · Updated 4 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆72 · Updated 7 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆73 · Updated this week
- ☆29 · Updated 11 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆23 · Updated 2 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆43 · Updated 8 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆62 · Updated 3 months ago
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models". https://arxiv.org/abs/2406.11233… ☆17 · Updated 7 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆35 · Updated 2 months ago
- ☆25 · Updated 10 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆99 · Updated last year
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆49 · Updated 10 months ago
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆16 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆16 · Updated 10 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated last year
- Learning adapter weights from task descriptions ☆16 · Updated last year
- General-purpose activation steering library ☆56 · Updated 3 months ago
- ☆17 · Updated 3 weeks ago
- ☆49 · Updated 7 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆28 · Updated last week