haotiansun14 / BBox-Adapter
Lightweight Adapting for Black-Box Large Language Models
☆22 · Updated last year
Alternatives and similar repositories for BBox-Adapter
Users interested in BBox-Adapter are comparing it to the libraries listed below.
- ☆40 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆29 · Updated 5 months ago
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- Directional Preference Alignment ☆57 · Updated 9 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆26 · Updated last year
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- This is an official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆32 · Updated 9 months ago
- This is the official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting ☆19 · Updated 10 months ago
- ☆37 · Updated last year
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆24 · Updated 7 months ago
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆32 · Updated 2 months ago
- ☆26 · Updated last year
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆37 · Updated 7 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆37 · Updated last week
- Self-Supervised Alignment with Mutual Information ☆19 · Updated last year
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- ☆49 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆60 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆72 · Updated 3 months ago
- ☆13 · Updated 6 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆22 · Updated 6 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆30 · Updated last month
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆27 · Updated 2 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆48 · Updated last month
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… ☆29 · Updated 7 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆35 · Updated 7 months ago
- ☆29 · Updated last year