2003pro / ScaleBiO
This is the official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting
☆19 · Updated 10 months ago
Alternatives and similar repositories for ScaleBiO
Users interested in ScaleBiO are comparing it to the repositories listed below.
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆45 · Updated 7 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆31 · Updated 8 months ago
- A Sober Look at Language Model Reasoning ☆52 · Updated last week
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆25 · Updated last month
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated 11 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆22 · Updated 3 weeks ago
- Directional Preference Alignment ☆56 · Updated 8 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- ☆40 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Model (https://arxiv.org/pdf/2411.02433) ☆25 · Updated 6 months ago
- ☆49 · Updated last year
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- [ICML 2025] Official code of "AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization" ☆19 · Updated 7 months ago
- A unified platform for implementing and evaluating test-time reasoning mechanisms in Large Language Models (LLMs) ☆18 · Updated 4 months ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆55 · Updated last year
- Codebase for decoding compressed trust ☆23 · Updated last year
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆35 · Updated 3 weeks ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆16 · Updated 4 months ago
- ☆38 · Updated 2 months ago
- Official code repository for "AutoScale: Automatic Prediction of Compute-optimal Data Compositions for Training LLMs" ☆12 · Updated 4 months ago
- Model merging is a highly efficient approach for long-to-short reasoning ☆56 · Updated this week
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆28 · Updated 4 months ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated last week
- Mosaic IT: Enhancing Instruction Tuning with Data Mosaics ☆18 · Updated 3 months ago
- [NAACL 2025] Official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M…" ☆26 · Updated last year
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆31 · Updated last month
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆16 · Updated last year
- ☆41 · Updated 8 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year