2003pro / ScaleBiO
This is the official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting
☆19 · Updated 10 months ago
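The description names the technique but does not spell it out; the sketch below illustrates the general bilevel data-reweighting idea on a toy problem using a first-order (one-step) hypergradient approximation. All names, the toy data, and the update rule are assumptions made for illustration here, not ScaleBiO's actual algorithm or this repository's API.

```python
# Minimal first-order sketch of bilevel data reweighting on a toy problem.
# NOT the ScaleBiO algorithm or this repository's API -- everything here is
# a hypothetical illustration. Inner level: SGD on a weighted mixture of
# per-source training losses. Outer level: shift weight toward sources whose
# gradients align with the validation gradient (one-step hypergradient approx.).
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, n_sources, lr_model, lr_weights = 16, 3, 1e-2, 1e-1
model = nn.Linear(dim, 1)
log_w = torch.zeros(n_sources)        # softmax-parameterized source weights
loss_fn = nn.MSELoss()

def sample(source, n=64):
    # Each toy "source" predicts a different input coordinate; only source 0
    # matches the validation task, so its weight should grow over time.
    x = torch.randn(n, dim)
    y = x[:, source:source + 1] + 0.1 * torch.randn(n, 1)
    return x, y

x_val, y_val = sample(0, 256)         # validation set defines the outer objective

def flat_grad(loss):
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

for step in range(200):
    w = torch.softmax(log_w, dim=0)

    # Per-source training gradients at the current parameters.
    source_grads = []
    for s in range(n_sources):
        x, y = sample(s)
        source_grads.append(flat_grad(loss_fn(model(x), y)))

    # Inner step: manual SGD on the weighted mixture gradient.
    mixture = sum(w[s] * source_grads[s] for s in range(n_sources))
    with torch.no_grad():
        offset = 0
        for p in model.parameters():
            k = p.numel()
            p -= lr_model * mixture[offset:offset + k].view_as(p)
            offset += k

    # Outer step: increase the (log-)weight of sources whose gradient points
    # in the same direction as the validation gradient.
    val_grad = flat_grad(loss_fn(model(x_val), y_val))
    for s in range(n_sources):
        log_w[s] += lr_weights * torch.dot(source_grads[s], val_grad)

print("learned source weights:", torch.softmax(log_w, dim=0).tolist())
```

In this sketch the outer objective is the validation loss and the inner objective is the weighted training loss; the weight update direction comes from differentiating the validation loss through a single inner SGD step.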
Alternatives and similar repositories for ScaleBiO
Users interested in ScaleBiO are comparing it to the repositories listed below.
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆45 · Updated 8 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆32 · Updated 9 months ago
- ☆49 · Updated last year
- Directional Preference Alignment ☆57 · Updated 9 months ago
- ☆40 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆27 · Updated 2 months ago
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- ☆26 · Updated last year
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆26 · Updated 6 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆80 · Updated 10 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆30 · Updated last month
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆34 · Updated 5 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆37 · Updated last week
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆55 · Updated last year
- Codebase for decoding compressed trust ☆24 · Updated last year
- ☆41 · Updated 8 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆24 · Updated 7 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆25 · Updated 9 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆29 · Updated 5 months ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆45 · Updated 8 months ago
- ☆33 · Updated 9 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆88 · Updated 8 months ago
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆25 · Updated last year
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆25 · Updated last week
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆72 · Updated 2 years ago
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆26 · Updated last year