luli-git / MAP
MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation
☆13 · Updated last year
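As the title suggests, MAP searches for merging coefficients that trade off multiple task metrics, fitting a cheap quadratic surrogate to each metric so the Pareto front can be read off the surrogates instead of evaluating every merged model. The sketch below is a minimal Python illustration of that idea, not the repository's actual API: `evaluate_merged` is a hypothetical stand-in for the expensive merge-and-evaluate step, and the single scalar coefficient, the degree-2 polyfit, and the grid scan are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_merged(w):
    # Hypothetical stand-in for "merge two fine-tuned checkpoints with
    # coefficient w, then evaluate on two tasks" -- the expensive step
    # whose cost is being amortized. Here: two noisy concave toy metrics.
    acc_a = 0.9 - 0.5 * (w - 0.3) ** 2 + rng.normal(0, 0.005)
    acc_b = 0.8 - 0.4 * (w - 0.7) ** 2 + rng.normal(0, 0.005)
    return acc_a, acc_b

# 1) Evaluate only a handful of merging coefficients for real.
ws = np.linspace(0.0, 1.0, 7)
scores = np.array([evaluate_merged(w) for w in ws])  # shape (7, 2)

# 2) Fit a degree-2 (quadratic) surrogate per task metric.
surrogates = [np.polyfit(ws, scores[:, t], deg=2) for t in range(2)]

# 3) Scan the surrogates densely (cheap) and keep Pareto-optimal points:
#    a point survives if no other point is >= on both metrics and > on one.
grid = np.linspace(0.0, 1.0, 201)
preds = np.stack([np.polyval(c, grid) for c in surrogates], axis=1)
pareto = [i for i, p in enumerate(preds)
          if not any(np.all(q >= p) and np.any(q > p) for q in preds)]
for i in pareto[::max(1, len(pareto) // 5)]:
    print(f"w={grid[i]:.2f}  task A ~ {preds[i, 0]:.3f}  task B ~ {preds[i, 1]:.3f}")
```

In the full method the coefficient is presumably a vector over several source models and the surrogate a multivariate quadratic, but the amortization logic sketched here is the same: pay for a few real evaluations, then explore the trade-off space on the surrogate for free.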
Alternatives and similar repositories for MAP
Users interested in MAP are comparing it to the repositories listed below:
- [NeurIPS 2024] For paper Parameter Competition Balancing for Model Merging ☆47 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆196 · Updated last year
- A curated list of Model Merging methods. ☆94 · Updated last week
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆93 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆149 · Updated 10 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆64 · Updated last year
- Codes for Merging Large Language Models ☆34 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆36 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆49 · Updated last year
- ☆151 · Updated last year
- Official implementation of ICLR'2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 8 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆57 · Updated last year
- Code for the paper (ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models) ☆199 · Updated last year
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆47 · Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆128 · Updated 9 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆51 · Updated last year
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆38 · Updated last year
- A Sober Look at Language Model Reasoning ☆89 · Updated 3 weeks ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆192 · Updated 10 months ago
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large … ☆97 · Updated 11 months ago
- This is the official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting ☆22 · Updated last year
- [NeurIPS 2025] What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆40 · Updated 2 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆83 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆145 · Updated 5 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆88 · Updated 10 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆56 · Updated 10 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆62 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆72 · Updated 9 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆85 · Updated last year