tanganke / pareto_set_learning
Code for paper "Towards Efficient Pareto Set Approximation via Weight-Ensembling Mixture of Experts"
☆11 · Updated last year
Alternatives and similar repositories for pareto_set_learning
Users interested in pareto_set_learning are comparing it to the repositories listed below.
- Official implementation of the paper "Chain-of-Experts: When LLMs Meet Complex Operation Research Problems" ☆111 · Updated 10 months ago
- FusionBench: A Comprehensive Benchmark/Toolkit of Deep Model Fusion ☆194 · Updated this week
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆120 · Updated 5 months ago
- The LLMOPT project offers a comprehensive set of resources, including the model, dataset, training framework, and inference code, enablin… ☆112 · Updated last month
- A list of awesome papers and resources at the intersection of Large Language Models and Evolutionary Computation. ☆123 · Updated 9 months ago
- Accepted LLM Papers in NeurIPS 2024 ☆37 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆97 · Updated last year
- [NeurIPS 2024] GITA: Graph to Image-Text Integration for Vision-Language Graph Reasoning ☆53 · Updated last month
- ☆95 · Updated 9 months ago
- Must-read Papers on Large Language Model (LLM) as Optimizers and Automatic Optimization for Prompting LLMs. ☆253 · Updated last year
- Evolutionary-Algorithm and Large-Language-Model ☆22 · Updated last year
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆44 · Updated 5 months ago
- Code for paper "Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion" ☆14 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆78 · Updated 6 months ago
- Code for paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆30 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆36 · Updated last year
- Large Language Model for MOEA ☆60 · Updated last year
- [ICML 2024] Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models ☆33 · Updated last year
- A curated list of Model Merging methods. ☆94 · Updated 3 weeks ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated 5 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆94 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆200 · Updated last year
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium" ☆31 · Updated last month
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆25 · Updated 10 months ago
- A block pruning framework for LLMs. ☆27 · Updated 7 months ago
- ☆55 · Updated 2 years ago
- Code for paper "Sparse Structure Search for Delta Tuning" ☆11 · Updated 3 years ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆47 · Updated last year
- [IJCAI 2023] Black-box Prompt Tuning for Vision-Language Model as a Service ☆18 · Updated 2 years ago
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆22 · Updated 10 months ago