Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging. arXiv, 2024.
☆16 · Oct 28, 2024 · Updated last year
Alternatives and similar repositories for Efficient-WEMoE
Users interested in Efficient-WEMoE are comparing it to the repositories listed below.
- Official implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" · ☆10 · Dec 13, 2024 · Updated last year
- ☆19 · Jun 21, 2025 · Updated 8 months ago
- Official repo for the NeurIPS'24 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" · ☆18 · Dec 16, 2024 · Updated last year
- DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling · ☆36 · Jul 12, 2024 · Updated last year
- ☆32 · Aug 9, 2024 · Updated last year
- 🌏 UI component library for the future, based on WebComponent · ☆23 · Nov 12, 2024 · Updated last year
- ☆26 · Oct 6, 2024 · Updated last year
- Official repo for the EMNLP'24 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" · ☆29 · Oct 1, 2024 · Updated last year
- Official repository of the "Transformer Fusion with Optimal Transport" paper, published as a conference paper at ICLR 2024 · ☆31 · Apr 19, 2024 · Updated last year
- Sensitive-rs is a Rust library for finding, validating, filtering, and replacing sensitive words. It provides efficient algorithms to han… · ☆22 · Feb 4, 2026 · Updated last month
- [NeurIPS 2025] Official repo for "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" · ☆42 · Oct 3, 2025 · Updated 5 months ago
- [ICLR 2025] A Closer Look at Machine Unlearning for Large Language Models · ☆45 · Dec 4, 2024 · Updated last year
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" · ☆81 · Jun 19, 2024 · Updated last year
- ☆44 · Oct 1, 2024 · Updated last year
- ☆44 · Mar 3, 2023 · Updated 3 years ago
- Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming" · ☆57 · Sep 20, 2024 · Updated last year
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) · ☆49 · Jan 15, 2026 · Updated last month
- Code and data for the FACTOR paper · ☆53 · Nov 15, 2023 · Updated 2 years ago
- "Wudao" (悟道) data · ☆51 · Jul 5, 2021 · Updated 4 years ago
- A toolkit for building dense retrievers with deep language models · ☆64 · Sep 24, 2021 · Updated 4 years ago
- ☆33 · Jul 8, 2024 · Updated last year
- ☆60 · Aug 22, 2024 · Updated last year
- ☆55 · Apr 24, 2024 · Updated last year
- PsyChat: A Client-Centric Dialogue System for Mental Health Support · ☆62 · Sep 4, 2024 · Updated last year
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" · ☆31 · Jun 7, 2024 · Updated last year
- ☆73 · Jul 15, 2024 · Updated last year
- Official repository of "Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors" · ☆49 · Oct 1, 2025 · Updated 5 months ago
- Small Models, Big Insights: Leveraging Slim Proxy Models to Decide When and What to Retrieve for LLMs (ACL 2024) · ☆73 · May 5, 2025 · Updated 10 months ago
- Official code for the paper "Model Composition for Multimodal Large Language Models" (ACL 2024) · ☆31 · Jan 8, 2025 · Updated last year
- ☆70 · Apr 14, 2023 · Updated 2 years ago
- A Large-Scale Chinese Legal Case Retrieval Dataset · ☆85 · Dec 29, 2024 · Updated last year
- [NeurIPS 2024] Code for the paper "Parameter Competition Balancing for Model Merging" · ☆48 · Oct 11, 2024 · Updated last year
- Dataset associated with the "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" paper · ☆86 · Mar 2, 2021 · Updated 5 years ago
- ☆83 · Sep 14, 2024 · Updated last year
- ☆10 · Aug 22, 2017 · Updated 8 years ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) · ☆91 · Sep 30, 2024 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" · ☆101 · Mar 7, 2024 · Updated 2 years ago
- Shiny apps for NGS etc., based on reusable components created using Shiny modules · ☆88 · Dec 8, 2025 · Updated 3 months ago
- Conifer: Improving Complex Constrained Instruction-Following Ability of Large Language Models · ☆89 · Apr 4, 2024 · Updated last year