TUDB-Labs / MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
☆133 · Mar 11, 2025 · Updated 11 months ago
Alternatives and similar repositories for MoE-PEFT
Users interested in MoE-PEFT are comparing it to the libraries listed below.
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆203 · Aug 22, 2024 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ☆188 · Jul 22, 2024 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆400 · Apr 29, 2024 · Updated last year
- ☆176 · Jul 22, 2024 · Updated last year
- X-LoRA: Mixture of LoRA Experts ☆263 · Aug 4, 2024 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Sep 4, 2025 · Updated 5 months ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆38 · Oct 11, 2024 · Updated last year
- Where is the "main theme" in an orchestral score? ☆12 · Oct 25, 2025 · Updated 3 months ago
- 😎 Awesome papers on token redundancy reduction ☆11 · Mar 12, 2025 · Updated 11 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆114 · Updated this week
- Grounding Language Models for Compositional and Spatial Reasoning ☆18 · Oct 26, 2022 · Updated 3 years ago
- Repo of the paper "Towards Building an End-to-End Multilingual Automatic Lyrics Transcription Model" ☆14 · Jun 28, 2024 · Updated last year
- ☆10 · Apr 16, 2024 · Updated last year
- Official implementation of ICLR 2025 'LORO: Parameter and Memory Efficient Pretraining via Low-rank Riemannian Optimization' ☆16 · Apr 24, 2025 · Updated 9 months ago
- ☆64 · Dec 2, 2024 · Updated last year
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆143 · Apr 8, 2025 · Updated 10 months ago
- ☆33 · Dec 17, 2025 · Updated last month
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆18 · Apr 16, 2024 · Updated last year
- [ICLR 2024] Beyond Accuracy: Evaluating Self-Consistency of Code Large Language Models with IdentityChain ☆10 · Nov 24, 2025 · Updated 2 months ago
- Adapt an LLM model to a Mixture-of-Experts model using Parameter Efficient finetuning (LoRA), injecting the LoRAs in the FFN. ☆84 · Oct 21, 2025 · Updated 3 months ago
- Source code for Noise-Contrastive Estimation for Multivariate Point Processes (NeurIPS 2020). ☆15 · Nov 3, 2020 · Updated 5 years ago
- Discrete Diffusion VLA: Bringing Discrete Diffusion to Action Decoding in Vision-Language-Action Policies ☆55 · Dec 3, 2025 · Updated 2 months ago
- A library for easily merging multiple LLM experts, and efficiently train the merged LLM. ☆507 · Aug 26, 2024 · Updated last year
- ☆31 · Mar 13, 2024 · Updated last year
- ☆18 · Aug 11, 2022 · Updated 3 years ago
- [NeurIPS 2024] Search for Efficient LLMs ☆16 · Jan 16, 2025 · Updated last year
- ☆15 · Aug 22, 2025 · Updated 5 months ago
- This repository contains the implementation of the paper "MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models". ☆24 · May 28, 2025 · Updated 8 months ago
- ☆152 · Sep 9, 2024 · Updated last year
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆67 · Mar 27, 2025 · Updated 10 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Apr 21, 2025 · Updated 9 months ago
- ☆25 · Jun 19, 2025 · Updated 7 months ago
- Code for http://proceedings.mlr.press/v80/dvurechensky18a.html ☆17 · Jul 31, 2018 · Updated 7 years ago
- The demo page for ALMTokenizer ☆58 · Apr 14, 2025 · Updated 10 months ago
- EfficientDet_PyTorch Object Detection ☆22 · Apr 23, 2021 · Updated 4 years ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆45 · Jul 1, 2025 · Updated 7 months ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆409 · Jun 30, 2025 · Updated 7 months ago
- Collection of awesome Continual Test-Time Adaptation methods ☆24 · Jun 4, 2024 · Updated last year
- ☆19 · Apr 10, 2017 · Updated 8 years ago