TUDB-Labs / MixLoRA
State-of-the-art Parameter-Efficient MoE Fine-tuning Method
☆92 · Updated 2 months ago
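For context, the sketch below illustrates the general MixLoRA-style idea: a frozen feed-forward layer shared across several LoRA "experts", with a lightweight top-k router deciding which low-rank deltas each token receives and how they are weighted. This is a minimal illustration, not the repository's actual implementation; class and argument names (`LoRAExpert`, `MixLoRASparseFFN`, `num_experts`, `top_k`, `rank`) are assumptions made for the example. Only the router and the LoRA matrices are trainable; the dense base weights stay frozen.

```python
# Minimal, illustrative MixLoRA-style layer (assumed names, not the repo's API).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """Low-rank (LoRA) delta applied on top of a frozen linear layer."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)
        self.B = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.B.weight)  # delta starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.A(x)) * self.scale


class MixLoRASparseFFN(nn.Module):
    """Frozen FFN plus top-k-routed, softmax-weighted LoRA experts."""

    def __init__(self, frozen_ffn: nn.Linear, num_experts: int = 8, top_k: int = 2, rank: int = 8):
        super().__init__()
        self.ffn = frozen_ffn
        for p in self.ffn.parameters():  # base weights stay frozen
            p.requires_grad_(False)
        d_in, d_out = frozen_ffn.in_features, frozen_ffn.out_features
        self.experts = nn.ModuleList(LoRAExpert(d_in, d_out, rank) for _ in range(num_experts))
        self.router = nn.Linear(d_in, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (num_tokens, d_in)
        weights, idx = torch.topk(self.router(x), self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = self.ffn(x)              # shared frozen computation
        delta = torch.zeros_like(out)  # accumulated expert contributions
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e  # tokens whose slot-th choice is expert e
                if mask.any():
                    delta[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out + delta


# Example: wrap a toy frozen projection and run a batch of token states.
layer = MixLoRASparseFFN(nn.Linear(64, 64), num_experts=4, top_k=2)
hidden = torch.randn(10, 64)
print(layer(hidden).shape)  # torch.Size([10, 64])
```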
Related projects
Alternatives and complementary repositories for MixLoRA
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆44 · Updated this week
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆64 · Updated 5 months ago
- [SIGIR 2024] The official implementation code of MOELoRA ☆124 · Updated 3 months ago
- SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆26 · Updated last month
- [EMNLP 2024] Repo for the paper "Dual-Space Knowledge Distillation for Large Language Models" ☆37 · Updated 2 weeks ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration ☆25 · Updated 4 months ago
- [EMNLP 2024 Findings 🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context Inference" ☆75 · Updated last week
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆36 · Updated this week
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆100 · Updated 2 weeks ago
- MoCLE (first MLLM with MoE for instruction customization and generalization) (https://arxiv.org/abs/2312.12379) ☆29 · Updated 7 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆144 · Updated 5 months ago
- [ACL 2024] Code for the paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆15 · Updated 6 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆69 · Updated 8 months ago
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆94 · Updated 7 months ago
- A Survey on the Honesty of Large Language Models ☆46 · Updated last month
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆32 · Updated last week
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆55 · Updated 3 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆35 · Updated 7 months ago
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆69 · Updated last month
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆97 · Updated 7 months ago
- DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling ☆28 · Updated 4 months ago
- Code for Merging Large Language Models ☆24 · Updated 3 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" ☆91 · Updated 4 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆73 · Updated 8 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆230 · Updated 6 months ago
- Official PyTorch implementation of "DistiLLM: Towards Streamlined Distillation for Large Language Models" (ICML 2024) ☆138 · Updated 2 months ago