wuhy68 / Parameter-Efficient-MoE
Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks
☆140 · Updated 6 months ago
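The repository's title describes upcycling a dense transformer into a Mixture-of-Experts model with parameter-efficient experts for instruction tuning. As a rough illustration only (not the repository's actual code), the sketch below assumes a frozen dense FFN augmented with small adapter-style experts and a top-k router; all class names, hyperparameters, and the routing scheme are hypothetical.

```python
# Minimal sketch (assumptions, not the repo's implementation): keep the dense FFN
# frozen and add lightweight bottleneck-adapter experts selected by a top-k router,
# so only the adapters and router are trained.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParameterEfficientMoEFFN(nn.Module):
    def __init__(self, dense_ffn: nn.Module, d_model: int, num_experts: int = 4,
                 adapter_dim: int = 16, top_k: int = 2):
        super().__init__()
        self.dense_ffn = dense_ffn  # dense FFN copied from the base model, kept frozen
        for p in self.dense_ffn.parameters():
            p.requires_grad = False
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is a small bottleneck adapter: down-project, nonlinearity, up-project.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, adapter_dim), nn.GELU(),
                          nn.Linear(adapter_dim, d_model))
            for _ in range(num_experts)
        ])
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = self.dense_ffn(x)                        # shared frozen dense computation
        gate = F.softmax(self.router(x), dim=-1)        # (..., num_experts)
        topk_w, topk_idx = gate.topk(self.top_k, dim=-1)
        delta = torch.zeros_like(x)
        # Dense evaluation of all experts for readability; a real MoE would dispatch sparsely.
        for slot in range(self.top_k):
            idx = topk_idx[..., slot]
            w = topk_w[..., slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = (idx == e).unsqueeze(-1)
                if mask.any():
                    delta = delta + mask * w * expert(x)
        return base + delta

# Usage: wrap an existing dense FFN from a transformer block.
if __name__ == "__main__":
    d_model = 64
    dense = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
    layer = ParameterEfficientMoEFFN(dense, d_model)
    out = layer(torch.randn(2, 10, d_model))
    print(out.shape)  # torch.Size([2, 10, 64])
```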
Alternatives and similar repositories for Parameter-Efficient-MoE:
Users interested in Parameter-Efficient-MoE are comparing it to the libraries listed below.
- ☆76 · Updated 2 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆154 · Updated 9 months ago
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆117 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆206 · Updated 10 months ago
- ☆125 · Updated last year
- FuseAI Project ☆84 · Updated last month
- ☆253 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 9 months ago
- A pipeline for LLM knowledge distillation ☆98 · Updated last month
- This is the official repository for Inheritune. ☆109 · Updated last month
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆128 · Updated 8 months ago
- Reformatted Alignment ☆115 · Updated 6 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆166 · Updated 2 weeks ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆75 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆70 · Updated 3 months ago
- ☆120 · Updated 9 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR2025] ☆102 · Updated last month
- ☆220 · Updated 9 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆85 · Updated 9 months ago
- Pytorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆54 · Updated 11 months ago
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆73 · Updated 9 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 3 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆74 · Updated 9 months ago
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆73 · Updated 5 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆108Updated 2 weeks ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆150 · Updated 3 months ago
- ☆194 · Updated 3 months ago