cooper12121 / llama3-8x8b-MoE
Copies the MLP of Llama 3 eight times as 8 experts, creates a randomly initialized router, and adds a load-balancing loss to construct an 8x8B MoE model based on Llama 3.
☆27 · Updated last year
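The construction described above is simple to express in PyTorch. Below is a minimal sketch, not the repository's actual code: the class and parameter names are illustrative, `LlamaMLP` is a stand-in for Llama 3's gated (SwiGLU) MLP, and the auxiliary loss follows the common Switch-Transformer-style load-balancing formulation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class LlamaMLP(nn.Module):
    """Stand-in for the Llama 3 gated MLP (SwiGLU)."""

    def __init__(self, hidden: int, intermediate: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden, intermediate, bias=False)
        self.up_proj = nn.Linear(hidden, intermediate, bias=False)
        self.down_proj = nn.Linear(intermediate, hidden, bias=False)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))


class MoEFromDense(nn.Module):
    """8 experts cloned from one dense MLP + a randomly initialized router."""

    def __init__(self, dense_mlp: LlamaMLP, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each expert starts as an exact copy of the pretrained dense MLP.
        self.experts = nn.ModuleList(
            copy.deepcopy(dense_mlp) for _ in range(num_experts)
        )
        # The router is the only newly (randomly) initialized component.
        self.router = nn.Linear(dense_mlp.gate_proj.in_features, num_experts, bias=False)
        self.num_experts = num_experts
        self.top_k = top_k

    def forward(self, x):
        # x: (tokens, hidden); flatten batch and sequence dims beforehand.
        probs = self.router(x).softmax(dim=-1)              # (tokens, E)
        topk_p, topk_i = probs.topk(self.top_k, dim=-1)     # (tokens, k)
        topk_p = topk_p / topk_p.sum(dim=-1, keepdim=True)  # renormalize gates

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_idx, slot = (topk_i == e).nonzero(as_tuple=True)
            if token_idx.numel():
                out[token_idx] += topk_p[token_idx, slot, None] * expert(x[token_idx])

        # Switch-style auxiliary load-balancing loss: pushes both the routed
        # token fractions and the mean router probabilities toward a uniform
        # split across experts.
        frac_tokens = F.one_hot(topk_i, self.num_experts).float().mean(dim=(0, 1))
        frac_probs = probs.mean(dim=0)
        aux_loss = self.num_experts * (frac_tokens * frac_probs).sum()
        return out, aux_loss


mlp = LlamaMLP(hidden=64, intermediate=128)
moe = MoEFromDense(mlp)                    # 8 cloned experts, top-2 routing
y, aux = moe(torch.randn(10, 64))          # add `aux` (scaled) to the LM loss
```

Because every expert is an exact clone, the model initially behaves like the dense network; the load-balancing term then prevents the randomly initialized router from collapsing onto a single expert during fine-tuning.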
Alternatives and similar repositories for llama3-8x8b-MoE
Users interested in llama3-8x8b-MoE are comparing it to the repositories listed below
- ☆52 · Updated last year
- ☆50 · Updated last year
- FuseAI Project ☆87 · Updated 10 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024) ☆33 · Updated last year
- ☆92 · Updated 6 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆118 · Updated 6 months ago
- ☆36 · Updated last year
- Automatic prompt optimization framework for multi-step agent tasks ☆36 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 9 months ago
- ☆53 · Updated 4 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆126 · Updated 11 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- ☆94 · Updated last year
- Reformatted Alignment ☆113 · Updated last year
- Implementation of the LongRoPE paper: Extending LLM Context Window Beyond 2 Million Tokens ☆152 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆192 · Updated last year
- WideSearch: Benchmarking Agentic Broad Info-Seeking ☆103 · Updated 2 months ago
- ☆60 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Updated 2 years ago
- ☆95 · Updated last year
- ☆122 · Updated last year
- Scaling Preference Data Curation via Human-AI Synergy ☆132 · Updated 5 months ago
- Official implementation of “Training on the Benchmark Is Not All You Need” ☆38 · Updated 11 months ago
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆86 · Updated last year
- Adapt an LLM to a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRAs into the FFN (see the sketch after this list) ☆70 · Updated last month
- ☆98 · Updated 4 months ago
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆67 · Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆179 · Updated 5 months ago
- The source code and dataset from the paper "Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmark" ☆53 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
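Several entries above take a parameter-efficient route to a MoE: instead of cloning full experts, they keep the dense FFN frozen and inject per-expert LoRA adapters. A hedged sketch of that general idea, with illustrative names and no claim to match any listed repository's API:

```python
import torch
import torch.nn as nn


class LoRAMoELinear(nn.Module):
    """Frozen shared linear layer whose 'experts' are routed low-rank deltas."""

    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8, top_k: int = 2):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weight stays frozen and shared
        in_f, out_f = base.in_features, base.out_features
        # One LoRA pair (A, B) per expert; B starts at zero so the initial
        # output equals the dense layer's output.
        self.lora_A = nn.Parameter(torch.randn(num_experts, rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, out_f, rank))
        self.router = nn.Linear(in_f, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x):                    # x: (tokens, in_f)
        gates = self.router(x).softmax(-1)   # (tokens, E)
        topv, topi = gates.topk(self.top_k, -1)
        topv = topv / topv.sum(-1, keepdim=True)
        y = self.base(x)                     # dense path, shared by all tokens
        # Add the routed low-rank expert deltas: B_e @ (A_e @ x).
        for k in range(self.top_k):
            A = self.lora_A[topi[:, k]]      # (tokens, rank, in_f)
            B = self.lora_B[topi[:, k]]      # (tokens, out_f, rank)
            delta = torch.bmm(B, torch.bmm(A, x.unsqueeze(-1))).squeeze(-1)
            y = y + topv[:, k:k + 1] * delta
        return y


layer = LoRAMoELinear(nn.Linear(64, 64, bias=False))
y = layer(torch.randn(5, 64))
```

Only the LoRA matrices and the router are trainable, so the per-expert cost is a low-rank delta rather than a full FFN copy, which is what makes this crafting approach parameter-efficient.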