cooper12121 / llama3-8x8b-MoE
Copies the MLP of Llama 3 eight times to serve as eight experts, adds a randomly initialized router and a load-balancing loss, and thereby constructs an 8x8B MoE model based on Llama 3.
☆27 · Updated last year
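For context, here is a minimal PyTorch sketch of the construction described above: the experts are deep copies of one dense MLP, the router is a randomly initialized linear gate, and a Switch-Transformer-style auxiliary loss encourages balanced routing. The class name, the top-k routing, and the exact loss formulation are illustrative assumptions, not the repository's actual code.

```python
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFromDenseMLP(nn.Module):
    """Turn one dense MLP into an N-expert MoE layer with a randomly initialized router."""

    def __init__(self, dense_mlp: nn.Module, hidden_size: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Copy the dense MLP num_experts times: all experts start identical.
        self.experts = nn.ModuleList([copy.deepcopy(dense_mlp) for _ in range(num_experts)])
        # Router (gate) left at its default random initialization.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.num_experts = num_experts
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, hidden_size)
        logits = self.router(x)                                 # (B, S, E)
        probs = F.softmax(logits, dim=-1)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)   # (B, S, k)
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # Weight of expert e for each token (0 if the token did not route to it).
            weight = (topk_probs * (topk_idx == e)).sum(dim=-1, keepdim=True)  # (B, S, 1)
            out = out + weight * expert(x)

        # Switch-Transformer-style auxiliary load-balancing loss:
        # fraction of routing assignments per expert times mean router probability per expert.
        assign_frac = F.one_hot(topk_idx, self.num_experts).float().mean(dim=(0, 1, 2))  # (E,)
        mean_prob = probs.mean(dim=(0, 1))                                               # (E,)
        aux_loss = self.num_experts * torch.sum(assign_frac * mean_prob)
        return out, aux_loss
```

In such a setup, a layer like this would replace each decoder block's MLP, and `aux_loss` would typically be added to the language-modeling loss with a small coefficient.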
Alternatives and similar repositories for llama3-8x8b-MoE
Users interested in llama3-8x8b-MoE are comparing it to the repositories listed below.
- FuseAI Project ☆87 · Updated last year
- ☆53 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆126 · Updated last year
- ☆51 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆120 · Updated 8 months ago
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆150 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated 2 years ago
- ☆96 · Updated last year
- ☆36 · Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper ☆32 · Updated last year
- ☆92 · Updated 8 months ago
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… ☆53 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- ☆59 · Updated 6 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- Reformatted Alignment ☆111 · Updated last year
- [EMNLP'25] Code for paper "MT-R1-Zero: Advancing LLM-based Machine Translation via R1-Zero-like Reinforcement Learning" ☆65 · Updated 9 months ago
- ☆125 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆168 · Updated 2 years ago
- Automatic prompt optimization framework for multi-step agent tasks. ☆36 · Updated last year
- rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking ☆39 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆194 · Updated last year
- The official code repo and data hub of top_nsigma sampling strategy for LLMs. ☆26 · Updated 11 months ago
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆86 · Updated 2 years ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆222 · Updated 6 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- Code for paper titled "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 6 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 11 months ago