cooper12121 / llama3-8x8b-MoE
The MLP of llama3 is copied 8 times to form 8 experts, a router is created with random initialization, and a load-balancing loss is added, constructing an 8x8b MoE model based on llama3.
☆27 Updated last year
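The construction described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the repository's actual code: it assumes a simplified LLaMA-style MLP (gate/up/down projections), copies it into 8 experts, adds a randomly initialized linear router with top-2 routing, and computes a Switch-Transformer-style load-balancing auxiliary loss; all module names and dimensions here are placeholders.

```python
# Minimal sketch of the construction above (not the repository's actual code):
# a dense LLaMA-style MLP is copied 8 times to form the experts, a router is
# created with random initialization, and a Switch-Transformer-style
# load-balancing auxiliary loss is computed from the routing probabilities.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class LlamaStyleMLP(nn.Module):
    """Stand-in for the dense llama3 MLP (gate / up / down projections)."""

    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))


class MoEFromDenseMLP(nn.Module):
    """Turns one dense MLP into an MoE layer: copied experts plus a fresh router."""

    def __init__(self, dense_mlp: LlamaStyleMLP, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # 1) copy the dense MLP num_experts times as the experts
        self.experts = nn.ModuleList([copy.deepcopy(dense_mlp) for _ in range(num_experts)])
        # 2) router with random (default) initialization
        self.router = nn.Linear(dense_mlp.gate_proj.in_features, num_experts, bias=False)
        self.num_experts = num_experts
        self.top_k = top_k

    def forward(self, x):
        # x: (num_tokens, hidden_size)
        probs = self.router(x).softmax(dim=-1)                  # (tokens, E)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)   # (tokens, k)
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            hit = (topk_idx == e)                                # (tokens, k)
            token_mask = hit.any(dim=-1)
            if token_mask.any():
                weight = (topk_probs * hit).sum(dim=-1, keepdim=True)[token_mask]
                out[token_mask] = out[token_mask] + weight * expert(x[token_mask])

        # 3) load-balancing loss: fraction of assignments per expert times the
        #    mean router probability per expert, scaled by the number of experts.
        frac_assigned = F.one_hot(topk_idx, self.num_experts).float().mean(dim=(0, 1))
        mean_prob = probs.mean(dim=0)
        aux_loss = self.num_experts * (frac_assigned * mean_prob).sum()
        return out, aux_loss


# Usage: wrap a dense MLP and check shapes; aux_loss would be added to the LM loss.
dense = LlamaStyleMLP(hidden_size=128, intermediate_size=512)
moe = MoEFromDenseMLP(dense, num_experts=8, top_k=2)
hidden_states = torch.randn(16, 128)
output, aux_loss = moe(hidden_states)
print(output.shape, aux_loss.item())
```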
Alternatives and similar repositories for llama3-8x8b-MoE
Users interested in llama3-8x8b-MoE are comparing it to the libraries listed below.
- ☆48 Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆122 Updated 6 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆110 Updated 2 months ago
- FuseAI Project ☆87 Updated 5 months ago
- Reformatted Alignment ☆113 Updated 9 months ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆135 Updated last year
- ☆36 Updated 10 months ago
- ☆94 Updated 7 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper ☆33 Updated last year
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆195 Updated 2 weeks ago
- Automatic prompt optimization framework for multi-step agent tasks. ☆31 Updated 8 months ago
- ☆43 Updated 9 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆145 Updated 9 months ago
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆147 Updated 11 months ago
- rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking ☆39 Updated 6 months ago
- Official completion of “Training on the Benchmark Is Not All You Need”. ☆34 Updated 6 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆38 Updated 4 months ago
- Mixture-of-Experts (MoE) Language Model ☆189 Updated 10 months ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆166 Updated last year
- ☆106 Updated last year
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆252 Updated 7 months ago
- ☆88 Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆166 Updated last week
- ☆64 Updated 7 months ago
- ☆56 Updated 8 months ago
- ☆102 Updated 7 months ago
- This is a repo for showcasing using MCTS with LLMs to solve gsm8k problems ☆85 Updated 3 months ago
- The simplest reproduction of R1 results on a small model, illustrating the most essential point of O1-like models and DeepSeek R1: "Think is all you need." Experiments support that, for strong reasoning ability, the content of the think (reasoning) process is the core of AGI/ASI. ☆45 Updated 5 months ago