cooper12121 / llama3-8x8b-MoE
Copies the MLP of Llama 3 eight times as eight experts, creates a randomly initialized router, and adds a load-balancing loss to construct an 8x8B MoE model based on Llama 3 (see the sketch below).
☆27 · Updated last year
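The construction can be sketched in a few lines of PyTorch. This is a minimal, hypothetical illustration of the upcycling recipe described above, not the repository's actual code: the class name, the top-2 routing, and the Switch-Transformer-style auxiliary loss are assumptions.

```python
# Hypothetical sketch (not this repo's code): duplicate a dense Llama-3 MLP
# into 8 experts, route tokens through a randomly initialized linear gate,
# and add a Switch-style load-balancing loss so routing does not collapse.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpcycledMoE(nn.Module):
    def __init__(self, dense_mlp: nn.Module, hidden_size: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.num_experts, self.top_k = num_experts, top_k
        # Eight experts, each a deep copy of the original dense MLP weights.
        self.experts = nn.ModuleList(
            copy.deepcopy(dense_mlp) for _ in range(num_experts))
        # Router with random initialization (nn.Linear's default init).
        self.router = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, hidden_size)
        probs = F.softmax(self.router(x), dim=-1)          # (T, E)
        topk_p, topk_i = probs.topk(self.top_k, dim=-1)    # (T, k)
        topk_p = topk_p / topk_p.sum(-1, keepdim=True)     # renormalize gates

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            tok, slot = (topk_i == e).nonzero(as_tuple=True)
            if tok.numel():
                gate = topk_p[tok, slot].unsqueeze(-1)     # (n, 1)
                out.index_add_(0, tok, gate * expert(x[tok]))

        # Switch-style load-balancing loss: f_e is the fraction of tokens
        # whose top-1 expert is e (no gradient); P_e is the mean router
        # probability on e (carries gradient). Minimized by uniform routing.
        f = F.one_hot(probs.argmax(-1), self.num_experts).float().mean(0)
        P = probs.mean(0)
        aux_loss = self.num_experts * torch.sum(f * P)
        return out, aux_loss
```

In an upcycled model, a wrapper like this would replace each decoder layer's `mlp` submodule; the per-layer `aux_loss` terms are summed and added to the language-modeling loss with a small coefficient so that all eight experts keep receiving traffic.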
Alternatives and similar repositories for llama3-8x8b-MoE
Users interested in llama3-8x8b-MoE are comparing it to the repositories listed below.
- FuseAI Project ☆87 · Updated 7 months ago
- ☆48 · Updated 11 months ago
- Implementation of the paper “LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens” ☆149 · Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆113 · Updated 4 months ago
- Official implementation of “Training on the Benchmark Is Not All You Need” ☆35 · Updated 8 months ago
- ☆36 · Updated last year
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆166 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- ☆49 · Updated last year
- rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking ☆39 · Updated 8 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆71 · Updated this week
- ☆89 · Updated 4 months ago
- ☆89 · Updated 10 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- ☆46 · Updated 2 months ago
- ☆95 · Updated 9 months ago
- Reformatted Alignment ☆113 · Updated 11 months ago
- A highly capable, lightweight 2.4B LLM trained on only 1T of pre-training data, with all details released. ☆212 · Updated last month
- ☆114 · Updated last year
- Code for “Scaling Laws of RoPE-based Extrapolation” ☆73 · Updated last year
- Automatic prompt optimization framework for multi-step agent tasks. ☆33 · Updated 10 months ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed+Megatron+FlashAttention+CUDA fusion kernels+compiler] ☆41 · Updated last year
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆190 · Updated last year
- Scaling Preference Data Curation via Human-AI Synergy ☆107 · Updated 2 months ago
- ☆50 · Updated last year
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… ☆52 · Updated 10 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆269 · Updated 7 months ago
- ☆97 · Updated last month