cooper12121 / llama3-8x8b-MoE
Copies the MLP of Llama 3 eight times as eight experts, creates a randomly initialized router, and adds a load-balancing loss, constructing an 8x8B MoE model based on Llama 3.
☆26 · Updated 9 months ago
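The description above is a standard MoE "upcycling" recipe: duplicate a pretrained dense FFN into experts, attach a freshly initialized router, and regularize routing with an auxiliary balancing loss. Below is a minimal PyTorch sketch of that recipe, not the repo's actual code; the `MoEBlock` class, the `top_k=2` routing, and the Switch-Transformer-style auxiliary loss are illustrative assumptions.

```python
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEBlock(nn.Module):
    """Illustrative sketch: a dense Llama-3 MLP upcycled into an 8-expert MoE layer."""

    def __init__(self, dense_mlp: nn.Module, hidden_size: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each expert starts as an exact copy of the pretrained MLP weights.
        self.experts = nn.ModuleList(
            [copy.deepcopy(dense_mlp) for _ in range(num_experts)])
        # The router has no pretrained counterpart, so it starts from random init.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.num_experts = num_experts
        self.top_k = top_k
        self.aux_loss = torch.tensor(0.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.size(-1))                 # (T, hidden)
        probs = F.softmax(self.router(tokens), dim=-1)     # (T, E)
        weights, chosen = probs.topk(self.top_k, dim=-1)   # (T, k)
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize top-k

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            # Rows of `chosen` that picked expert e (at most one slot per token).
            token_idx, slot = (chosen == e).nonzero(as_tuple=True)
            if token_idx.numel():
                out[token_idx] += (weights[token_idx, slot].unsqueeze(-1)
                                   * expert(tokens[token_idx]))

        # Switch-Transformer-style load-balancing loss: pushes the fraction of
        # tokens assigned to each expert toward a uniform distribution.
        frac_tokens = F.one_hot(chosen[:, 0], self.num_experts).float().mean(0)
        frac_probs = probs.mean(0)
        self.aux_loss = self.num_experts * torch.sum(frac_tokens * frac_probs)
        return out.reshape_as(x)
```

To upcycle a full model under these assumptions, each decoder layer's `mlp` would be replaced with `MoEBlock(layer.mlp, hidden_size)`, and the per-layer `aux_loss` values summed into the language-modeling loss with a small coefficient.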
Alternatives and similar repositories for llama3-8x8b-MoE:
Users interested in llama3-8x8b-MoE are comparing it to the repositories listed below.
- ☆46 · Updated 10 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 · Updated last year
- Automatic prompt optimization framework for multi-step agent tasks. ☆29 · Updated 5 months ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- ☆36 · Updated 7 months ago
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… ☆47 · Updated 5 months ago
- Unofficial implementation of AlpaGasus ☆90 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆147 · Updated 7 months ago
- ☆37 · Updated 6 months ago
- Reformatted Alignment ☆115 · Updated 7 months ago
- a-m-team's exploration in large language modeling ☆49 · Updated 3 weeks ago
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆36 · Updated last month
- An Experiment on Dynamic NTK Scaling RoPE ☆63 · Updated last year
- FuseAI Project ☆85 · Updated 3 months ago
- ☆17 · Updated 11 months ago
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 · Updated last year
- ☆98 · Updated 6 months ago
- ☆29 · Updated 8 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper ☆32 · Updated 10 months ago
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆26 · Updated 8 months ago
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆55 · Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need”. ☆31 · Updated 3 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆60 · Updated 5 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆48 · Updated 10 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆135 · Updated 9 months ago
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs. ☆24 · Updated 2 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆131 · Updated 10 months ago
- We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆61 · Updated 5 months ago