cooper12121 / llama3-8x8b-MoE
Copy the MLP of llama3 eight times to serve as 8 experts, create a randomly initialized router, and add a load-balancing loss to construct an 8x8b MoE model based on llama3 (a rough sketch of this construction is shown below).
☆26 · Updated 8 months ago
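As a rough illustration of that upcycling recipe (a sketch only, not the repository's actual code), the layer below copies a dense llama3-style MLP into 8 experts, adds a randomly initialized top-k router, and returns a Switch-Transformer-style load-balancing auxiliary loss alongside the output. All class, argument, and variable names here are illustrative assumptions.

```python
# Sketch only: upcycle a dense MLP into an 8-expert MoE layer.
# Assumptions (not from the repo): the dense MLP maps hidden -> hidden,
# and routing is token-level top-k with softmax gating.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFromDenseMLP(nn.Module):
    def __init__(self, dense_mlp: nn.Module, hidden_size: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each expert starts as an exact copy of the pretrained dense MLP.
        self.experts = nn.ModuleList(
            [copy.deepcopy(dense_mlp) for _ in range(num_experts)]
        )
        # The router has no pretrained counterpart, so it is randomly initialized.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.num_experts = num_experts
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (batch, seq_len, hidden_size); route each token independently.
        b, s, h = x.shape
        tokens = x.reshape(-1, h)
        probs = F.softmax(self.router(tokens), dim=-1)    # (n_tokens, n_experts)
        top_p, top_idx = probs.topk(self.top_k, dim=-1)
        top_p = top_p / top_p.sum(dim=-1, keepdim=True)   # renormalize gate weights

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            # Tokens (and which of their top-k slots) assigned to expert e.
            token_ids, slot = (top_idx == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            gate = top_p[token_ids, slot].unsqueeze(-1)
            out.index_add_(0, token_ids, gate * expert(tokens[token_ids]))

        # Switch-Transformer-style load-balancing loss: fraction of tokens whose
        # top-1 choice is each expert, times the mean router probability for it.
        frac_tokens = F.one_hot(top_idx[:, 0], self.num_experts).float().mean(dim=0)
        mean_probs = probs.mean(dim=0)
        aux_loss = self.num_experts * torch.sum(frac_tokens * mean_probs)
        return out.reshape(b, s, h), aux_loss
```

In a full 8x8b build, each decoder layer's MLP would be swapped for such a module (e.g. something like `layer.mlp = MoEFromDenseMLP(layer.mlp, hidden_size=4096)`, names hypothetical), and the per-layer `aux_loss` terms would be summed, scaled by a small coefficient, and added to the language-modeling loss.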
Alternatives and similar repositories for llama3-8x8b-MoE:
Users who are interested in llama3-8x8b-MoE are comparing it to the libraries listed below.
- ☆44 · Updated 9 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 · Updated 10 months ago
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper ☆29 · Updated 9 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 2 months ago
- Automatic prompt optimization framework for multi-step agent tasks. ☆28 · Updated 4 months ago
- ☆36 · Updated 6 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆144 · Updated 6 months ago
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆62 · Updated 2 months ago
- ☆48 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆70 · Updated last year
- Reformatted Alignment ☆114 · Updated 5 months ago
- ☆96 · Updated 5 months ago
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆24 · Updated 7 months ago
- Hammer: Robust Function-Calling for On-Device Language Models via Function Masking ☆63 · Updated 3 weeks ago
- FuseAI Project ☆83 · Updated last month
- ☆92 · Updated 3 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆53 · Updated 4 months ago
- Unofficial implementation of AlpaGasus ☆90 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- Enable Next-sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context ☆27 · Updated 6 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 2 months ago
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆60 · Updated 4 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆128 · Updated 9 months ago
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs. ☆22 · Updated last month
- Code for the paper "Patch-Level Training for Large Language Models" ☆81 · Updated 3 months ago
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmark ☆45 · Updated 4 months ago