cooper12121 / llama3-8x8b-MoE
Copies the MLP of Llama 3 eight times as 8 experts, creates a randomly initialized router, and adds a load-balancing loss to construct an 8x8B MoE model based on Llama 3.
☆26 Updated 10 months ago
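A minimal PyTorch sketch of that construction, assuming a `LlamaMLP`-style dense module and a Switch-Transformer-style auxiliary load-balancing loss; the class name, the `top_k=2` routing, and all parameter names are illustrative, not the repository's actual code:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFromDense(nn.Module):
    """Build an 8-expert MoE layer by duplicating one dense Llama-3 MLP."""

    def __init__(self, dense_mlp: nn.Module, hidden_size: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Each expert starts as an exact copy of the dense MLP weights.
        self.experts = nn.ModuleList(
            [copy.deepcopy(dense_mlp) for _ in range(num_experts)]
        )
        # Router (gate) with random initialization, per the description.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.num_experts = num_experts
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (num_tokens, hidden_size), batch and sequence flattened
        probs = F.softmax(self.router(x), dim=-1)           # (N, E)
        topk_p, topk_i = probs.topk(self.top_k, dim=-1)     # (N, k)
        topk_p = topk_p / topk_p.sum(dim=-1, keepdim=True)  # renormalize

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            hit = (topk_i == e)          # (N, k) routing mask for expert e
            rows = hit.any(dim=-1)
            if rows.any():
                w = (topk_p * hit).sum(dim=-1, keepdim=True)[rows]
                out[rows] += w * expert(x[rows])

        # Switch-Transformer-style load-balancing loss: penalize mismatch
        # between routed token fraction and mean router probability.
        frac = F.one_hot(topk_i, self.num_experts).float().mean(dim=(0, 1))
        aux_loss = self.num_experts * (frac * probs.mean(dim=0)).sum()
        return out, aux_loss
```

Since all experts start identical, only the randomly initialized router breaks symmetry when training continues, while the auxiliary loss keeps tokens spread across experts instead of collapsing onto one.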
Alternatives and similar repositories for llama3-8x8b-MoE
Users interested in llama3-8x8b-MoE are comparing it to the libraries listed below.
- ☆36 Updated 8 months ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train… ☆56 Updated last year
- ☆46 Updated 11 months ago
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 Updated last year
- ☆29 Updated 8 months ago
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… ☆48 Updated 6 months ago
- Automatic prompt optimization framework for multi-step agent tasks. ☆30 Updated 6 months ago
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 Updated last year
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning. COLM 2024 Accepted Paper ☆32 Updated 11 months ago
- ☆39 Updated 7 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆77 Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 Updated 7 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆65 Updated 2 years ago
- The official code repo and data hub for the top_nsigma sampling strategy for LLMs (see the sketch after this list). ☆24 Updated 3 months ago
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆27 Updated 9 months ago
- ☆17 Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆151 Updated 8 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 Updated 11 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆64 Updated last month
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆62 Updated 6 months ago
- Code and Data for the paper "Evaluating Character Understanding of Large Language Models via Character Profiling from Fictional Works". ☆17 Updated 9 months ago
- ☆14 Updated last year
- ☆17 Updated last year
- ☆98 Updated 7 months ago
- Reformatted Alignment ☆114 Updated 7 months ago
- Enable Next-Sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context ☆32 Updated 9 months ago
- ☆49 Updated last year
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continual pre-training enhances … ☆32 Updated 5 months ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆135 Updated 3 months ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆164 Updated last year
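Since the top_nsigma entry above only names the method, here is a minimal sketch of the idea behind top-nσ sampling: keep only tokens whose logits fall within n standard deviations of the maximum logit, then sample from the renormalized distribution. The function name, the default n, and the interface are illustrative assumptions, not the repo's actual API.

```python
import torch

def top_nsigma_sample(logits: torch.Tensor, n: float = 1.0) -> torch.Tensor:
    """Sketch of top-nsigma sampling (assumed behavior, not the repo's API).

    Tokens whose logit falls below max(logits) - n * std(logits) are masked
    out; the survivors are renormalized and sampled from.
    logits: (batch, vocab_size)
    """
    threshold = (logits.max(dim=-1, keepdim=True).values
                 - n * logits.std(dim=-1, keepdim=True))
    filtered = logits.masked_fill(logits < threshold, float("-inf"))
    probs = torch.softmax(filtered, dim=-1)
    return torch.multinomial(probs, num_samples=1)  # (batch, 1) token ids
```

The number of surviving candidates adapts to the shape of the distribution: a peaked distribution keeps few tokens, a flat one keeps many, and the argmax token always survives, so the filter never empties the candidate set.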