SkyworkAI / Skywork-MoE
Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models
☆129 · Updated 9 months ago
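For context on the topic this repository covers, below is a minimal sketch of top-k expert routing, the core mechanism in Mixture-of-Experts language models. The layer sizes, expert count, and `top_k` value are illustrative assumptions and do not reflect Skywork-MoE's actual configuration or code.

```python
# Minimal sketch of top-k expert routing (illustrative only; not Skywork-MoE's implementation).
# Hidden sizes, number of experts, and top_k are assumed values for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router producing per-expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.gate(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the selected experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):              # dispatch each token to its k-th chosen expert
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

# Usage: route a batch of 16 token embeddings through the sparse layer.
layer = TopKMoE()
y = layer(torch.randn(16, 512))
```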
Alternatives and similar repositories for Skywork-MoE:
Users interested in Skywork-MoE are comparing it to the libraries listed below.
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 2 months ago
- FuseAI Project ☆84 · Updated last month
- Mixture-of-Experts (MoE) Language Model ☆185 · Updated 6 months ago
- Reformatted Alignment ☆115 · Updated 5 months ago
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆229 · Updated last month
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆128 · Updated 8 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆246 · Updated 3 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆140 · Updated 6 months ago
- ☆29 · Updated 6 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆173 · Updated last month
- ☆60 · Updated 3 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆313 · Updated 5 months ago
- ☆45 · Updated 9 months ago
- Repository of the LV-Eval Benchmark ☆59 · Updated 6 months ago
- A prototype repo for hybrid training with pipeline parallelism and distributed data parallelism, with comments on core code snippets. Feel free to… ☆55 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆70 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆54 · Updated 4 months ago
- A highly capable, lightweight 2.4B LLM using only 1T pre-training tokens, with all details ☆164 · Updated this week
- ☆312 · Updated 6 months ago
- ☆102 · Updated 3 months ago
- A personal reimplementation of Google's Infini-transformer using a small 2B model. The project includes both model and train… ☆56 · Updated 11 months ago
- An MoE implementation for PyTorch: [ATC'23] SmartMoE ☆61 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆393 · Updated 5 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 9 months ago
- ☆92 · Updated 3 months ago