fxmeng / mixtral_spliter
Converting Mixtral-8x7B to Mixtral-[1~7]x7B
☆20 · Updated 10 months ago
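For context on what this repo does: converting Mixtral-8x7B into a Mixtral-k×7B variant amounts to keeping a subset of the eight experts in every sparse-MoE layer and slicing the router to match. Below is a minimal sketch of that idea; it assumes the Hugging Face Mixtral parameter naming (`model.layers.{i}.block_sparse_moe.experts.{j}.*` and `...block_sparse_moe.gate.weight`) and is an illustration, not this repository's actual script.

```python
# Minimal sketch of expert slicing for a Mixtral-style checkpoint.
# Assumes Hugging Face Mixtral key names; NOT the repo's actual code.
import re
import torch

def slice_experts(state_dict, keep=(0, 1, 2, 3)):
    """Keep only the experts listed in `keep` (renumbered 0..len(keep)-1)
    and slice each router's weight matrix to the matching rows."""
    remap = {old: new for new, old in enumerate(keep)}
    expert_re = re.compile(r"(.*\.block_sparse_moe\.experts\.)(\d+)(\..*)")
    out = {}
    for name, tensor in state_dict.items():
        m = expert_re.match(name)
        if m:
            idx = int(m.group(2))
            if idx not in remap:
                continue  # drop weights of experts we are not keeping
            name = f"{m.group(1)}{remap[idx]}{m.group(3)}"
        elif name.endswith("block_sparse_moe.gate.weight"):
            # Router weight is [num_experts, hidden]; keep rows of kept experts.
            tensor = tensor[list(keep), :]
        out[name] = tensor
    return out

# Smoke test on fake weights: keep 2 of 4 experts in one layer.
sd = {f"model.layers.0.block_sparse_moe.experts.{j}.w1.weight": torch.randn(8, 4)
      for j in range(4)}
sd["model.layers.0.block_sparse_moe.gate.weight"] = torch.randn(4, 4)
sliced = slice_experts(sd, keep=(0, 2))
assert sliced["model.layers.0.block_sparse_moe.gate.weight"].shape == (2, 4)
assert "model.layers.0.block_sparse_moe.experts.1.w1.weight" in sliced  # old expert 2
```

Reloading the sliced weights would also require updating the model config, e.g. setting `num_local_experts` to `len(keep)` and capping `num_experts_per_tok` accordingly (again, assuming the Hugging Face config fields).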
Alternatives and similar repositories for mixtral_spliter:
Users interested in mixtral_spliter are comparing it to the libraries listed below.
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆75 · Updated 10 months ago
- ☆93 · Updated 3 months ago
- [NeurIPS-2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆75 · Updated 3 months ago
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆70 · Updated 7 months ago
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆53 · Updated last year
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆119 · Updated this week
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆31 · Updated last year
- 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training ☆97 · Updated 3 months ago
- ☆69 · Updated this week
- Touchstone: Evaluating Vision-Language Models by Language Models ☆80 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 6 months ago
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆58 · Updated 2 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆99 · Updated 7 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆154 · Updated 7 months ago
- ☆17 · Updated last year
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- ☆42 · Updated last month
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 4 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆155 · Updated 6 months ago
- ☆66 · Updated 10 months ago
- The code and data for the paper JiuZhang3.0 ☆40 · Updated 7 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆34 · Updated 9 months ago
- A Closer Look into Mixture-of-Experts in Large Language Models ☆41 · Updated 5 months ago
- ☆64 · Updated 9 months ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆135 · Updated 3 months ago
- Code for paper "Patch-Level Training for Large Language Models" ☆75 · Updated 2 months ago
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆23 · Updated 5 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated 10 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 3 weeks ago