A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI
☆771 · Dec 15, 2023 · Updated 2 years ago
Alternatives and similar repositories for MixtralKit
Users interested in MixtralKit are comparing it to the libraries listed below.
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) · ☆1,002 · Dec 6, 2024 · Updated last year
- Inference code for Mistral and Mixtral hacked up into original Llama implementation · ☆368 · Dec 9, 2023 · Updated 2 years ago
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) · ☆2,693 · Aug 14, 2024 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads · ☆2,710 · Jun 25, 2024 · Updated last year
- Official inference library for Mistral models · ☆10,690 · Updated this week
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models · ☆1,663 · Mar 8, 2024 · Updated last year
- Tools for merging pretrained large language models · ☆6,826 · Updated this week
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, …) · ☆6,705 · Updated this week
- A Next-Generation Training Engine Built for Ultra-Large MoE Models · ☆5,087 · Updated this week
- Run Mixtral-8x7B models in Colab or consumer desktops · ☆2,328 · Apr 8, 2024 · Updated last year
- AgentTuning: Enabling Generalized Agent Abilities for LLMs · ☆1,479 · Oct 31, 2023 · Updated 2 years ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models · ☆1,895 · Jan 16, 2024 · Updated 2 years ago
- Robust recipes to align language models with human and AI preferences · ☆5,510 · Sep 8, 2025 · Updated 5 months ago
- A series of large language models trained from scratch by developers @01-ai · ☆7,843 · Nov 27, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks · ☆7,187 · Jul 11, 2024 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ☆1,315 · Mar 6, 2025 · Updated 11 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs · ☆7,645 · Updated this week
- An Open-source Toolkit for LLM Development · ☆2,805 · Jan 13, 2025 · Updated last year
- [ACL 2024] Progressive LLaMA with Block Expansion · ☆514 · May 20, 2024 · Updated last year
- High-speed Large Language Model Serving for Local Deployment · ☆8,729 · Jan 24, 2026 · Updated last month
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens · ☆8,896 · May 3, 2024 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath · ☆9,477 · Jun 7, 2025 · Updated 8 months ago
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3) · ☆7,160 · Oct 30, 2025 · Updated 4 months ago
- OpenChat: Advancing Open-source Language Models with Imperfect Data · ☆5,475 · Sep 13, 2024 · Updated last year
- Chinese-Mixtral-8x7B: a Chinese-language adaptation of Mixtral-8x7B · ☆654 · Aug 17, 2024 · Updated last year
- SGLang is a high-performance serving framework for large language models and multimodal models · ☆23,905 · Updated this week
- Best practice for training LLaMA models in Megatron-LM · ☆663 · Jan 2, 2024 · Updated 2 years ago
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,673 · Apr 17, 2024 · Updated last year
- Train transformer language models with reinforcement learning · ☆17,460 · Feb 26, 2026 · Updated last week
- Inference code for mixtral-8x7b-32kseqlen · ☆105 · Dec 12, 2023 · Updated 2 years ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs · ☆43 · Jun 28, 2024 · Updated last year
- Fast and memory-efficient exact attention · ☆22,460 · Updated this week
- Efficient AI Inference & Serving · ☆479 · Jan 8, 2024 · Updated 2 years ago
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena · ☆39,426 · Jun 2, 2025 · Updated 9 months ago
- Firefly: an LLM training toolkit supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, … · ☆6,638 · Oct 24, 2024 · Updated last year
- 🩹 Editing large language models within 10 seconds ⚡ · ☆1,360 · Aug 13, 2023 · Updated 2 years ago
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment · ☆1,036 · May 31, 2024 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) · ☆145 · Sep 20, 2024 · Updated last year
- Modeling, training, eval, and inference code for OLMo · ☆6,326 · Nov 24, 2025 · Updated 3 months ago