open-compass / MixtralKit
A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI
☆773 · Updated 2 years ago
Alternatives and similar repositories for MixtralKit
Users interested in MixtralKit are comparing it to the libraries listed below.
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,476 · Updated 2 years ago
- Official repository for LongChat and LongEval ☆534 · Updated last year
- Efficient AI Inference & Serving ☆479 · Updated 2 years ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆514 · Updated last year
- ☆977 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,005 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,113 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆987 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,668 · Updated last year
- ☆901 · Updated 2 years ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆751 · Updated last year
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,038 · Updated last year
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆346 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Updated 11 months ago
- LOMO: LOw-Memory Optimization ☆987 · Updated last year
- distributed trainer for LLMs ☆588 · Updated last year
- ☆772 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆667 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆639 · Updated last year
- Inference code for Mistral and Mixtral hacked up into original Llama implementation ☆371 · Updated 2 years ago
- Codebase for Merging Language Models (ICML 2024) ☆863 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆446 · Updated last year
- Yuan 2.0 Large Language Model ☆690 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,654 · Updated last year
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆630 · Updated 2 years ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,231 · Updated last year
- 🩹 Editing large language models within 10 seconds ⚡ ☆1,361 · Updated 2 years ago
- A generalized information-seeking agent system with Large Language Models (LLMs). ☆1,195 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,407 · Updated last year
- Code for Quiet-STaR ☆740 · Updated last year