Inference code for Mistral and Mixtral hacked into the original Llama implementation
☆368 · Dec 9, 2023 · Updated 2 years ago
Alternatives and similar repositories for llama-mistral
Users interested in llama-mistral are comparing it to the libraries listed below.
- ☆868 · Dec 8, 2023 · Updated 2 years ago
- Inference code for mixtral-8x7b-32kseqlen · ☆104 · Dec 12, 2023 · Updated 2 years ago
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI · ☆771 · Dec 15, 2023 · Updated 2 years ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models · ☆1,664 · Mar 8, 2024 · Updated 2 years ago
- ⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel Pl… · ☆2,175 · Oct 8, 2024 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" · ☆281 · Nov 3, 2023 · Updated 2 years ago
- Tools for merging pretrained large language models · ☆6,842 · Feb 28, 2026 · Updated last week
- ☆415 · Nov 2, 2023 · Updated 2 years ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters · ☆1,900 · Jan 21, 2024 · Updated 2 years ago
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,676 · Apr 17, 2024 · Updated last year
- ☆718 · Mar 6, 2024 · Updated 2 years ago
- Go ahead and axolotl questions · ☆11,395 · Updated this week
- Run Mixtral-8x7B models in Colab or on consumer desktops · ☆2,327 · Apr 8, 2024 · Updated last year
- Official inference library for Mistral models · ☆10,700 · Feb 26, 2026 · Updated last week
- ☆579 · Oct 29, 2024 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets · ☆13 · Mar 31, 2024 · Updated last year
- [ICLR 2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs · ☆890 · Nov 26, 2025 · Updated 3 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models · ☆1,895 · Jan 16, 2024 · Updated 2 years ago
- OpenChat: Advancing Open-source Language Models with Imperfect Data · ☆5,476 · Sep 13, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks · ☆7,196 · Jul 11, 2024 · Updated last year
- LLM as a Chatbot Service · ☆3,330 · Nov 20, 2023 · Updated 2 years ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs · ☆4,451 · Updated this week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ☆487 · Mar 19, 2024 · Updated last year
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath · ☆9,478 · Jun 7, 2025 · Updated 9 months ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… · ☆1,464 · Nov 7, 2023 · Updated 2 years ago
- [ACL 2024] Progressive LLaMA with Block Expansion · ☆514 · May 20, 2024 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions · ☆822 · May 6, 2023 · Updated 2 years ago
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer · ☆1,629 · Sep 15, 2023 · Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data · ☆509 · Oct 19, 2023 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens · ☆8,902 · May 3, 2024 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ☆1,317 · Mar 6, 2025 · Updated last year
- The RedPajama-Data repository contains code for preparing large datasets for training large language models · ☆4,923 · Dec 7, 2024 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) · ☆1,002 · Dec 6, 2024 · Updated last year
- ☆17 · Dec 5, 2023 · Updated 2 years ago
- ☆15 · Mar 12, 2024 · Updated last year
- Official PyTorch implementation of QA-LoRA · ☆145 · Mar 13, 2024 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location · ☆86 · Aug 10, 2024 · Updated last year
- ☆1,033 · Dec 17, 2024 · Updated last year
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights · ☆2,913 · Sep 30, 2023 · Updated 2 years ago