dzhulgakov / llama-mistral
Inference code for Mistral and Mixtral hacked up into original Llama implementation
☆369 · Dec 9, 2023 · Updated 2 years ago
Alternatives and similar repositories for llama-mistral
Users interested in llama-mistral are comparing it to the libraries listed below.
- ☆867 · Dec 8, 2023 · Updated 2 years ago
- Inference code for mixtral-8x7b-32kseqlen ☆105 · Dec 12, 2023 · Updated 2 years ago
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆771 · Dec 15, 2023 · Updated 2 years ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,657 · Mar 8, 2024 · Updated last year
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,174 · Oct 8, 2024 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆279 · Nov 3, 2023 · Updated 2 years ago
- Tools for merging pretrained large language models ☆6,783 · Jan 26, 2026 · Updated 2 weeks ago
- ☆415 · Nov 2, 2023 · Updated 2 years ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Jan 21, 2024 · Updated 2 years ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,669 · Apr 17, 2024 · Updated last year
- ☆717 · Mar 6, 2024 · Updated last year
- Go ahead and axolotl questions ☆11,289 · Updated this week
- Run Mixtral-8x7B models in Colab or consumer desktops ☆2,325 · Apr 8, 2024 · Updated last year
- Official inference library for Mistral models ☆10,664 · Nov 21, 2025 · Updated 2 months ago
- ☆577 · Oct 29, 2024 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Mar 31, 2024 · Updated last year
- [ICLR 2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs ☆887 · Nov 26, 2025 · Updated 2 months ago
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,894 · Jan 16, 2024 · Updated 2 years ago
- OpenChat: Advancing Open-source Language Models with Imperfect Data ☆5,472 · Sep 13, 2024 · Updated last year
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,445 · Dec 9, 2025 · Updated 2 months ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,188 · Jul 11, 2024 · Updated last year
- LLM as a Chatbot Service ☆3,332 · Nov 20, 2023 · Updated 2 years ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆484 · Mar 19, 2024 · Updated last year
- LLMs built on Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,477 · Jun 7, 2025 · Updated 8 months ago
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transform… ☆1,463 · Nov 7, 2023 · Updated 2 years ago
- [ACL 2024] Progressive LLaMA with Block Expansion ☆514 · May 20, 2024 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆823 · May 6, 2023 · Updated 2 years ago
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer ☆1,630 · Sep 15, 2023 · Updated 2 years ago
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Oct 19, 2023 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens ☆8,891 · May 3, 2024 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Mar 6, 2025 · Updated 11 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models ☆4,924 · Dec 7, 2024 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,003 · Dec 6, 2024 · Updated last year
- ☆17 · Dec 5, 2023 · Updated 2 years ago
- ☆15 · Mar 12, 2024 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆145 · Mar 13, 2024 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆85 · Aug 10, 2024 · Updated last year
- ☆1,033 · Dec 17, 2024 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights ☆2,911 · Sep 30, 2023 · Updated 2 years ago