From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :)
☆802 · Oct 30, 2024 · Updated last year
Alternatives and similar repositories for makeMoE
Users who are interested in makeMoE are comparing it to the libraries listed below.
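For context on what these repositories have in common: the sparse mixture-of-experts idea behind makeMoE routes each token through only the top-k of n expert networks, selected by a learned gate. The sketch below is a minimal NumPy illustration of that routing step, not code from any of the repositories listed here; the gate weights, toy experts, and dimensions are all invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_route(token, W_gate, experts, k=2):
    """Route one token vector through the top-k experts.

    token:   (d,) input vector
    W_gate:  (d, n_experts) learned gating weights (here random, for illustration)
    experts: list of n_experts callables mapping (d,) -> (d,)
    """
    logits = token @ W_gate                  # (n_experts,) gate scores
    topk = np.argsort(logits)[-k:]           # indices of the k highest-scoring experts
    weights = softmax(logits[topk])          # renormalize over the selected experts only
    # Sparse combination: only k of the n expert forward passes are computed
    return sum(w * experts[i](token) for w, i in zip(weights, topk))

# Toy usage: 4 "experts" that just scale their input by different factors
rng = np.random.default_rng(0)
d, n_experts = 8, 4
W_gate = rng.normal(size=(d, n_experts))
experts = [lambda x, s=s: s * x for s in (1.0, 2.0, 3.0, 4.0)]
out = topk_route(rng.normal(size=d), W_gate, experts, k=2)
```

In a real MoE transformer block (as in makeMoE or the MoE repositories below), the experts are feed-forward networks, routing is done per token in a batch, and an auxiliary load-balancing loss keeps the gate from collapsing onto a few experts.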
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,678 · Mar 8, 2024 · Updated 2 years ago
- 【TMM 2025🔥】Mixture-of-Experts for Large Vision-Language Models ☆2,314 · Jul 15, 2025 · Updated 9 months ago
- ☆252 · Mar 20, 2024 · Updated 2 years ago
- An autoregressive character-level language model for making more things ☆3,887 · Jun 4, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆7,052 · Mar 15, 2026 · Updated last month
- llama3 implementation one matrix multiplication at a time ☆15,246 · May 23, 2024 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,955 · May 3, 2024 · Updated 2 years ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,462 · Jul 1, 2024 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,692 · Aug 14, 2024 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,001 · Dec 6, 2024 · Updated last year
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,251 · Aug 27, 2025 · Updated 8 months ago
- ☆4,109 · Apr 15, 2026 · Updated 3 weeks ago
- Official inference library for Mistral models ☆10,786 · Apr 20, 2026 · Updated 2 weeks ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆987 · Jul 23, 2024 · Updated last year
- Modeling, training, eval, and inference code for OLMo ☆6,495 · Nov 24, 2025 · Updated 5 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,909 · Jan 21, 2024 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆23,628 · Updated this week
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,946 · Mar 8, 2024 · Updated 2 years ago
- A fast MoE implementation for PyTorch ☆1,846 · Feb 10, 2025 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆196 · Sep 9, 2024 · Updated last year
- From scratch implementation of a vision language model in pure PyTorch ☆258 · May 6, 2024 · Updated 2 years ago
- LLM training in simple, raw C/CUDA ☆29,780 · Jun 26, 2025 · Updated 10 months ago
- Minimal reproduction of DeepSeek R1-Zero ☆13,091 · Feb 27, 2026 · Updated 2 months ago
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆70,969 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,225 · Jul 11, 2024 · Updated last year
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,337 · May 1, 2026 · Updated last week
- Fine-tune LLM agents with online reinforcement learning ☆1,250 · Mar 19, 2024 · Updated 2 years ago
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,206 · Aug 22, 2025 · Updated 8 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,084 · Jul 1, 2025 · Updated 10 months ago
- Robust recipes to align language models with human and AI preferences ☆5,593 · Apr 8, 2026 · Updated last month
- Train transformer language models with reinforcement learning. ☆18,282 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,922 · Jan 16, 2024 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,246 · Aug 14, 2025 · Updated 8 months ago
- Large World Model -- Modeling Text and Video with Millions Context ☆7,408 · Oct 19, 2024 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,753 · Aug 12, 2024 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,678 · Apr 7, 2026 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆57,469 · Nov 12, 2025 · Updated 5 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,901 · Jun 10, 2024 · Updated last year
- A collection of AWESOME things about mixture-of-experts ☆1,275 · Dec 8, 2024 · Updated last year