From-scratch implementation of a sparse Mixture-of-Experts (MoE) language model, inspired by Andrej Karpathy's makemore :)
☆796, updated Oct 30, 2024
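For context on what the repositories below relate to, here is a minimal sketch of the top-k routing idea at the heart of a sparse MoE block. This is an illustrative PyTorch example, not makeMoE's actual code; the class and parameter names (`SparseMoE`, `num_experts`, `top_k`) are assumptions.

```python
# Hypothetical sketch of top-k expert routing; names are illustrative,
# not makeMoE's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Sparse MoE layer: route each token to its top-k experts."""

    def __init__(self, n_embd: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # The router produces one logit per expert for every token.
        self.router = nn.Linear(n_embd, num_experts)
        # Each expert is an independent position-wise feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(n_embd, 4 * n_embd),
                nn.ReLU(),
                nn.Linear(4 * n_embd, n_embd),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_embd)
        logits = self.router(x)                                # (B, T, num_experts)
        topk_vals, topk_idx = logits.topk(self.top_k, dim=-1)  # keep only the k best experts
        gates = F.softmax(topk_vals, dim=-1)                   # renormalize over the chosen k
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            picked = topk_idx == i            # (B, T, k): where expert i was selected
            token_mask = picked.any(dim=-1)   # (B, T): tokens routed to expert i
            if token_mask.any():
                # Gate weight for each routed token, then weighted expert output.
                w = (gates * picked).sum(dim=-1)[token_mask]
                out[token_mask] += w.unsqueeze(-1) * expert(x[token_mask])
        return out
```

A forward pass on random input, e.g. `SparseMoE(n_embd=64)(torch.randn(2, 16, 64))`, returns a tensor of the same shape; the point of the sparse design is that each token pays the FLOPs of only `top_k` experts rather than all of them.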
Alternatives and similar repositories for makeMoE
Users interested in makeMoE are comparing it to the repositories listed below.
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models (☆1,667, updated Mar 8, 2024)
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models (☆2,307, updated Jul 15, 2025)
- An autoregressive character-level language model for making more things (☆3,760, updated Jun 4, 2024)
- Tools for merging pretrained large language models (☆6,895, updated Mar 15, 2026)
- llama3 implementation one matrix multiplication at a time (☆15,255, updated May 23, 2024)
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens (☆8,922, updated May 3, 2024)
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization (☆10,385, updated Jul 1, 2024)
- Understanding R1-Zero-Like Training: A Critical Perspective (☆1,232, updated Aug 27, 2025)
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) (☆2,697, updated Aug 14, 2024)
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) (☆1,000, updated Dec 6, 2024)
- Official inference library for Mistral models (☆10,730, updated Feb 26, 2026)
- Reaching LLaMA2 Performance with 0.1M Dollars (☆989, updated Jul 23, 2024)
- Modeling, training, eval, and inference code for OLMo (☆6,424, updated Nov 24, 2025)
- Fast and memory-efficient exact attention (☆22,938, updated this week)
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (☆1,903, updated Jan 21, 2024)
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch (☆2,931, updated Mar 8, 2024)
- A fast MoE impl for PyTorch (☆1,846, updated Feb 10, 2025)
- Mixture-of-Experts (MoE) Language Model (☆196, updated Sep 9, 2024)
- LLM training in simple, raw C/CUDA (☆29,216, updated Jun 26, 2025)
- From scratch implementation of a vision language model in pure PyTorch (☆257, updated May 6, 2024)
- Minimal reproduction of DeepSeek R1-Zero (☆12,963, updated Feb 27, 2026)
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) (☆69,106, updated this week)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,209, updated Jul 11, 2024)
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python (☆6,187, updated Aug 22, 2025)
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale (☆13,256, updated this week)
- Fine-tune LLM agents with online reinforcement learning (☆1,249, updated Mar 19, 2024)
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… (☆6,083, updated Jul 1, 2025)
- Train transformer language models with reinforcement learning (☆17,781, updated this week)
- Robust recipes to align language models with human and AI preferences (☆5,535, updated Sep 8, 2025)
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models (☆1,904, updated Jan 16, 2024)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆2,236, updated Aug 14, 2025)
- Large World Model -- Modeling Text and Video with Millions Context (☆7,402, updated Oct 19, 2024)
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond (☆24,603, updated Aug 12, 2024)
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆55,432, updated Nov 12, 2025)
- Minimalistic large language model 3D-parallelism training (☆2,617, updated Feb 19, 2026)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆10,858, updated Jun 10, 2024)