AviSoori1x / makeMoE
A from-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :)
☆775 · Updated last year
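The repository's core idea is a sparse mixture-of-experts block: a router sends each token to only a few expert feed-forward networks instead of one dense MLP. As a rough illustration of that idea (a sketch, not code from makeMoE; the class name, dimensions, and the dense per-expert loop are assumptions for clarity), here is a minimal top-k routed MoE layer in PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKSparseMoE(nn.Module):
    """Illustrative top-k routed mixture-of-experts feed-forward layer."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # The router produces one gating score per expert for every token.
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model)
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); each token is processed by its top-k experts only.
        logits = self.router(x)                      # (B, T, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)   # keep the k best experts per token
        weights = F.softmax(weights, dim=-1)         # renormalise over the kept experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            chosen = (idx == e)                      # (B, T, k): slots where expert e won
            token_mask = chosen.any(dim=-1)          # (B, T): tokens routed to expert e
            if token_mask.any():
                gate = (weights * chosen).sum(dim=-1)[token_mask]  # gate per routed token
                out[token_mask] += gate.unsqueeze(-1) * expert(x[token_mask])
        return out

# Usage: 4 sequences of 16 tokens with model width 64.
moe = TopKSparseMoE(d_model=64, d_hidden=256)
y = moe(torch.randn(4, 16, 64))
print(y.shape)  # torch.Size([4, 16, 64])
```

Production MoE layers typically gather tokens per expert for batched compute and add noisy gating plus a load-balancing auxiliary loss; the makeMoE repository itself walks through these refinements step by step.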
Alternatives and similar repositories for makeMoE
Users interested in makeMoE are comparing it to the repositories listed below.
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,405 · Updated last year
- Reference implementation of Megalodon 7B model ☆526 · Updated 6 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,641 · Updated last year
- ☆969 · Updated 10 months ago
- [COLM 2025] LIMO: Less is More for Reasoning ☆1,054 · Updated 4 months ago
- Large Reasoning Models ☆807 · Updated last year
- Understanding R1-Zero-Like Training: A Critical Perspective ☆1,168 · Updated 3 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- DataComp for Language Models ☆1,398 · Updated 3 months ago
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,226 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,119 · Updated 6 months ago
- ☆968 · Updated 10 months ago
- Comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model tra… ☆179 · Updated last year
- ☆1,035 · Updated 11 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆998 · Updated last year
- 🍃 MINT-1T: A one trillion token multimodal interleaved dataset. ☆827 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆795 · Updated 8 months ago
- Unleashing the Power of Reinforcement Learning for Math and Code Reasoners ☆733 · Updated 6 months ago
- Codebase for Merging Language Models (ICML 2024) ☆860 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆894 · Updated 2 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,307 · Updated 9 months ago
- FuseAI Project ☆583 · Updated 10 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆662 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,644 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆933 · Updated 9 months ago
- Code for Quiet-STaR ☆743 · Updated last year
- ☆567 · Updated 2 years ago
- Minimal hackable GRPO implementation ☆303 · Updated 10 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆407 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,164 · Updated 2 months ago