AviSoori1x / makeMoE
From-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :)
☆672 · Updated 4 months ago
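To ground the one-line description: a sparse mixture-of-experts layer replaces a transformer block's single feed-forward network with several expert MLPs and a learned router that sends each token to only its top-k experts. The sketch below shows that core idea in PyTorch; it is illustrative only, not makeMoE's actual code, and the class name, expert sizes, and defaults are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Illustrative sparse MoE layer: top-k routing over expert MLPs."""
    def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # learned router
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.ReLU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        scores = self.gate(x)                                    # (B, T, n_experts)
        topk_scores, topk_idx = scores.topk(self.top_k, dim=-1)  # (B, T, k)
        weights = F.softmax(topk_scores, dim=-1)                 # renormalize over the chosen k
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            # Per-token gate weight for expert i (zero where the token wasn't routed here).
            w = (weights * (topk_idx == i)).sum(dim=-1, keepdim=True)  # (B, T, 1)
            if w.any():
                # For clarity every expert sees all tokens; efficient implementations
                # gather only the tokens actually routed to each expert.
                out = out + w * expert(x)
        return out

moe = SparseMoE(d_model=64)
y = moe(torch.randn(2, 16, 64))  # -> (2, 16, 64)
```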
Alternatives and similar repositories for makeMoE:
Users interested in makeMoE are comparing it to the libraries listed below.
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆705 · Updated 5 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,473 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,210 · Updated last week
- Serving multiple LoRA-finetuned LLMs as one ☆1,035 · Updated 10 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); see the DPO sketch after this list. ☆813 · Updated this week
- ☆500 · Updated 3 months ago
- Scalable toolkit for efficient model alignment ☆740 · Updated this week
- [ICML 2024] CLLMs: Consistency Large Language Models ☆385 · Updated 3 months ago
- Reference implementation of Megalodon 7B model ☆515 · Updated 10 months ago
- Large Reasoning Models ☆799 · Updated 3 months ago
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention, which r… ☆934 · Updated 2 weeks ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆644 · Updated 9 months ago
- MINT-1T: A one-trillion-token multimodal interleaved dataset. ☆801 · Updated 7 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆453 · Updated 11 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,443 · Updated 10 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆648 · Updated last month
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,367 · Updated 11 months ago
- Code for Quiet-STaR ☆721 · Updated 6 months ago
- Minimalistic large language model 3D-parallelism training ☆1,675 · Updated this week
- ☆905 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆666 · Updated 2 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆841 · Updated 3 weeks ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆927 · Updated 3 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆594 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆978 · Updated 7 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆499 · Updated 9 months ago
- Recipes to scale inference-time compute of open models ☆1,035 · Updated 2 weeks ago
- O1 Replication Journey ☆1,969 · Updated last month
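As noted in the HALOs entry above, several libraries in this list implement preference-optimization losses. Below is a minimal sketch of the DPO objective; it is illustrative only, and the function name and signature are assumptions, not any library's actual API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss.

    Each argument is a 1-D tensor of summed per-sequence log-probabilities
    for the chosen/rejected responses under the policy and a frozen reference.
    """
    chosen_ratio = policy_chosen_logps - ref_chosen_logps        # log pi/pi_ref for the preferred response
    rejected_ratio = policy_rejected_logps - ref_rejected_logps  # log pi/pi_ref for the dispreferred response
    # Push the chosen ratio above the rejected one; beta scales the implicit reward.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```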