AviSoori1x / makeMoE
From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :)
☆633 · Updated 3 months ago
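For orientation on what the repo implements, here is a minimal sketch of the general sparse mixture-of-experts technique — not makeMoE's actual code, and all names (`SparseMoE`, `d_model`, etc.) are illustrative. A learned router scores the experts per token, keeps only the top-k, and mixes those experts' outputs with the router's renormalized softmax weights:

```python
# Minimal sparse MoE layer sketch (illustrative, assuming PyTorch; not makeMoE's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # gating network: one score per expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        logits = self.router(x)                         # (B, T, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over the chosen k
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = idx == i                             # (B, T, k): where expert i was selected
            if mask.any():
                tok = mask.any(dim=-1)                  # (B, T): tokens routed to expert i
                w = (weights * mask).sum(dim=-1)[tok]   # gate weight for those tokens
                out[tok] += w.unsqueeze(-1) * expert(x[tok])
        return out
```

Because only k of the experts run per token, total parameter count grows with the number of experts while per-token compute stays roughly constant — the core trade-off sparse MoE models exploit.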
Alternatives and similar repositories for makeMoE:
Users interested in makeMoE are comparing it to the repositories listed below.
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,445 · Updated 11 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆698 · Updated 4 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆814 · Updated 3 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆798 · Updated this week
- MINT-1T: A one trillion token multimodal interleaved dataset. ☆797 · Updated 6 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,359 · Updated 10 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆582 · Updated 11 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆371 · Updated 3 months ago
- Minimalistic large language model 3D-parallelism training ☆1,445 · Updated this week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆451 · Updated 10 months ago
- Comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model training… ☆136 · Updated 10 months ago
- Serving multiple LoRA finetuned LLMs as one ☆1,025 · Updated 9 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆638 · Updated 8 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,493 · Updated 3 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,191 · Updated 4 months ago
- RewardBench: the first evaluation tool for reward models. ☆503 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆590 · Updated last month
- Open weights language model from Google DeepMind, based on Griffin. ☆620 · Updated 7 months ago
- Codebase for Merging Language Models (ICML 2024) ☆793 · Updated 9 months ago
- A simple and effective LLM pruning approach. ☆709 · Updated 6 months ago
- Official repository for ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient an… ☆619 · Updated this week
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆389 · Updated 8 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆497 · Updated 8 months ago
- Recipes to scale inference-time compute of open models ☆992 · Updated last month
- Recipes to train reward models for RLHF. ☆1,160 · Updated last week
- Code for Quiet-STaR ☆711 · Updated 5 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆547 · Updated last month