AviSoori1x / makeMoE
From-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :)
☆ 573 · Updated last month
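For readers skimming the related projects below, here is a minimal sketch of the sparse mixture-of-experts block that a model like this is built around: a learned gate scores every expert per token, only the top-k experts actually run, and their outputs are mixed with the renormalized gate weights. All class names, sizes, and routing details are illustrative assumptions for this sketch, not the repo's actual code.

```python
# Illustrative sketch of a sparse MoE feed-forward block with top-k gating.
# Hyperparameters and names are assumptions for the example, not makeMoE's API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """One expert: a plain position-wise feed-forward network."""
    def __init__(self, n_embd):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_embd, 4 * n_embd),
            nn.ReLU(),
            nn.Linear(4 * n_embd, n_embd),
        )

    def forward(self, x):
        return self.net(x)


class SparseMoE(nn.Module):
    """Routes each token to its top-k experts and mixes their outputs
    with the gate weights, renormalized over the chosen experts."""
    def __init__(self, n_embd, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([Expert(n_embd) for _ in range(num_experts)])
        self.gate = nn.Linear(n_embd, num_experts)
        self.top_k = top_k

    def forward(self, x):
        B, T, C = x.shape
        logits = self.gate(x)                              # (B, T, num_experts)
        topk_vals, topk_idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)             # renormalize over top-k

        out = torch.zeros_like(x)
        flat_x = x.view(-1, C)
        flat_out = out.view(-1, C)                         # view: writes hit `out`
        flat_idx = topk_idx.view(-1, self.top_k)
        flat_w = weights.view(-1, self.top_k)

        for e, expert in enumerate(self.experts):
            # Tokens that selected expert e in any of their top-k slots.
            mask = flat_idx == e                           # (B*T, top_k)
            token_mask = mask.any(dim=-1)
            if token_mask.any():
                w = (flat_w * mask).sum(dim=-1)[token_mask].unsqueeze(-1)
                flat_out[token_mask] += w * expert(flat_x[token_mask])
        return out


# Example usage:
# moe = SparseMoE(n_embd=64)
# y = moe(torch.randn(2, 16, 64))   # output shape matches input: (2, 16, 64)
```

Running only top_k of num_experts per token is what makes the block "sparse": total parameter count grows with the number of experts while per-token compute stays roughly constant.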
Related projects:
- DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models (☆ 786, updated 5 months ago)
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) (☆ 695, updated last week)
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware (☆ 608, updated last month)
- [ICML 2024] CLLMs: Consistency Large Language Models (☆ 337, updated last month)
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI (☆ 1,309, updated 5 months ago)
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (☆ 595, updated 3 months ago)
- Codebase for Merging Language Models (ICML 2024) (☆ 745, updated 4 months ago)
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models (☆ 1,353, updated 6 months ago)
- SimPO: Simple Preference Optimization with a Reference-Free Reward (☆ 648, updated 3 weeks ago)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆ 1,099, updated 7 months ago)
- Minimalistic large language model 3D-parallelism training (☆ 1,116, updated this week)
- Scalable toolkit for efficient model alignment (☆ 509, updated this week)
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models (☆ 970, updated 8 months ago)
- OLMoE: Open Mixture-of-Experts Language Models (☆ 356, updated last week)
- Serving multiple LoRA-finetuned LLMs as one (☆ 946, updated 4 months ago)
- [ACL 2024] Progressive LLaMA with Block Expansion (☆ 464, updated 4 months ago)
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" (☆ 271, updated 4 months ago)
- Generative Representational Instruction Tuning (☆ 527, updated 2 weeks ago)
- FuseAI Project (☆ 436, updated last month)
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… (☆ 398, updated this week)
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning (☆ 539, updated 6 months ago)
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" (☆ 530, updated 4 months ago)
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition (☆ 572, updated last month)
- Distributed trainer for LLMs (☆ 524, updated 4 months ago)
- Reaching LLaMA2 Performance with 0.1M Dollars (☆ 957, updated last month)
- Reference implementation of the Megalodon 7B model (☆ 503, updated 5 months ago)
- Official repository for ORPO (☆ 409, updated 3 months ago)