Antlera / nanoGPT-moe
Enable MoE for nanoGPT.
☆21 · Updated 11 months ago
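Enabling MoE in a nanoGPT-style model generally means swapping the dense MLP inside each transformer block for a routed mixture-of-experts layer. The sketch below illustrates that general idea only; it is not this repository's actual code, and the expert count, top-k routing, and class names are assumptions.

```python
# Minimal MoE sketch (illustrative, not Antlera/nanoGPT-moe's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """One feed-forward expert, shaped like nanoGPT's MLP (4x expansion)."""

    def __init__(self, n_embd: int):
        super().__init__()
        self.fc = nn.Linear(n_embd, 4 * n_embd)
        self.proj = nn.Linear(4 * n_embd, n_embd)

    def forward(self, x):
        return self.proj(F.gelu(self.fc(x)))


class MoE(nn.Module):
    """Top-k routed mixture of experts that can stand in for the dense MLP in a block."""

    def __init__(self, n_embd: int, n_expert: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(Expert(n_embd) for _ in range(n_expert))
        self.router = nn.Linear(n_embd, n_expert, bias=False)
        self.top_k = top_k

    def forward(self, x):
        B, T, C = x.shape
        tokens = x.view(-1, C)                          # (B*T, C)
        logits = self.router(tokens)                    # (B*T, n_expert)
        weights, idx = logits.topk(self.top_k, dim=-1)  # each token picks its top_k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            rows, slots = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if rows.numel() > 0:
                out[rows] += weights[rows, slots].unsqueeze(-1) * expert(tokens[rows])
        return out.view(B, T, C)


if __name__ == "__main__":
    moe = MoE(n_embd=64)
    y = moe(torch.randn(2, 16, 64))
    print(y.shape)  # torch.Size([2, 16, 64])
```

Real MoE training code typically adds a load-balancing auxiliary loss and batched expert dispatch; the per-expert loop here is kept only for readability.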
Related projects
Alternatives and complementary repositories for nanoGPT-moe
- Token Omission Via Attention ☆119 · Updated last month
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆30 · Updated last month
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation. Dramatic speedup with better task performance… ☆137 · Updated this week
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆84 · Updated 3 months ago
- Code repository for the c-BTM paper ☆105 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆104 · Updated last month
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆79 · Updated this week
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated 9 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆79Updated this week
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated last month
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆61 · Updated 7 months ago
- A repository for research on medium-sized language models ☆74 · Updated 5 months ago
- The official implementation of Self-Exploring Language Models (SELM) ☆56 · Updated 5 months ago
- Official repository for the ICML 2024 paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" ☆69 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- Data preparation code for CrystalCoder 7B LLM ☆42 · Updated 6 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆44 · Updated 9 months ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆40 · Updated 3 months ago
- A single repo with all scripts and utils to train/fine-tune the Mamba model with or without FIM ☆49 · Updated 7 months ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆28 · Updated 8 months ago
- Critique-out-Loud Reward Models ☆36 · Updated 3 weeks ago