Antlera / nanoGPT-moe
Enable MoE (Mixture of Experts) for nanoGPT.
☆26, updated last year
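The repository's own code is not reproduced on this page. As a rough illustration of what a Mixture-of-Experts layer does, here is a minimal NumPy sketch of top-k expert routing; the function name, shapes, and routing details are illustrative assumptions, not taken from nanoGPT-moe:

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Minimal top-k Mixture-of-Experts sketch (illustrative, not nanoGPT-moe's code).

    x:       (d,) token embedding
    gate_w:  (num_experts, d) router weight matrix
    experts: list of callables, each mapping (d,) -> (d,)
    """
    logits = gate_w @ x                            # router score per expert
    top = np.argsort(logits)[-top_k:]              # indices of the k best-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                       # softmax over the selected experts only
    # Weighted combination of the selected experts' outputs
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

The point of this routing scheme is that only `top_k` of the experts run per token, so parameter count grows with the number of experts while per-token compute stays roughly constant.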
Alternatives and similar repositories for nanoGPT-moe: users interested in nanoGPT-moe are comparing it to the libraries listed below.
- Token Omission Via Attention (☆126, updated 6 months ago)
- EvaByte: Efficient Byte-level Language Models at Scale (☆88, updated this week)
- The code repository for the CURLoRA research paper: stable LLM continual fine-tuning and catastrophic forgetting mitigation (☆43, updated 8 months ago)
- Repository for the paper Stream of Search: Learning to Search in Language (☆145, updated 2 months ago)
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… (☆128, updated 8 months ago)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters (☆126, updated 4 months ago)
- Small and Efficient Mathematical Reasoning LLMs (☆71, updated last year)
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding (☆171, updated 3 months ago)
- ☆60, updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging (☆36, updated 11 months ago)
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning, https://arxiv.org/pdf/2410.01044 (☆32, updated 6 months ago)
- Layer-Condensed KV cache with 10x larger batch size, fewer parameters, and less computation; dramatic speedup with better task performance… (☆148, updated 3 weeks ago)
- This is the official repository for Inheritune (☆111, updated 2 months ago)
- Code repository for the c-BTM paper (☆106, updated last year)
- Spherical merging of PyTorch/HF-format language models with minimal feature loss (☆120, updated last year)
- RWKV-7: Surpassing GPT (☆83, updated 5 months ago)
- ☆96, updated 10 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" (☆231, updated 2 months ago)
- A single repo with all scripts and utilities to train or fine-tune the Mamba model with or without FIM (☆54, updated last year)
- Replicating O1 inference-time scaling laws (☆83, updated 4 months ago)
- Experiments for efforts to train a new and improved T5 (☆77, updated last year)
- Experiments on speculative sampling with Llama models (☆125, updated last year)
- This repo is based on https://github.com/jiaweizzhao/GaLore (☆26, updated 7 months ago)
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file (☆170, updated last month)
- ☆125, updated last year
- ☆67, updated 9 months ago
- ☆45, updated last year
- A repository for research on medium-sized language models (☆76, updated 11 months ago)
- The official repo for "LLoCo: Learning Long Contexts Offline" (☆116, updated 10 months ago)
- ☆114, updated 2 months ago