kjslag / spacebyte
A byte-level decoder architecture that matches the performance of tokenized Transformers.
☆63 · Updated last year
Alternatives and similar repositories for spacebyte
Users interested in spacebyte are comparing it to the libraries listed below.
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 8 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆87 · Updated last year
- ☆79 · Updated 10 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 6 months ago
- RWKV-7: Surpassing GPT ☆91 · Updated 7 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 9 months ago
- ☆81 · Updated last year
- Research implementation of Native Sparse Attention (2502.11089) ☆54 · Updated 4 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆127 · Updated 10 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated last week
- A repository for research on medium-sized language models. ☆76 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆57 · Updated last month
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆52 · Updated 3 months ago
- ☆47 · Updated 9 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last month
- RWKV, in easy-to-read code ☆72 · Updated 2 months ago
- Experiments toward training a new and improved T5 ☆77 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated 11 months ago
- This is the official repository for Inheritune. ☆111 · Updated 4 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆101 · Updated 2 months ago
- ☆51 · Updated 7 months ago
- Some preliminary explorations of Mamba's context scaling. ☆214 · Updated last year
- ☆109 · Updated last year
- Mixture of A Million Experts ☆46 · Updated 10 months ago
- Deep learning library implemented from scratch in numpy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ☆51 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆182 · Updated 5 months ago
- ☆49 · Updated last year
- ☆98 · Updated 5 months ago