SmerkyG / gptcore
Fast modular code to create and train cutting-edge LLMs
☆65 · Updated 9 months ago
Alternatives and similar repositories for gptcore:
Users interested in gptcore are comparing it to the libraries listed below.
- RWKV, in easy to read code ☆66 · Updated 2 months ago
- RWKV-7: Surpassing GPT ☆77 · Updated 3 months ago
- GoldFinch and other hybrid transformer components ☆43 · Updated 7 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 9 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆118 · Updated 5 months ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 6 months ago
- Normalized Transformer (nGPT) ☆152 · Updated 3 months ago
- ☆53 · Updated last year
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆75 · Updated 2 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆215 · Updated 3 weeks ago
- Large-scale RWKV v6 and v7 (World, ARWKV) inference, capable of combining multiple states (pseudo-MoE). Easy to deploy on Docker… ☆30 · Updated this week
- Triton Implementation of HyperAttention Algorithm ☆46 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆76 · Updated 2 months ago
- Token Omission Via Attention ☆123 · Updated 4 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆116 · Updated 2 months ago
- ☆49 · Updated 11 months ago
- RWKV in nanoGPT style ☆187 · Updated 8 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 4 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆57 · Updated 3 weeks ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Evaluating the Mamba architecture on the Othello game ☆44 · Updated 9 months ago
- ☆51 · Updated 9 months ago
- Efficient optimizers ☆169 · Updated this week
- ☆78 · Updated 10 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆65 · Updated 9 months ago
- Evaluating LLMs with Dynamic Data ☆75 · Updated last week
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆51 · Updated 10 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆17 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆181 · Updated last month