SmerkyG / gptcore
Fast, modular code to create and train cutting-edge LLMs
☆68 · May 16, 2024 · Updated last year
Alternatives and similar repositories for gptcore
Users interested in gptcore are comparing it to the libraries listed below.
- ☆27 · Jul 28, 2025 · Updated 6 months ago
- GoldFinch and other hybrid transformer components · ☆45 · Jul 20, 2024 · Updated last year
- Continuous batching and parallel acceleration for RWKV6 · ☆22 · Jun 28, 2024 · Updated last year
- RADLADS training code · ☆37 · May 7, 2025 · Updated 9 months ago
- RWKV, in easy-to-read code · ☆72 · Mar 25, 2025 · Updated 10 months ago
- ☆12 · Dec 14, 2024 · Updated last year
- ☆13 · May 11, 2025 · Updated 9 months ago
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- Mini Model Daemon · ☆12 · Nov 9, 2024 · Updated last year
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6 · ☆11 · Mar 1, 2024 · Updated last year
- ☆13 · Dec 21, 2024 · Updated last year
- RWKV-7 mini · ☆12 · Mar 29, 2025 · Updated 10 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" · ☆18 · Mar 15, 2024 · Updated last year
- ☆20 · Aug 1, 2024 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! · ☆148 · Aug 13, 2024 · Updated last year
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton · ☆48 · Aug 22, 2025 · Updated 5 months ago
- ☆171 · Jan 13, 2026 · Updated last month
- ☆67 · Mar 21, 2025 · Updated 10 months ago
- Experiments on the impact of depth in transformers and SSMs · ☆40 · Oct 23, 2025 · Updated 3 months ago
- A MAD laboratory to improve AI architecture designs 🧪 · ☆138 · Dec 17, 2024 · Updated last year
- ☆81 · May 15, 2024 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" · ☆27 · Apr 17, 2024 · Updated last year
- Role-playing based on the RWKV model; in practice a fork of RWKV_Role_Playing modified beyond recognition · ☆17 · Aug 17, 2023 · Updated 2 years ago
- Some preliminary explorations of Mamba's context scaling · ☆13 · Dec 18, 2024 · Updated last year
- Course Project for COMP4471 on RWKV · ☆17 · Feb 11, 2024 · Updated 2 years ago
- ☆32 · May 26, 2024 · Updated last year
- The nanoGPT-style implementation of the RWKV Language Model, an RNN with GPT-level LLM performance.