Fast, modular code to create and train cutting-edge LLMs
☆68 · May 16, 2024 · Updated last year
Alternatives and similar repositories for gptcore
Users interested in gptcore are comparing it to the libraries listed below.
- ☆27 · Feb 26, 2026 · Updated last week
- GoldFinch and other hybrid transformer components ☆45 · Jul 20, 2024 · Updated last year
- RADLADS training code ☆37 · May 7, 2025 · Updated 10 months ago
- RWKV, in easy-to-read code ☆72 · Mar 25, 2025 · Updated 11 months ago
- ☆13 · May 11, 2025 · Updated 9 months ago
- Mini Model Daemon ☆12 · Nov 9, 2024 · Updated last year
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- ☆12 · Dec 14, 2024 · Updated last year
- RWKV-7 mini ☆12 · Mar 29, 2025 · Updated 11 months ago
- Direct Preference Optimization for RWKV, targeting RWKV-5 and -6 ☆11 · Mar 1, 2024 · Updated 2 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated last year
- ☆13 · Dec 21, 2024 · Updated last year
- ☆20 · Aug 1, 2024 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Aug 13, 2024 · Updated last year
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆48 · Aug 22, 2025 · Updated 6 months ago
- ☆67 · Mar 21, 2025 · Updated 11 months ago
- ☆176 · Jan 13, 2026 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆138 · Dec 17, 2024 · Updated last year
- ☆81 · May 15, 2024 · Updated last year
- Experiments on the impact of depth in transformers and SSMs. ☆41 · Oct 23, 2025 · Updated 4 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Apr 17, 2024 · Updated last year
- Role-playing based on the RWKV model; in practice a fork of RWKV_Role_Playing modified beyond recognition ☆17 · Aug 17, 2023 · Updated 2 years ago
- Some preliminary explorations of Mamba's context scaling. ☆13 · Dec 18, 2024 · Updated last year
- Course project for COMP4471 on RWKV ☆17 · Feb 11, 2024 · Updated 2 years ago
- ☆32 · May 26, 2024 · Updated last year
- The nanoGPT-style implementation of the RWKV Language Model - an RNN with GPT-level LLM performance. ☆198 · Nov 9, 2023 · Updated 2 years ago
- ☆20 · May 30, 2024 · Updated last year
- ☆19 · Dec 4, 2025 · Updated 3 months ago
- ☆17 · Jan 1, 2025 · Updated last year
- Here we will test various linear attention designs. ☆62 · Apr 25, 2024 · Updated last year
- Awesome RWKV Prompts for general users: user-friendly, ready-to-use prompt examples. ☆35 · Jan 24, 2025 · Updated last year
- ☆16 · May 8, 2024 · Updated last year
- RWKV models and examples powered by candle. ☆24 · Jan 19, 2026 · Updated last month
- train with kittens! ☆63 · Oct 25, 2024 · Updated last year
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆55 · Updated this week
- RWKV-7: Surpassing GPT ☆104 · Nov 17, 2024 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆10 · Nov 3, 2023 · Updated 2 years ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Feb 2, 2025 · Updated last year
- The simplest implementation of recent sparse-attention patterns for efficient LLM inference. ☆91 · Jul 17, 2025 · Updated 7 months ago