RWKV-7: Surpassing GPT
☆104 · Nov 17, 2024 · Updated last year
Alternatives and similar repositories for modded-nanogpt-rwkv
Users that are interested in modded-nanogpt-rwkv are comparing it to the libraries listed below.
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… · ☆49 · Oct 21, 2025 · Updated 6 months ago
- RWKV in nanoGPT style · ☆196 · Jun 9, 2024 · Updated last year
- GoldFinch and other hybrid transformer components · ☆46 · Jul 20, 2024 · Updated last year
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- Efficient RWKV inference engine. RWKV7 7.2B fp16 decoding 10250 tps @ single 5090. · ☆107 · Updated this week
- RWKV-7 mini · ☆12 · Mar 29, 2025 · Updated last year
- Here we will test various linear attention designs. · ☆62 · Apr 25, 2024 · Updated 2 years ago
- ☆33 · Oct 4, 2024 · Updated last year
- State tuning tunes the state · ☆35 · Feb 12, 2025 · Updated last year
- Flash-Linear-Attention models beyond language · ☆21 · Aug 28, 2025 · Updated 8 months ago
- HGRN2: Gated Linear RNNs with State Expansion · ☆57 · Aug 20, 2024 · Updated last year
- ☆177 · Jan 13, 2026 · Updated 3 months ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… · ☆40 · Apr 9, 2023 · Updated 3 years ago
- Efficient PScan implementation in PyTorch · ☆17 · Jan 2, 2024 · Updated 2 years ago
- Fast modular code to create and train cutting-edge LLMs · ☆67 · May 16, 2024 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … · ☆10 · Nov 3, 2023 · Updated 2 years ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆25 · Jun 6, 2024 · Updated last year
- ☆11 · Feb 20, 2025 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 · ☆28 · May 4, 2025 · Updated last year
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf · ☆21 · Jul 29, 2024 · Updated last year
- ☆107 · Mar 9, 2024 · Updated 2 years ago
- Analyse problems of AI with math and code · ☆27 · Jul 28, 2025 · Updated 9 months ago
- ☆26 · Feb 26, 2026 · Updated 2 months ago
- Explorations into the proposed SDFT, Self-Distillation Enables Continual Learning, from Shenfeld et al. of MIT · ☆32 · Feb 6, 2026 · Updated 3 months ago
- FlexAttention w/ FlashAttention3 Support · ☆27 · Oct 5, 2024 · Updated last year
- Implementation of a fast semantic chunker in C++, installable in Python 3.7+ projects. · ☆22 · Sep 20, 2025 · Updated 7 months ago
- RWKV-TS: Beyond Traditional Recurrent Neural Network for Time Series Tasks · ☆125 · Aug 16, 2024 · Updated last year
- ☆81 · May 15, 2024 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens · ☆28 · Sep 17, 2025 · Updated 7 months ago
- Code for "Robust Pose Estimation in Crowded Scenes with Direct Pose-Level Inference", NeurIPS 2021 · ☆15 · Dec 2, 2021 · Updated 4 years ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" · ☆256 · Jan 31, 2025 · Updated last year
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6. · ☆11 · Mar 1, 2024 · Updated 2 years ago
- RWKV, in easy-to-read code · ☆73 · Mar 25, 2025 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆35 · Jun 12, 2024 · Updated last year
- ☆44 · Mar 29, 2023 · Updated 3 years ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. · ☆89 · Mar 27, 2026 · Updated last month
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… · ☆44 · Jan 25, 2025 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models · ☆241 · Oct 14, 2025 · Updated 6 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! · ☆148 · Aug 13, 2024 · Updated last year