RWKV-7: Surpassing GPT
☆104 · Nov 17, 2024 · Updated last year
Alternatives and similar repositories for modded-nanogpt-rwkv
Users interested in modded-nanogpt-rwkv are comparing it to the libraries listed below.
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… · ☆49 · Oct 21, 2025 · Updated 5 months ago
- RWKV in nanoGPT style · ☆196 · Jun 9, 2024 · Updated last year
- GoldFinch and other hybrid transformer components · ☆46 · Jul 20, 2024 · Updated last year
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- Efficient RWKV inference engine. RWKV7 7.2B fp16 decoding 10250 tps @ single 5090. · ☆98 · Updated this week
- RWKV-7 mini · ☆12 · Mar 29, 2025 · Updated last year
- Here we will test various linear attention designs. · ☆62 · Apr 25, 2024 · Updated last year
- ☆33 · Oct 4, 2024 · Updated last year
- RWKV6 in native PyTorch and Triton :) · ☆11 · Aug 4, 2024 · Updated last year
- High-performance tokenized language data-loader for Python C++ extension · ☆14 · Jul 22, 2024 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, … with efficient code. · ☆73 · Feb 2, 2025 · Updated last year
- Flash-Linear-Attention models beyond language · ☆21 · Aug 28, 2025 · Updated 7 months ago
- HGRN2: Gated Linear RNNs with State Expansion · ☆57 · Aug 20, 2024 · Updated last year
- ☆177 · Jan 13, 2026 · Updated 3 months ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… · ☆40 · Apr 9, 2023 · Updated 3 years ago
- Efficient PScan implementation in PyTorch · ☆17 · Jan 2, 2024 · Updated 2 years ago
- Fast modular code to create and train cutting-edge LLMs · ☆68 · May 16, 2024 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … · ☆10 · Nov 3, 2023 · Updated 2 years ago
- ☆11 · Feb 20, 2025 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 · ☆28 · May 4, 2025 · Updated 11 months ago
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf · ☆21 · Jul 29, 2024 · Updated last year
- Explorations into the proposed SDFT, Self-Distillation Enables Continual Learning, from Shenfeld et al. of MIT · ☆31 · Feb 6, 2026 · Updated 2 months ago
- ☆27 · Feb 26, 2026 · Updated last month
- FlexAttention w/ FlashAttention3 Support · ☆27 · Oct 5, 2024 · Updated last year
- Implementation of a fast semantic chunker in C++, installable in Python 3.7+ projects. · ☆22 · Sep 20, 2025 · Updated 6 months ago
- RWKV-TS: Beyond Traditional Recurrent Neural Network for Time Series Tasks · ☆126 · Aug 16, 2024 · Updated last year
- ☆81 · May 15, 2024 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens · ☆28 · Sep 17, 2025 · Updated 7 months ago
- Code for "Robust Pose Estimation in Crowded Scenes with Direct Pose-Level Inference", NeurIPS 2021 · ☆15 · Dec 2, 2021 · Updated 4 years ago
- ☆13 · Jun 16, 2021 · Updated 4 years ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" · ☆253 · Jan 31, 2025 · Updated last year
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6. · ☆11 · Mar 1, 2024 · Updated 2 years ago
- RWKV, in easy-to-read code · ☆73 · Mar 25, 2025 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆35 · Jun 12, 2024 · Updated last year
- ☆44 · Mar 29, 2023 · Updated 3 years ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. · ☆88 · Mar 27, 2026 · Updated 3 weeks ago
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… · ☆44 · Jan 25, 2025 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models · ☆240 · Oct 14, 2025 · Updated 6 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! · ☆148 · Aug 13, 2024 · Updated last year