RWKV-World-HF-Tokenizer · ☆34 · updated Jul 21, 2024
Alternatives and similar repositories for RWKV-World-HF-Tokenizer
Users interested in RWKV-World-HF-Tokenizer are comparing it to the libraries listed below.
- A fast RWKV Tokenizer written in Rust · ☆54 · updated Aug 12, 2025
- ☆13 · updated Dec 21, 2024
- ☆17 · updated Jan 1, 2025
- RWKV, in easy to read code · ☆73 · updated Mar 25, 2025
- Continuous batching and parallel acceleration for RWKV6 · ☆22 · updated Jun 28, 2024
- GoldFinch and other hybrid transformer components · ☆46 · updated Jul 20, 2024
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6 · ☆11 · updated Mar 1, 2024
- ☆33 · updated May 26, 2024
- Official Implementation of Knowledge Flow Prompting · ☆35 · updated Oct 20, 2025
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! · ☆148 · updated Aug 13, 2024
- ☆177 · updated Jan 13, 2026
- ☆41 · updated Apr 30, 2025
- Fast modular code to create and train cutting-edge LLMs · ☆68 · updated May 16, 2024
- A libcluster strategy for Digital Ocean Droplets · ☆12 · updated May 11, 2023
- 📖 Notebooks related to RWKV · ☆58 · updated May 13, 2023
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) · ☆55 · updated Mar 25, 2025
- The all-in-one RWKV runtime box with embed, RAG, AI agents, and more · ☆608 · updated Feb 22, 2026
- ☆27 · updated Feb 26, 2026
- A project for real-time training of the RWKV model · ☆50 · updated May 17, 2024
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… · ☆49 · updated Oct 21, 2025
- ☆32 · updated Jan 7, 2024
- ☆125 · updated Dec 15, 2023
- ☆23 · updated Dec 28, 2024
- ☆81 · updated May 15, 2024
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite context training, aligning. Exploring the… · ☆64 · updated Sep 19, 2025
- An inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation… · ☆134 · updated Jul 20, 2024
- ☆29 · updated Jul 9, 2024
- Testing library/runner for load and integration testing using intelligent bots · ☆20 · updated Apr 8, 2024
- Framework-agnostic Python runtime for RWKV models · ☆147 · updated Aug 24, 2023
- ☆18 · updated Mar 20, 2024
- readthedocs.org documentation for Inkplate boards · ☆10 · updated Aug 25, 2025
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks · ☆245 · updated Jan 13, 2026
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling · ☆40 · updated Dec 2, 2023
- Mini Model Daemon · ☆13 · updated Nov 9, 2024
- Vanilla Components - Ergonomic and Widely Reusable · ☆23 · updated Sep 3, 2016
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆17 · updated Jun 3, 2024
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it combines the best … · ☆10 · updated Nov 3, 2023
- [CVPR2026] BinaryAttention: One-Bit QK-Attention for Vision and Diffusion Transformers · ☆32 · updated Mar 17, 2026
- ☆44 · updated Dec 28, 2022
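Several entries above describe RWKV's defining property: it runs as an RNN at inference (constant state per token) yet can be trained in parallel like a GPT. A minimal NumPy sketch of that idea, assuming a simplified RWKV-4-style WKV operator (not the official implementation, and with no numerical stabilization): the same per-channel weighted average is computed once as an O(T) recurrence and once as an O(T²) attention-like sum, and the two agree.

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    """O(T) recurrent form: a (num, den) state pair per channel, like an RNN.
    k, v: (T, C) keys/values; w: (C,) positive per-channel decay; u: (C,) bonus."""
    T, C = k.shape
    num = np.zeros(C)                   # running decayed sum of exp(k_i) * v_i
    den = np.zeros(C)                   # running decayed sum of exp(k_i)
    out = np.zeros((T, C))
    for t in range(T):
        kt = np.exp(k[t])
        bonus = np.exp(u) * kt          # current token weighted by exp(u + k_t)
        out[t] = (num + bonus * v[t]) / (den + bonus)
        decay = np.exp(-w)              # decay past state, then absorb token t
        num = decay * num + kt * v[t]
        den = decay * den + kt
    return out

def wkv_parallel(k, v, w, u):
    """O(T^2) attention-like form of the same computation (parallelizable
    over t during training, like a causal-attention matrix)."""
    T, C = k.shape
    out = np.zeros((T, C))
    for t in range(T):
        # weight on past token i < t: exp(-(t-1-i)*w + k_i); on token t: exp(u + k_t)
        ws = [np.exp(-(t - 1 - i) * w + k[i]) for i in range(t)]
        ws.append(np.exp(u + k[t]))
        W = np.stack(ws)                # (t+1, C)
        out[t] = (W * v[: t + 1]).sum(axis=0) / W.sum(axis=0)
    return out
```

The recurrent form is what makes deployment cheap (fixed-size state, no KV cache); the parallel form is term-by-term the same weighted average written as an explicit sum over past tokens, which is why the model can be trained GPT-style.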