TorchRWKV / rwkv-kit
☆14 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for rwkv-kit
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. ☆181 · Updated this week
- Centralised RWKV docs for the community. ☆19 · Updated 2 months ago
- RWKV, in easy-to-read code. ☆55 · Updated last week
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6. ☆11 · Updated 8 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention". ☆91 · Updated last month
- An inference framework for the RWKV large language model, implemented purely in native PyTorch. The official native implementation… ☆118 · Updated 3 months ago
- Fast modular code to create and train cutting-edge LLMs. ☆65 · Updated 5 months ago
- Evaluating LLMs with Dynamic Data. ☆68 · Updated this week
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆133 · Updated 3 months ago
- Continuous batching and parallel acceleration for RWKV6. ☆23 · Updated 4 months ago
- Unofficial implementation of Evolutionary Model Merging. ☆33 · Updated 7 months ago
- RWKV in nanoGPT style. ☆177 · Updated 5 months ago
- A project for real-time training of the RWKV model. ☆50 · Updated 5 months ago
- An implementation of "Q-Sparse: All Large Language Models can be Fully Sparsely-Activated". ☆30 · Updated 2 months ago
- A fast RWKV tokenizer written in Rust. ☆36 · Updated 2 months ago
- Awesome RWKV Prompts: user-friendly, ready-to-use prompt examples for general users. ☆29 · Updated 3 months ago
- An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention". ☆39 · Updated 2 months ago
- HGRN2: Gated Linear RNNs with State Expansion. ☆49 · Updated 2 months ago
- QuIP quantization. ☆46 · Updated 7 months ago
- Official implementation of Phi-Mamba, a MOHAWK-distilled model ("Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode…"). ☆77 · Updated last month
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆49 · Updated 6 months ago
- Inference speed benchmark for "Learning to (Learn at Test Time): RNNs with Expressive Hidden States". ☆38 · Updated 3 months ago
- A testbed for various linear-attention designs. ☆56 · Updated 6 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models". ☆67 · Updated last week
- Community implementation of the paper "Multi-Head Mixture-of-Experts", in PyTorch. ☆18 · Updated last week