nlpodyssey / rwkv
RWKV (Receptance Weighted Key Value) is an RNN with Transformer-level performance
☆41 · Updated 2 years ago
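For context on the one-line description above: RWKV replaces full attention with a linear-time recurrence in which each channel keeps a decayed running sum of past keyed values, so inference cost per token is constant in sequence length. Below is a minimal sketch of the RWKV-4 "WKV" recurrence in that spirit; it is NumPy pseudocode in naive form, without the numerical-stability rescaling real implementations use, and the function name and parameters (`w` decay, `u` bonus) are illustrative rather than taken from nlpodyssey/rwkv.

```python
import numpy as np

def wkv_recurrence(k, v, w, u):
    """Naive RWKV-4 "WKV" recurrence over a sequence.

    k, v : (T, C) key / value series per channel
    w    : (C,) per-channel decay (applied as exp(-w) each step)
    u    : (C,) per-channel bonus weight for the current token

    Returns a (T, C) output series. The state (a, b) has fixed size,
    which is what lets RWKV run like an RNN at inference time.
    """
    T, C = k.shape
    a = np.zeros(C)            # decayed sum of exp(k_i) * v_i over the past
    b = np.zeros(C)            # decayed sum of exp(k_i) over the past
    out = np.empty((T, C))
    for t in range(T):
        e_cur = np.exp(u + k[t])               # extra weight for the current token
        out[t] = (a + e_cur * v[t]) / (b + e_cur)
        decay = np.exp(-w)                      # geometric decay of the past
        a = decay * a + np.exp(k[t]) * v[t]
        b = decay * b + np.exp(k[t])
    return out
```

Because the recurrent state never grows with context length, this is the basis of the "RNN with Transformer-level performance" claim: training can be parallelized over time like a Transformer, while generation needs only the constant-size state.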
Alternatives and similar repositories for rwkv
Users interested in rwkv are comparing it to the libraries listed below.
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- RWKV in nanoGPT style ☆197 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated 2 years ago
- RWKV centralised docs for the community ☆31 · Updated 4 months ago
- A converter and basic tester for rwkv onnx ☆43 · Updated last year
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- ☆80 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- ☆39 · Updated last year
- Inference code for LLaMA 2 models ☆30 · Updated last year
- Tooling for exact and MinHash deduplication of large-scale text datasets ☆51 · Updated this week
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Evaluating LLMs with Dynamic Data ☆105 · Updated this week
- Inference of Mamba models in pure C ☆196 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆46 · Updated 2 months ago
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- Code base for internal reward models and PPO training ☆24 · Updated 2 years ago
- An Implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆43 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆223 · Updated this week
- Here we collect trick questions and failed tasks for open source LLMs to improve them. ☆32 · Updated 2 years ago
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 11 months ago
- RWKV model implementation ☆38 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- minichatgpt - To Train ChatGPT In 5 Minutes ☆169 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆65 · Updated 2 years ago