tensorpro / tpu_rwkv
JAX implementations of RWKV
☆19 · Updated 2 years ago
Alternatives and similar repositories for tpu_rwkv
Users interested in tpu_rwkv are comparing it to the libraries listed below.
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- Interpretability analysis of language model outliers and attempts to distill the model ☆13 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated 2 years ago
- RWKV model implementation ☆38 · Updated 2 years ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆73 · Updated 8 months ago
- ☆42 · Updated 2 years ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆39 · Updated 2 years ago
- A converter and basic tester for RWKV ONNX ☆42 · Updated last year
- RWKV in nanoGPT style ☆193 · Updated last year
- Training a reward model for RLHF using RWKV ☆15 · Updated 2 years ago
- RWKV, in easy-to-read code ☆72 · Updated 6 months ago
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆64 · Updated 2 years ago
- tinygrad port of the RWKV large language model ☆44 · Updated 7 months ago
- Fast modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆313 · Updated last year
- ☆11 · Updated 2 years ago
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this… ☆21 · Updated 2 years ago
- Token Omission Via Attention ☆127 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 5 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆62 · Updated 2 years ago
- ☆28 · Updated last year
- RWKV-7: Surpassing GPT ☆97 · Updated 10 months ago
- RWKV-7 mini ☆11 · Updated 6 months ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆58 · Updated 3 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- ☆15 · Updated 7 months ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆10 · Updated last year