tensorpro / tpu_rwkv
JAX implementations of RWKV
☆19 · Updated last year
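To make the subject of this listing concrete: the core computation a JAX port of RWKV (v4-style) implements is the channel-wise WKV recurrence, an exponentially decayed weighted average of past values. Below is a minimal, illustrative sketch using `jax.lax.scan`; the function name `wkv` and its argument layout are assumptions for this example, not the actual tpu_rwkv API, and the naive `exp` form omits the running-max trick real implementations use for numerical stability.

```python
import jax
import jax.numpy as jnp

def wkv(w, u, ks, vs):
    """Sequential RWKV-v4-style WKV recurrence (illustrative sketch).

    w:  (d,) positive channel-wise decay
    u:  (d,) bonus applied to the current token
    ks, vs: (T, d) key and value sequences
    Returns the (T, d) WKV outputs.
    """
    def step(carry, kv):
        a, b = carry  # running numerator / denominator sums
        k, v = kv
        # Current token gets the extra bonus u; past tokens live in (a, b).
        out = (a + jnp.exp(u + k) * v) / (b + jnp.exp(u + k))
        # Decay the history and fold in the current token.
        a = jnp.exp(-w) * (a + jnp.exp(k) * v)
        b = jnp.exp(-w) * (b + jnp.exp(k))
        return (a, b), out

    init = (jnp.zeros_like(u), jnp.zeros_like(u))
    _, outs = jax.lax.scan(step, init, (ks, vs))
    return outs
```

Expressing the recurrence as a `scan` keeps it jit-compatible on TPU while avoiding a Python-level loop over the sequence.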
Related projects
Alternatives and complementary repositories for tpu_rwkv
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- A converter and basic tester for RWKV ONNX ☆41 · Updated 9 months ago
- ☆42 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆20 · Updated last year
- RWKV, in easy-to-read code ☆54 · Updated last week
- Course Project for COMP4471 on RWKV ☆16 · Updated 8 months ago
- RWKV-7: Surpassing GPT ☆40 · Updated this week
- tinygrad port of the RWKV large language model. ☆43 · Updated 4 months ago
- RWKV model implementation ☆38 · Updated last year
- Training a reward model for RLHF using RWKV. ☆14 · Updated last year
- Interpretability analysis of language model outliers and attempts to distill the model ☆13 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated last year
- Chatbot that answers frequently asked questions in French, English, and Tunisian using the Rasa NLU framework and RWKV-4-Raven ☆13 · Updated last year
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆63 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆36 · Updated last year
- Fast modular code to create and train cutting-edge LLMs ☆65 · Updated 5 months ago
- ☆18 · Updated last week
- RWKV in nanoGPT style ☆176 · Updated 5 months ago
- Experiments with BitNet inference on CPU ☆50 · Updated 7 months ago
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆30 · Updated 2 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆20 · Updated this week
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated last year
- Token Omission Via Attention ☆119 · Updated 3 weeks ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆307 · Updated 9 months ago
- ☆13 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆70 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ☆41 · Updated last year
- Implementation of Spectral State Space Models ☆17 · Updated 8 months ago