lukasVierling / FaceRWKV
Course Project for COMP4471 on RWKV
☆17Updated last year
Alternatives and similar repositories for FaceRWKV:
Users interested in FaceRWKV are comparing it to the libraries listed below.
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27Updated this week
- Experiments with BitNet inference on CPU☆53Updated last year
- RWKV centralised docs for the community☆24Updated last month
- RWKV-7: Surpassing GPT☆83Updated 5 months ago
- An open source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO☆29Updated last week
- RWKV, in easy-to-read code☆72Updated last month
- A converter and basic tester for rwkv onnx☆42Updated last year
- A large-scale RWKV v6/v7 (World, ARWKV, PRWKV) inference engine. Capable of inference by combining multiple states (pseudo MoE). Easy to deploy o…☆35Updated this week
- Thin wrapper around GGML to make life easier☆27Updated this week
- tinygrad port of the RWKV large language model.☆44Updated last month
- A fast RWKV Tokenizer written in Rust☆45Updated last month
- ☆49Updated last year
- ☆34Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs.☆42Updated 11 months ago
- GoldFinch and other hybrid transformer components☆45Updated 9 months ago
- Lightweight continuous batching with OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper.☆22Updated last month
- ☆46Updated 9 months ago
- Inference RWKV v7 in pure C.☆33Updated last month
- QuIP quantization☆52Updated last year
- Trying to deconstruct RWKV in understandable terms☆14Updated last year
- ☆34Updated this week
- The source code of the game I made for the HuggingFace game jam☆14Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM☆54Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this…☆20Updated 2 years ago
- RWKV v5/v6 LoRA Trainer on CUDA and ROCm platforms. RWKV is an RNN with transformer-level LLM performance. It can be directly trained like …☆13Updated last year
- ☆53Updated 11 months ago
- Fine-tunes a student LLM using teacher feedback for improved reasoning and answer quality. Implements GRPO with teacher-provided evaluati…☆41Updated 2 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated☆32Updated 8 months ago
- BlinkDL's RWKV-v4 running in the browser☆47Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks☆31Updated 11 months ago