lukasVierling / FaceRWKV
Course Project for COMP4471 on RWKV
☆17 · Updated last year
Alternatives and similar repositories for FaceRWKV
Users interested in FaceRWKV are comparing it to the libraries listed below.
- RWKV-7: Surpassing GPT ☆94 · Updated 8 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 3 months ago
- ☆9 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- tinygrad port of the RWKV large language model. ☆45 · Updated 5 months ago
- RWKV centralised docs for the community ☆28 · Updated 3 weeks ago
- A fast RWKV Tokenizer written in Rust ☆47 · Updated 3 weeks ago
- RWKV in nanoGPT style ☆191 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated 11 months ago
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆42 · Updated 2 weeks ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆72 · Updated 6 months ago
- An unsupervised model merging algorithm for Transformer-based language models. ☆106 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆31 · Updated last week
- A converter and basic tester for RWKV ONNX ☆42 · Updated last year
- ☆34 · Updated last year
- RWKV, in easy-to-read code ☆72 · Updated 4 months ago
- Fast modular code to create and train cutting-edge LLMs ☆67 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆82 · Updated 2 months ago
- ☆51 · Updated last year
- Video+code lecture on building nanoGPT from scratch ☆69 · Updated last year
- Train your own small BitNet model ☆75 · Updated 9 months ago
- Thin wrapper around GGML to make life easier ☆40 · Updated last month
- A large-scale RWKV v6, v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to de… ☆40 · Updated last week
- ☆38 · Updated 3 months ago
- Modeling code for a BitNet b1.58 Llama-style model. ☆25 · Updated last year
- State tuning tunes the state ☆35 · Updated 5 months ago
- ☆49 · Updated last year
- GoldFinch and other hybrid transformer components ☆46 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year