playaswd / rwkv-by-hand-excel
This project demonstrates the computation process of the RWKV (Receptance Weighted Key Value) model through Excel spreadsheets.
☆18 · Updated 7 months ago
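For context, the computation the spreadsheet walks through is the RWKV time-mixing recurrence. Below is a minimal NumPy sketch of the RWKV-4-style WKV recurrence (later versions refine it, but the serial cell-by-cell structure is the same); the decay `w`, bonus `u`, and the `k`/`v`/`r` sequences follow the paper's notation, and the toy inputs are placeholders, not values from the workbook.

```python
import numpy as np

def wkv_recurrence(k, v, w, u):
    """RWKV-4-style WKV time mixing, computed step by step
    (the same serial recurrence a spreadsheet lays out row by row).

    k, v : (T, C) key and value sequences
    w    : (C,) per-channel decay (w > 0, applied as e^{-w} per step)
    u    : (C,) per-channel bonus weight for the current token
    """
    T, C = k.shape
    a = np.zeros(C)  # decayed running sum of e^{k_i} * v_i over past tokens
    b = np.zeros(C)  # decayed running sum of e^{k_i} over past tokens
    out = np.zeros((T, C))
    for t in range(T):
        # the current token gets the extra "bonus" weight e^{u + k_t}
        e = np.exp(u + k[t])
        out[t] = (a + e * v[t]) / (b + e)
        # decay the history, then absorb the current token into it
        # (real kernels subtract a running max for numerical stability;
        # omitted here for clarity)
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return out

# toy example: 4 time steps, 3 channels (placeholder values)
rng = np.random.default_rng(0)
T, C = 4, 3
k, v = rng.normal(size=(T, C)), rng.normal(size=(T, C))
w, u = np.ones(C) * 0.5, np.zeros(C)
wkv = wkv_recurrence(k, v, w, u)
r = rng.normal(size=(T, C))
out = 1 / (1 + np.exp(-r)) * wkv  # receptance gate: sigmoid(r) ⊙ wkv
print(out.shape)  # (4, 3)
```

Each loop iteration corresponds to one spreadsheet row: two running sums are decayed and updated, which is exactly the kind of cell-level arithmetic the workbook makes visible.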
Alternatives and similar repositories for rwkv-by-hand-excel
Users interested in rwkv-by-hand-excel are comparing it to the libraries listed below.
- RWKV in nanoGPT style ☆197 · Updated last year
- RWKV centralised docs for the community ☆29 · Updated 4 months ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 11 months ago
- Inference of RWKV v7 in pure C. ☆43 · Updated 2 months ago
- ☆148 · Updated last year
- RWKV, in easy-to-read code ☆72 · Updated 9 months ago
- ☆164 · Updated last week
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆54 · Updated 2 weeks ago
- Reinforcement learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the… ☆59 · Updated 3 months ago
- RWKV-7: Surpassing GPT ☆103 · Updated last year
- Implementation of Mamba in Rust ☆89 · Updated last year
- ☆63 · Updated 11 months ago
- Inference of Mamba models in pure C ☆196 · Updated last year
- Train your own small BitNet model ☆76 · Updated last year
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- Evaluating the Mamba architecture on the Othello game ☆49 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- Large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference combining multiple states (pseudo-MoE). Easy to deploy… ☆46 · Updated 2 months ago
- Token Omission Via Attention ☆128 · Updated last year
- ☆206 · Updated 3 weeks ago
- A single repo with all scripts and utils to train/fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆104 · Updated 7 months ago
- A pure and fast NumPy implementation of Mamba with cache support. ☆17 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 8 months ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆249 · Updated 11 months ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆164 · Updated 4 months ago
- noise_step: Training in 1.58b With No Gradient Memory ☆220 · Updated last year