wozeparrot / tinyrwkv
tinygrad port of the RWKV large language model.
☆46 · Updated 3 months ago
Alternatives and similar repositories for tinyrwkv
Users interested in tinyrwkv are comparing it to the libraries listed below.
- ☆40 · Updated 2 years ago
- ☆49 · Updated last year
- RWKV, in easy-to-read code ☆72 · Updated 2 months ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- RWKV-7: Surpassing GPT ☆91 · Updated 7 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization. ☆55 · Updated last year
- RWKV in nanoGPT style ☆191 · Updated last year
- ☆42 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- Train your own small bitnet model ☆72 · Updated 8 months ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Python bindings for ggml ☆141 · Updated 9 months ago
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆71 · Updated 4 months ago
- A relatively basic implementation of RWKV in Rust written by someone with very little math and ML knowledge. Supports 32, 8 and 4 bit eva… ☆93 · Updated last year
- Fast modular code to create and train cutting-edge LLMs ☆67 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated 2 years ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆21 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 8 months ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Inference of Mamba models in pure C ☆187 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆74 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago