wozeparrot / tinyrwkv
tinygrad port of the RWKV large language model.
☆45 · Updated 2 months ago
Alternatives and similar repositories for tinyrwkv
Users interested in tinyrwkv are comparing it to the libraries listed below.
- ☆40 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆62 · Updated last year
- ☆49 · Updated last year
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia ☆40 · Updated 2 years ago
- RWKV-7: Surpassing GPT ☆88 · Updated 6 months ago
- RWKV in nanoGPT style ☆189 · Updated 11 months ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- Train your own small BitNet model ☆71 · Updated 7 months ago
- ☆26 · Updated 2 years ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- RWKV, in easy-to-read code ☆72 · Updated 2 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- New optimizer ☆20 · Updated 10 months ago
- Inference of Mamba models in pure C ☆186 · Updated last year
- Centralised RWKV docs for the community ☆25 · Updated 2 months ago
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Preprint: Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆28 · Updated last year
- [WIP] Transformer to embed Danbooru label sets ☆13 · Updated last year
- ☆60 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- ☆72 · Updated last year
- Simple GRPO scripts and configurations ☆58 · Updated 3 months ago
- ☆41 · Updated 2 years ago
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆35 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, etc. with efficient code ☆72 · Updated 4 months ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation ☆70 · Updated 2 years ago