wozeparrot / tinyrwkv
tinygrad port of the RWKV large language model.
☆45 · Updated 4 months ago
Alternatives and similar repositories for tinyrwkv
Users interested in tinyrwkv are comparing it to the repositories listed below.
- ☆40 · Updated 2 years ago
- GGML implementation of the BERT model with Python bindings and quantization. ☆55 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- Train your own small BitNet model ☆74 · Updated 8 months ago
- RWKV-7: Surpassing GPT ☆92 · Updated 7 months ago
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Inference of Mamba models in pure C ☆188 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆21 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆72 · Updated 5 months ago
- ☆35 · Updated 2 years ago
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆105 · Updated last year
- ☆49 · Updated last year
- RWKV in nanoGPT style ☆191 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Command-line script for inferencing from models such as MPT-7B-Chat ☆101 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆312 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with a LLaMA implementation. ☆71 · Updated 2 years ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 9 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- Port of Facebook's LLaMA model in C/C++ ☆22 · Updated last year
- ☆61 · Updated last year