nlpodyssey / rwkv
RWKV (Receptance Weighted Key Value) is an RNN with Transformer-level performance
☆41 · Updated 2 years ago
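As a quick orientation for the description above: RWKV replaces attention with a per-channel, exponentially decayed weighted average that can be evaluated recurrently with a fixed-size state. The sketch below is illustrative only; it shows the simplified WKV recurrence in NumPy, not the API of nlpodyssey/rwkv, and the function name, arguments, and the omission of the usual numerical-stability rescaling are assumptions made for readability.

```python
import numpy as np

def wkv_recurrence(k, v, w, u):
    """Simplified per-channel RWKV time-mixing (WKV) recurrence.

    k, v : length-T sequences of key and value activations for one channel
    w    : positive decay; past contributions shrink by exp(-w) each step
    u    : "bonus" added to the current token's key before it is read out

    Returns the length-T sequence of wkv outputs. The log-space
    renormalisation used by real implementations for numerical
    stability is omitted to keep the recurrence readable.
    """
    k = np.asarray(k, dtype=np.float64)
    v = np.asarray(v, dtype=np.float64)
    a, b = 0.0, 0.0                  # running weighted sum of values / of weights
    out = np.zeros_like(v)
    for t in range(len(k)):
        e_now = np.exp(u + k[t])                   # current token, boosted by u
        out[t] = (a + e_now * v[t]) / (b + e_now)  # attention-like weighted average
        e_t = np.exp(k[t])
        a = np.exp(-w) * a + e_t * v[t]            # constant-size state update:
        b = np.exp(-w) * b + e_t                   # this is what makes RWKV an RNN
    return out
```

Because the state is just the pair (a, b) per channel, generation costs constant memory and time per token like a classic RNN, while the same weighted-average form can be computed over the whole sequence at training time, which is where the comparison to Transformers comes from.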
Alternatives and similar repositories for rwkv
Users that are interested in rwkv are comparing it to the libraries listed below
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /…☆40 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond!☆147 · Updated last year
- RWKV in nanoGPT style☆195 · Updated last year
- ☆65 · Updated 8 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit☆63 · Updated 2 years ago
- GGML implementation of BERT model with Python bindings and quantization.☆58 · Updated last year
- A converter and basic tester for rwkv onnx☆43 · Updated last year
- ☆39 · Updated last year
- Implementation of the Mamba SSM with hf_integration.☆56 · Updated last year
- GoldFinch and other hybrid transformer components☆45 · Updated last year
- Here we collect trick questions and failed tasks for open source LLMs to improve them.☆32 · Updated 2 years ago
- Framework agnostic python runtime for RWKV models☆147 · Updated 2 years ago
- ☆81 · Updated last year
- RWKV centralised docs for the community☆29 · Updated 4 months ago
- A fast RWKV Tokenizer written in Rust☆54 · Updated 4 months ago
- Tooling for exact and MinHash deduplication of large-scale text datasets☆44 · Updated this week
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy…☆46 · Updated 2 months ago
- Trying to deconstruct RWKV in understandable terms☆14 · Updated 2 years ago
- tinygrad port of the RWKV large language model.☆45 · Updated 9 months ago
- Inference of Mamba models in pure C☆195 · Updated last year
- ☆92 · Updated 3 years ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia☆41 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile☆116 · Updated 2 years ago
- Inference script for Meta's LLaMA models using Hugging Face wrapper☆110 · Updated 2 years ago
- A library for simplifying training with multi-GPU setups in the HuggingFace / PyTorch ecosystem.☆16 · Updated last week
- An Implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4"☆44 · Updated last year
- A Python implementation of Toolformer using Huggingface Transformers☆14 · Updated 2 years ago
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code.☆73 · Updated 10 months ago
- The data processing pipeline for the Koala chatbot language model☆118 · Updated 2 years ago
- Evaluating LLMs with Dynamic Data☆99 · Updated last week