nlpodyssey / rwkv
RWKV (Receptance Weighted Key Value) is an RNN with Transformer-level performance
☆41 · Updated 2 years ago
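For orientation, below is a minimal sketch of the per-channel WKV time-mixing recurrence that lets RWKV run as an RNN at inference time while approximating attention. It is written in Go since the nlpodyssey projects are Go-based, but it is not this library's actual API: the function name `wkvStep` and the decay/bonus parameters `w` and `u` are illustrative, and real implementations evaluate the recurrence in log-space with a running maximum for numerical stability.

```go
package main

import (
	"fmt"
	"math"
)

// wkvStep computes one step of the RWKV "WKV" recurrence for a single channel,
// in its simplest (numerically naive) form. a and b carry the running weighted
// sums of values and of weights; w is the per-channel decay and u the bonus
// applied to the current token. Hypothetical sketch, not the library's API.
func wkvStep(a, b, w, u, k, v float64) (out, aNext, bNext float64) {
	e := math.Exp(u + k)
	out = (a + e*v) / (b + e)               // attention-like weighted average
	aNext = math.Exp(-w)*a + math.Exp(k)*v  // decay old contributions, add new
	bNext = math.Exp(-w)*b + math.Exp(k)
	return
}

func main() {
	// Toy (k, v) sequence for one channel, with illustrative parameters.
	ks := []float64{0.1, -0.3, 0.5}
	vs := []float64{1.0, 2.0, -1.0}
	var a, b float64
	w, u := 0.9, 0.2
	for t := range ks {
		var out float64
		out, a, b = wkvStep(a, b, w, u, ks[t], vs[t])
		fmt.Printf("t=%d wkv=%.4f\n", t, out)
	}
}
```

Because the state is just the pair (a, b) per channel, decoding needs O(1) memory per token instead of attending over the whole context, which is the sense in which RWKV is an RNN with Transformer-like behaviour.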
Alternatives and similar repositories for rwkv
Users interested in rwkv are comparing it to the libraries listed below
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆39 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- RWKV in nanoGPT style ☆193 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization. ☆55 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆62 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated 2 years ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆85 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- Adversarial Training and SFT for Bot Safety Models ☆40 · Updated 2 years ago
- ☆64 · Updated 5 months ago
- ☆26 · Updated 2 years ago
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated last year
- A converter and basic tester for rwkv onnx ☆42 · Updated last year
- RWKV centralised docs for the community ☆29 · Updated 2 months ago
- Here we collect trick questions and failed tasks for open source LLMs to improve them. ☆31 · Updated 2 years ago
- ☆39 · Updated last year
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆109 · Updated 2 years ago
- ☆127 · Updated 2 years ago
- Efficient RWKV inference engine. RWKV-7 7.2B fp16 decoding at 10250 tps on a single 5090. ☆46 · Updated last week
- Exploring fine-tuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Script and instructions for fine-tuning a large RWKV model on your data, such as the Alpaca dataset ☆31 · Updated 2 years ago
- Experiments on speculative sampling with Llama models ☆126 · Updated 2 years ago
- ☆81 · Updated last year
- Minimal code to train a Large Language Model (LLM). ☆172 · Updated 3 years ago
- Inference of Mamba models in pure C ☆191 · Updated last year
- An implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆42 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 8 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models"☆39Updated 11 months ago
- Web browser version of StarCoder.cpp☆44Updated 2 years ago
- Evaluating LLMs with Dynamic Data☆96Updated 2 months ago