nlpodyssey / rwkv
RWKV (Receptance Weighted Key Value) is an RNN with Transformer-level performance
☆41 · Updated 2 years ago
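The tagline above refers to RWKV's core trick: replacing attention with a per-channel linear recurrence (the "WKV" computation), so inference runs in O(1) state per token like an RNN. As a rough illustration, here is a minimal NumPy sketch of the v4-style WKV recurrence; the function and parameter names are illustrative assumptions, not this repository's API.

```python
import numpy as np

def wkv(k, v, w, u):
    """Illustrative sketch of the RWKV (v4-style) WKV recurrence.

    k, v: (T, C) key and value sequences
    w:    (C,) per-channel decay (positive; applied as exp(-w))
    u:    (C,) per-channel "bonus" applied to the current token
    Returns a (T, C) array of weighted key-value outputs.
    """
    T, C = k.shape
    out = np.zeros((T, C))
    a = np.zeros(C)  # running numerator:   decayed sum of exp(k_i) * v_i
    b = np.zeros(C)  # running denominator: decayed sum of exp(k_i)
    for t in range(T):
        ek = np.exp(k[t])
        # the current token gets the extra bonus u before it enters the state
        out[t] = (a + np.exp(u + k[t]) * v[t]) / (b + np.exp(u + k[t]))
        a = np.exp(-w) * a + ek * v[t]
        b = np.exp(-w) * b + ek
    return out
```

Each output channel is a convex combination of past values, which is why the state stays fixed-size regardless of context length; production implementations compute this in log-space for numerical stability.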
Alternatives and similar repositories for rwkv
Users interested in rwkv are comparing it to the libraries listed below.
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- RWKV in nanoGPT style ☆196 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on adapts the model's context limit ☆63 · Updated 2 years ago
- Framework agnostic python runtime for RWKV models ☆147 · Updated 2 years ago
- A converter and basic tester for rwkv onnx ☆43 · Updated last year
- ☆39 · Updated last year
- ☆65 · Updated 7 months ago
- GGML implementation of BERT model with Python bindings and quantization. ☆58 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- RWKV centralised docs for the community ☆30 · Updated 3 months ago
- ☆81 · Updated last year
- Here we collect trick questions and failed tasks for open source LLMs to improve them. ☆32 · Updated 2 years ago
- An Implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆44 · Updated last year
- ☆39 · Updated 3 years ago
- RWKV model implementation ☆38 · Updated 2 years ago
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- Inference of Mamba models in pure C ☆193 · Updated last year
- Efficient RWKV inference engine. RWKV7 7.2B fp16 decoding 10250 tps @ single 5090. ☆61 · Updated last week
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆45 · Updated last month
- ☆36 · Updated 3 months ago
- RWKV-7: Surpassing GPT ☆101 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 3 years ago
- Inference code for LLaMA 2 models ☆30 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated 2 years ago