nlpodyssey / rwkv
RWKV (Receptance Weighted Key Value) is an RNN with Transformer-level performance
☆36 · Updated last year
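The claim of RNN-style recurrence with Transformer-level quality rests on the WKV computation, which can be evaluated token by token with a small constant-size state. Below is a minimal, unstabilised sketch of the RWKV-4 style WKV recurrence, following the formulation published in BlinkDL/RWKV-LM; the function and variable names are illustrative, and real implementations add token-shift mixing, receptance gating, and numerical stabilisation of the exponentials.

```python
import numpy as np

def wkv_recurrent(ks, vs, w, u):
    """Naive RWKV-4 style WKV recurrence (illustrative, not numerically stabilised).

    ks, vs : (T, C) arrays of per-token keys and values.
    w      : per-channel decay rate (> 0); past tokens are damped by exp(-w) per step.
    u      : per-channel "bonus" applied to the current token instead of the decay.
    """
    T, C = ks.shape
    num = np.zeros(C)        # decayed running sum of exp(k_i) * v_i over past tokens
    den = np.zeros(C)        # decayed running sum of exp(k_i) over past tokens
    out = np.empty((T, C))
    for t in range(T):
        ek = np.exp(ks[t])
        # weighted average over the past plus the bonus-weighted current token
        out[t] = (num + np.exp(u) * ek * vs[t]) / (den + np.exp(u) * ek)
        # fold the current token into the state; older tokens decay by exp(-w) each step
        num = np.exp(-w) * num + ek * vs[t]
        den = np.exp(-w) * den + ek
    return out

# Example: 5 tokens, 4 channels, scalar decay and bonus broadcast over channels.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(wkv_recurrent(rng.normal(size=(5, 4)), rng.normal(size=(5, 4)), w=0.5, u=0.1).shape)
```

Because the state is just two vectors per channel, per-token inference cost is constant in sequence length, which is what lets RWKV run like an RNN at inference time while being trained in parallel like a Transformer.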
Related projects
Alternatives and complementary repositories for rwkv
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated last year
- Here we collect trick questions and failed tasks for open source LLMs to improve them. ☆32 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit ☆63 · Updated last year
- ☆26 · Updated last year
- A converter and basic tester for rwkv onnx ☆41 · Updated 9 months ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- RWKV model implementation ☆38 · Updated last year
- ☆42 · Updated last year
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆66 · Updated 2 years ago
- Evaluating LLMs with Dynamic Data ☆68 · Updated this week
- RWKV-7: Surpassing GPT ☆43 · Updated this week
- ☆49 · Updated 8 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆133 · Updated 3 months ago
- ☆58 · Updated last week
- tinygrad port of the RWKV large language model. ☆43 · Updated 4 months ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆45 · Updated 2 years ago
- An implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆43 · Updated last month
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆29 · Updated last week
- Script and instructions on how to fine-tune a large RWKV model on your data, for the Alpaca dataset ☆31 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆20 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆59 · Updated 6 months ago
- RWKV centralised docs for the community ☆19 · Updated 2 months ago
- A fast RWKV Tokenizer written in Rust ☆36 · Updated 2 months ago
- BigKnow2022: Bringing Language Models Up to Speed ☆14 · Updated last year
- Exploring fine-tuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated last year