resloved / RWKV-notebooks
📖 — Notebooks related to RWKV
☆59 · Updated last year
Alternatives and similar repositories for RWKV-notebooks:
Users interested in RWKV-notebooks are comparing it to the libraries listed below.
- Script and instructions for fine-tuning a large RWKV model on your own data, using the Alpaca dataset. ☆31 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models. ☆34 · Updated last year
- A project for real-time training of the RWKV model. ☆49 · Updated 10 months ago
- ChatGPT-like web UI for RWKVstic. ☆100 · Updated last year
- ☆82 · Updated 10 months ago
- BlinkDL's RWKV-v4 running in the browser. ☆47 · Updated 2 years ago
- rwkv_chatbot ☆62 · Updated 2 years ago
- Gradio UI for RWKV LLM. ☆29 · Updated 2 years ago
- A fine-tuning pipeline for instruct-tuning Raven 14B using 4-bit QLoRA and the Ditty fine-tuning library. ☆28 · Updated 9 months ago
- Instruct-tune LLaMA on consumer hardware. ☆73 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best … (a minimal sketch of the recurrence follows this list). ☆412 · Updated last year
- Framework-agnostic Python runtime for RWKV models. ☆145 · Updated last year
- Instruct-tuning LLaMA on consumer hardware. ☆66 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆77 · Updated 11 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated 7 months ago
- An unsupervised model-merging algorithm for Transformers-based language models. ☆106 · Updated 10 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA). ☆123 · Updated last year
- SentencePiece-based BPE tokenizer for English and Japanese text. ☆27 · Updated 11 months ago
- Flask server for RWKV. ☆10 · Updated last year
- Centralised RWKV docs for the community. ☆21 · Updated 2 weeks ago
- Fine-tuning the RWKV-World model. ☆25 · Updated last year
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated last year
- Conversational language-model toolkit for training against human preferences. ☆42 · Updated 11 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best … ☆10 · Updated last year
- Model REVOLVER, a human-in-the-loop model-mixing system. ☆33 · Updated last year
- Trying to deconstruct RWKV in understandable terms. ☆14 · Updated last year
- Easily deploy your RWKV model. ☆18 · Updated last year
- Image-diffusion block-merging technique applied to transformer-based language models. ☆54 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation. ☆71 · Updated last year
- Train LLaMA LoRAs easily. ☆31 · Updated last year
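
As a rough illustration of the RWKV entry above (an RNN at inference that trains like a GPT), here is a minimal NumPy sketch of a v4-style WKV recurrence. Everything in it is an assumption for illustration: the helper name `wkv_recurrent`, the toy shapes, and the omission of the numerical-stability max-trick used in real RWKV kernels; it is not the project's actual implementation.

```python
# Minimal sketch of an RWKV v4-style "WKV" recurrence (hypothetical helper,
# not RWKV's real kernel); stability max-trick omitted for clarity.
import numpy as np

def wkv_recurrent(k, v, w, u):
    """k, v: (T, C) key/value sequences; w: (C,) per-channel decay; u: (C,) current-token bonus."""
    T, C = k.shape
    a = np.zeros(C)            # decayed, exp(k)-weighted sum of past values
    b = np.zeros(C)            # matching normalizer
    out = np.empty((T, C))
    for t in range(T):
        e_uk = np.exp(u + k[t])
        out[t] = (a + e_uk * v[t]) / (b + e_uk)   # current token gets the u "bonus"
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]  # O(1) state update per token
        b = np.exp(-w) * b + np.exp(k[t])
    return out

# Toy usage: 8 timesteps, 4 channels
rng = np.random.default_rng(0)
y = wkv_recurrent(rng.normal(size=(8, 4)), rng.normal(size=(8, 4)),
                  w=np.full(4, 0.5), u=np.zeros(4))
print(y.shape)  # (8, 4)
```

The constant-size state `(a, b)` is what lets RWKV decode token-by-token like an RNN, while the same operator can be unrolled over the whole sequence for GPT-style parallel training.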