resloved / RWKV-notebooks
Notebooks related to RWKV
☆ 59 · Updated last year
Alternatives and similar repositories for RWKV-notebooks:
Users interested in RWKV-notebooks are comparing it to the libraries listed below.
- Framework-agnostic Python runtime for RWKV models · ☆ 146 · Updated last year
- ChatGPT-like Web UI for RWKVstic · ☆ 100 · Updated 2 years ago
- A finetuning pipeline for instruct-tuning Raven 14B using 4-bit QLoRA and the Ditty finetuning library · ☆ 28 · Updated 11 months ago
- Enhancing LangChain prompts to work better with RWKV models · ☆ 34 · Updated last year
- rwkv_chatbot · ☆ 62 · Updated 2 years ago
- ☆ 82 · Updated 11 months ago
- Script and instructions for fine-tuning a large RWKV model on your data with the Alpaca dataset · ☆ 31 · Updated 2 years ago
- BlinkDL's RWKV-v4 running in the browser · ☆ 47 · Updated 2 years ago
- Instruct-tune LLaMA on consumer hardware · ☆ 74 · Updated last year
- Gradio UI for the RWKV LLM · ☆ 29 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone through a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… · ☆ 64 · Updated last year
- A project for real-time training of the RWKV model · ☆ 49 · Updated 11 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆ 78 · Updated last year
- Just a simple how-to for https://github.com/johnsmith0031/alpaca_lora_4bit · ☆ 31 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best … (see the recurrence sketch after this list) · ☆ 414 · Updated last year
- Fine-tuning the RWKV-World model · ☆ 25 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! · ☆ 148 · Updated 8 months ago
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work as Stanford Alpaca · ☆ 51 · Updated last year
- RWKV centralised docs for the community · ☆ 24 · Updated last month
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA · ☆ 123 · Updated last year
- Tune MPTs · ☆ 84 · Updated last year
- ☆ 13 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA · ☆ 58 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models · ☆ 105 · Updated last year
- Flask server for RWKV · ☆ 10 · Updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ · ☆ 130 · Updated last year
- Instruct-tuning LLaMA on consumer hardware · ☆ 66 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… · ☆ 311 · Updated last year
- Conversational language model toolkit for training against human preferences · ☆ 42 · Updated last year
- ☆ 121 · Updated 3 weeks ago
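
For context on the RWKV entry above ("an RNN with transformer-level LLM performance"): below is a minimal, illustrative sketch of the WKV time-mixing recurrence at the core of RWKV-v4, written from the published description rather than taken from any repository listed here. The names `w` (per-channel decay), `u` (current-token bonus), `k`, and `v` follow common RWKV write-ups; real implementations use a numerically stable log-space variant of this same update.

```python
import numpy as np

def wkv_recurrent(w, u, k, v):
    """Naive RWKV-v4 WKV recurrence (illustrative, not the repo's code).
    w: (C,) per-channel decay rates (> 0); u: (C,) current-token bonus;
    k, v: (T, C) key/value sequences. Returns the (T, C) mixed outputs."""
    T, C = k.shape
    a = np.zeros(C)            # running exp-weighted sum of past values
    b = np.zeros(C)            # running normalizer for those weights
    out = np.empty((T, C))
    for t in range(T):
        ek = np.exp(k[t])
        # the current token gets an extra bonus weight exp(u)
        out[t] = (a + np.exp(u) * ek * v[t]) / (b + np.exp(u) * ek)
        # decay the state, then absorb the current token for future steps
        a = np.exp(-w) * a + ek * v[t]
        b = np.exp(-w) * b + ek
    return out
```

Because the state update is a fixed per-channel exponential decay, the same quantity can also be computed for all timesteps at once during training, which is the sense in which RWKV trains like a GPT while running as a constant-memory RNN at inference time.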