resloved / RWKV-notebooks
Notebooks related to RWKV
☆59 · Updated 2 years ago
Alternatives and similar repositories for RWKV-notebooks
Users interested in RWKV-notebooks are comparing it to the libraries listed below.
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆412 · Updated 2 years ago
- rwkv_chatbot ☆62 · Updated 2 years ago
- Script and instructions on how to fine-tune a large RWKV model on your data with the Alpaca dataset ☆31 · Updated 2 years ago
- ☆82 · Updated last year
- A project for real-time training of the RWKV model ☆50 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated 11 months ago
- ChatGPT-like web UI for RWKVstic ☆100 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆312 · Updated last year
- A fine-tuning pipeline for instruct-tuning Raven 14B using 4-bit QLoRA and the Ditty fine-tuning library ☆28 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Enhancing LangChain prompts to work better with RWKV models ☆34 · Updated 2 years ago
- RWKV centralised docs for the community ☆28 · Updated 3 weeks ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models ☆106 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated 2 years ago
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆177 · Updated this week
- Instruct-tune LLaMA on consumer hardware ☆73 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ☆10 · Updated last year
- Gradio UI for RWKV LLM ☆29 · Updated 2 years ago
- 4-bit quantization of LLaMa using GPTQ ☆129 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆65 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Reinforcement learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite context training, aligning. Exploring the… ☆48 · Updated last month
- ☆534 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- Model REVOLVER, a human-in-the-loop model mixing system ☆33 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆110 · Updated 2 years ago
- Extends the original llama.cpp repo to support the RedPajama model ☆118 · Updated 11 months ago
- Tune MPTs ☆84 · Updated 2 years ago