PicoCreator / RWKV-LM-LoRA
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of the RNN and the transformer: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embedding.
☆10 · Updated last year
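The description above is the upstream repo's pitch. As a rough illustration of why RWKV can run inference like an RNN (constant state per step) while still admitting GPT-style parallel training, here is a minimal, numerically naive sketch of the RWKV-4 "WKV" time-mixing recurrence; the shapes, names, and lack of log-space stabilization are simplifying assumptions, not the repo's actual kernel:

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    """Naive sketch of the RWKV-4 'WKV' recurrence (illustrative only).

    k, v : per-token key/value arrays, shape (T, C)
    w    : positive per-channel decay, shape (C,)
    u    : per-channel bonus weight for the current token, shape (C,)

    Only two running vectors (num, den) carry the entire history, which
    is why inference costs O(1) per step in sequence length.
    """
    T, C = k.shape
    num = np.zeros(C)   # running decayed sum of exp(k_i) * v_i
    den = np.zeros(C)   # running decayed sum of exp(k_i)
    out = np.empty((T, C))
    for t in range(T):
        e_cur = np.exp(u + k[t])                   # extra weight for token t
        out[t] = (num + e_cur * v[t]) / (den + e_cur)
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out
```

The real CUDA kernel keeps these exponents in log space (tracking a running maximum) to avoid overflow, and training evaluates the same weighted sums in a parallel form, which is what "trained like a GPT" refers to.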
Alternatives and similar repositories for RWKV-LM-LoRA
Users interested in RWKV-LM-LoRA are comparing it to the libraries listed below.
- ☆14 · Updated last year
- Script and instructions on how to fine-tune a large RWKV model on your data for the Alpaca dataset ☆31 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆31 · Updated last year
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆25 · Updated last year
- ☆27 · Updated last year
- RWKV centralised docs for the community ☆27 · Updated last week
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated last year
- ☆20 · Updated last year
- ☆74 · Updated last year
- Merge LLMs that are split into parts ☆26 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated 11 months ago
- Tune MPTs ☆84 · Updated 2 years ago
- This repository contains code for removing benchmark data from your training data, to help combat data snooping.
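As a rough illustration of what such decontamination typically involves (this is not that repository's actual code; the function names and the 13-gram threshold are assumptions borrowed from common practice), here is a minimal sketch that drops training documents sharing long word n-grams with benchmark data:

```python
def ngrams(text: str, n: int = 13):
    """Yield word-level n-grams; 13-grams are a common contamination heuristic."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield tuple(words[i:i + n])

def decontaminate(train_docs, benchmark_docs, n: int = 13):
    """Keep only training docs that share no n-gram with any benchmark doc.

    Hypothetical helper for illustration; real pipelines usually add text
    normalization (punctuation stripping) and hashing to handle scale.
    """
    banned = set()
    for doc in benchmark_docs:
        banned.update(ngrams(doc, n))
    return [doc for doc in train_docs
            if not any(g in banned for g in ngrams(doc, n))]
```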