PicoCreator / RWKV-LM-LoRA
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embedding.
☆10 · Updated last year
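As context for the claim above (RNN-style inference with GPT-style parallel training), here is a simplified, numerically unstabilized NumPy sketch of the RWKV-4 "WKV" recurrence. It is illustrative only, not code from this repository; the names k, v, w, u follow the RWKV paper, and w is assumed here to be a positive per-channel decay rate.

```python
# Illustrative sketch only (not repo code): the RWKV-4 WKV operator run as a
# recurrence, showing why inference needs only O(1) state per token.
import numpy as np

def wkv_recurrent(k, v, w, u):
    """k, v: (T, C) per-token keys/values; w: (C,) positive decay; u: (C,) current-token bonus."""
    T, C = k.shape
    a = np.zeros(C)   # numerator state:   sum_i exp(-(t-1-i)*w + k_i) * v_i
    b = np.zeros(C)   # denominator state: sum_i exp(-(t-1-i)*w + k_i)
    out = np.zeros((T, C))
    for t in range(T):
        e = np.exp(u + k[t])                 # extra weight for the current token
        out[t] = (a + e * v[t]) / (b + e)    # weighted average over past values
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return out
```

During training the same operator can be evaluated for all timesteps in parallel (as in the CUDA kernels of the upstream project), which is what makes GPT-style training possible.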
Alternatives and similar repositories for RWKV-LM-LoRA:
Users interested in RWKV-LM-LoRA are comparing it to the libraries listed below
- Demonstration that fine-tuning a RoPE model on sequences longer than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- Script and instructions for fine-tuning a large RWKV model on your data, such as the Alpaca dataset ☆31 · Updated last year
- Merge LLMs that are split into parts ☆26 · Updated last year
- An implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆44 · Updated 5 months ago
- Zeta implementation of a reusable, plug-and-play feedforward layer from the paper "Exponentially Faster Language Modeling" ☆15 · Updated 4 months ago
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated 10 months ago
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆26 · Updated last year
- RWKV centralised docs for the community ☆21 · Updated 2 weeks ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- entropix-style sampling + GUI ☆25 · Updated 4 months ago
- Code for removing benchmark data from your training data to help combat data snooping ☆25 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆58 · Updated last month
- Easily deploy your RWKV model ☆18 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- Modified beam search with periodic restart ☆12 · Updated 6 months ago
- RWKV v5/v6 LoRA trainer for the CUDA and ROCm platforms
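For orientation, the low-rank adaptation technique that RWKV-LM-LoRA and the trainer listed above apply to a base model's linear layers can be sketched roughly as below. This is a minimal, hypothetical PyTorch illustration of the general LoRA idea (the module name LoRALinear and the hyperparameters r and alpha are placeholders), not the actual code of either repository.

```python
# Illustrative sketch only: a frozen base linear layer plus a trainable
# low-rank update (B @ A) scaled by alpha / r, as in standard LoRA.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze the pretrained weights
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init => no change at start
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the low-rank correction; only A and B are trained.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale
```

Because only the small A and B matrices receive gradients, fine-tuning a large RWKV checkpoint this way needs far less optimizer state and VRAM than full fine-tuning.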