Durham / RWKV-finetune-script
Script and instructions for fine-tuning a large RWKV model on your own data in the Alpaca dataset format
★31 · Updated 2 years ago
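Since this repository's purpose is fine-tuning an RWKV model on Alpaca-style data, here is a minimal sketch of what that looks like with the Hugging Face `transformers` RWKV port. This is not the repository's own script; the checkpoint name `RWKV/rwkv-4-169m-pile`, the dataset id `tatsu-lab/alpaca`, and all hyperparameters below are illustrative assumptions.

```python
# Minimal sketch: fine-tune an RWKV checkpoint on Alpaca-style data using the
# `transformers` RWKV port. Checkpoint, dataset id, and hyperparameters are
# illustrative assumptions, not values taken from RWKV-finetune-script.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "RWKV/rwkv-4-169m-pile"  # small checkpoint; swap in a larger one
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # collator needs a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

def format_example(ex):
    # Alpaca records carry "instruction", "input", and "output" fields.
    prompt = (f"Instruction: {ex['instruction']}\n"
              f"Input: {ex['input']}\nResponse: {ex['output']}")
    return tokenizer(prompt, truncation=True, max_length=512)

ds = load_dataset("tatsu-lab/alpaca", split="train")
ds = ds.map(format_example, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rwkv-alpaca",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=1e-5,
                           logging_steps=50),
    train_dataset=ds,
    # mlm=False gives plain causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```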
Alternatives and similar repositories for RWKV-finetune-script
Users interested in RWKV-finetune-script are comparing it to the repositories listed below.
- Framework-agnostic Python runtime for RWKV models ★147 · Updated 2 years ago
- Notebooks related to RWKV ★58 · Updated 2 years ago
- ★81 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ★10 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ★412 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ★148 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ★40 · Updated 2 years ago
- Instruct-tune LLaMA on consumer hardware ★72 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ★42 · Updated 2 years ago
- A project for real-time training of the RWKV model ★50 · Updated last year
- A converter and basic tester for RWKV ONNX ★43 · Updated 2 years ago
- ★171 · Updated 3 weeks ago
- Reinforcement learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, aligning. Exploring the… ★62 · Updated 4 months ago
- Evaluating LLMs with dynamic data ★111 · Updated 3 weeks ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ★65 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ★313 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ★101 · Updated 2 years ago
- A finetuning pipeline for instruct-tuning Raven 14B using QLoRA 4-bit and the Ditty finetuning library ★28 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs (see the sketch after this list) ★79 · Updated last year
- A lightweight, hackable, and efficient framework for training and fine-tuning language models ★187 · Updated last week
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights to work like Stanford Alpaca ★52 · Updated 2 years ago
- ChatGPT-like web UI for RWKVstic ★100 · Updated 2 years ago
- Tune MPTs ★84 · Updated 2 years ago
- ★34 · Updated last year
- Large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference combining multiple states (pseudo-MoE). Easy to deploy… ★47 · Updated 3 months ago
- Fine-tuning the RWKV-World model ★26 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ★66 · Updated 2 years ago
- Easily deploy your RWKV model ★19 · Updated 2 years ago
- Inference script for Meta's LLaMA models using the Hugging Face wrapper ★110 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with a LLaMA implementation ★71 · Updated 2 years ago
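Several entries above (QLoRA, GPTQLoRA, the Raven/Ditty pipeline) revolve around parameter-efficient fine-tuning of quantized models. As a rough illustration of the shared pattern, here is a minimal QLoRA-style sketch with `peft` and `bitsandbytes`; the base model id and LoRA settings are assumptions for illustration only, not taken from any repository listed here.

```python
# Minimal QLoRA-style sketch: load a base model in 4-bit and attach LoRA
# adapters so only a small set of weights trains. Model id and LoRA
# hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # assumed base checkpoint
    quantization_config=bnb,
    device_map="auto",
)

lora = LoraConfig(r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # base weights stay frozen in 4-bit
```

The resulting model can then be trained with a standard `Trainer` loop like the one sketched near the top of this page; only the LoRA adapter weights receive gradients.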