iantbutler01 / rwkv-raven-qlora-4bit-instruct
A finetuning pipeline for instruct-tuning the 14B Raven model using 4-bit QLoRA and the Ditty finetuning library
☆28 · Updated 10 months ago
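The pipeline's core recipe is the standard QLoRA setup: freeze the base model in 4-bit NF4 quantization and train low-rank adapters on top. As a hedged illustration only, here is a minimal sketch using the Hugging Face transformers/peft/bitsandbytes stack rather than the Ditty library this repo actually uses; the model id and LoRA target modules are assumptions for illustration.

```python
# Hedged sketch of 4-bit QLoRA instruct finetuning with the Hugging Face
# transformers/peft/bitsandbytes stack (NOT the Ditty library this repo uses).
# The model id and LoRA target modules below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "RWKV/rwkv-raven-14b"  # assumed HF id for the Raven 14B checkpoint

# NF4 4-bit quantization of the frozen base weights, as in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # cast norms, enable input grads

# Trainable low-rank adapters; target module names vary by architecture and
# are an assumption here (RWKV uses key/value/receptance projections).
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["key", "value", "receptance"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters train; base stays 4-bit
```

Only the adapter weights receive gradients, which is what makes finetuning a 14B model feasible on a single consumer GPU.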
Alternatives and similar repositories for rwkv-raven-qlora-4bit-instruct:
Users interested in rwkv-raven-qlora-4bit-instruct are comparing it to the libraries listed below.
- ☆82 · Updated 11 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k tokens and beyond! ☆148 · Updated 8 months ago
- A project for real-time training of the RWKV model. ☆49 · Updated 11 months ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- Scripts and instructions for fine-tuning the large RWKV model on your own data, using the Alpaca dataset ☆31 · Updated 2 years ago
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ☆310 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆74 · Updated last year
- ☆119 · Updated last week
- 📖 — Notebooks related to RWKV ☆59 · Updated last year
- ChatGPT-like Web UI for RWKVstic ☆100 · Updated 2 years ago
- RWKV finetuning ☆36 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- RWKV in nanoGPT style ☆189 · Updated 10 months ago
- tinygrad port of the RWKV large language model. ☆44 · Updated last month
- ☆13 · Updated last year
- ☆12 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ☆123 · Updated last year
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆105 · Updated 11 months ago
- Fine-tuning the RWKV-World model ☆25 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated last year
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- An implementation of Self-Extend, which expands the context window via grouped attention (a hedged sketch of the position remapping follows this list) ☆119 · Updated last year
- A fast RWKV tokenizer written in Rust ☆44 · Updated 3 weeks ago
- Train LLaMA with LoRA on a single RTX 4090 and merge the LoRA weights so it works like Stanford Alpaca. ☆51 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated 10 months ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- Role-playing based on the RWKV model; in practice a fork of RWKV_Role_Playing modified beyond recognition ☆16 · Updated last year
- Simple, hackable, and fast implementation for training/finetuning medium-sized LLaMA-based models ☆162 · Updated last week
- Merge Transformers language models using gradient parameters (a simpler weight-averaging sketch appears second below) ☆206 · Updated 8 months ago
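For the Self-Extend entry above, the key idea is a remapping of relative positions: tokens within a neighbor window keep their exact distances, while more distant tokens are bucketed into coarser groups, so the model never sees relative positions beyond its training range. Below is a minimal sketch of that remapping under the boundary-shift formula from the Self-Extend paper; the function and parameter names are illustrative.

```python
# Hedged sketch of the Self-Extend relative-position remapping (Jin et al., 2024):
# nearby tokens keep exact relative positions; distant tokens fall back to
# coarser "grouped" positions. Names here are illustrative, not the repo's API.
import numpy as np

def self_extend_rel_pos(seq_len: int, neighbor_window: int, group_size: int):
    """Return the (seq_len, seq_len) matrix of remapped relative positions
    for causal attention (entries above the diagonal are unused)."""
    q = np.arange(seq_len)[:, None]   # query positions i
    k = np.arange(seq_len)[None, :]   # key positions j
    normal = q - k                    # exact relative position i - j
    # Grouped positions, shifted so the mapping stays continuous at the
    # neighbor-window boundary.
    shift = neighbor_window - neighbor_window // group_size
    grouped = q // group_size - k // group_size + shift
    return np.where(normal < neighbor_window, normal, grouped)

rel = self_extend_rel_pos(seq_len=16, neighbor_window=4, group_size=2)
print(rel[15])  # distant keys share coarse positions; near keys keep exact ones
```

With these parameters, the last query sees a maximum relative position of 9 instead of 15, which is how the technique stretches a fixed training window over a longer context.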
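The gradient-parameter merge in the last entry is specific to that repo. As a much simpler, hedged stand-in, plain weighted averaging of matching parameter tensors illustrates the basic mechanics of merging two same-architecture checkpoints; the paths and the alpha value below are placeholders.

```python
# Hedged sketch: plain weighted parameter averaging between two checkpoints of
# the same architecture. This is a deliberately simpler stand-in, NOT the
# gradient-informed merge the repo above implements; paths are placeholders.
import torch

def average_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Interpolate matching tensors: alpha * A + (1 - alpha) * B."""
    assert state_a.keys() == state_b.keys(), "models must share an architecture"
    return {
        name: alpha * state_a[name].float() + (1.0 - alpha) * state_b[name].float()
        for name in state_a
    }

# Usage (placeholder paths):
# merged = average_merge(torch.load("model_a.pt"), torch.load("model_b.pt"), 0.3)
# torch.save(merged, "merged.pt")
```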