iantbutler01 / rwkv-raven-qlora-4bit-instruct
A finetuning pipeline for instruct-tuning Raven 14B using 4-bit QLoRA and the Ditty finetuning library.
★28 · Updated last year
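The description above names the core recipe: load the base model with its weights quantized to 4 bits and train small low-rank adapters on top. As a rough illustration, here is a minimal sketch of that QLoRA setup using the Hugging Face transformers/peft/bitsandbytes stack; this is not the Ditty library's actual API, and the checkpoint name, target module names, and hyperparameters are assumptions.

```python
# Hedged sketch of 4-bit QLoRA instruct-tuning, assuming the HF
# transformers/peft/bitsandbytes stack (NOT Ditty's actual API).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "RWKV/rwkv-raven-14b"  # assumed checkpoint name

# Quantize the frozen base weights to 4-bit NF4; compute runs in bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach trainable low-rank adapters; only these weights are updated.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["key", "value", "receptance"],  # assumed RWKV projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters require grad
```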
Alternatives and similar repositories for rwkv-raven-qlora-4bit-instruct
Users interested in rwkv-raven-qlora-4bit-instruct are comparing it to the libraries listed below.
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! · ★147 · Updated last year
- Notebooks related to RWKV · ★58 · Updated 2 years ago
- A torchless, C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies · ★313 · Updated last year
- Instruct-tune LLaMA on consumer hardware · ★72 · Updated 2 years ago
- A project for real-time training of the RWKV model · ★49 · Updated last year
- Framework-agnostic Python runtime for RWKV models · ★146 · Updated 2 years ago
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work like Stanford Alpaca (see the merge sketch after this list) · ★52 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… · ★412 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA · ★123 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models · ★106 · Updated last year
- 4-bit quantization of LLaMA using GPTQ · ★130 · Updated 2 years ago
- Script and instructions for fine-tuning a large RWKV model on your own data, using the Alpaca dataset · ★31 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs · ★77 · Updated last year
- tinygrad port of the RWKV large language model · ★44 · Updated 7 months ago
- Merge Transformers language models using gradient parameters · ★207 · Updated last year
- An OpenAI Completions API-compatible server for NLP transformers models · ★64 · Updated last year
- Tune MPTs · ★84 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch on Windows 10 · ★68 · Updated 2 years ago
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models · ★182 · Updated last month
- Reinforcement learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the… · ★54 · Updated last month
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights · ★63 · Updated 2 years ago
- GPT-2 small trained on phi-like data · ★67 · Updated last year
- Low-rank adapter extraction for fine-tuned transformers models · ★178 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ · ★102 · Updated 2 years ago
- ChatGPT-like web UI for RWKVstic · ★99 · Updated 2 years ago
- Inference code for Facebook LLaMA models with Wrapyfi support · ★129 · Updated 2 years ago
- Embeddings-focused small version of the LLaMA NLP model · ★105 · Updated 2 years ago
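Several entries above (the single-4090 LoRA training entry and the low-rank adapter extraction entry, for example) revolve around folding trained adapters back into the base weights. Here is a minimal sketch of that merge step, assuming the peft library's `merge_and_unload` API; the paths are placeholders, not taken from any repo above.

```python
# Hedged sketch: fold trained LoRA adapters into base weights via peft.
# Paths are placeholders, not taken from any repository listed above.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# merge_and_unload computes W' = W + (alpha / r) * B @ A for each adapted
# layer, then removes the adapter modules, leaving a plain model that can
# be served without any LoRA-aware runtime.
model = model.merge_and_unload()
model.save_pretrained("path/to/merged-model")
```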