iantbutler01/rwkv-raven-qlora-4bit-instruct
A finetuning pipeline for instruction-tuning RWKV Raven 14B using 4-bit QLoRA and the Ditty finetuning library
★28 · Updated 8 months ago
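The general recipe named in the description (4-bit QLoRA instruction tuning of RWKV Raven 14B) can be sketched with the Hugging Face `transformers`, `peft`, and `bitsandbytes` stack. This is not the Ditty library's API, which is not shown here; the model id, LoRA hyperparameters, and `target_modules` below are illustrative assumptions rather than this repository's actual configuration.

```python
# Minimal QLoRA-style 4-bit setup sketch (transformers + peft + bitsandbytes).
# NOT the Ditty pipeline; model id and LoRA target modules are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "RWKV/rwkv-raven-14b"  # assumed Hugging Face hub id for Raven 14B

# Load the base model quantized to 4-bit NF4 with double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prepare the quantized model for k-bit training (casts norms, enables grads).
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; the target module names are a guess at RWKV's linear layers.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["key", "value", "receptance", "output"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights remain trainable
```

From here a standard training loop over an instruction dataset would complete the QLoRA recipe; per the description above, this repository delegates that part to the Ditty finetuning library.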
Alternatives and similar repositories for rwkv-raven-qlora-4bit-instruct:
Users interested in rwkv-raven-qlora-4bit-instruct are comparing it to the libraries listed below
- Notebooks related to RWKV ★59 · Updated last year
- Framework-agnostic Python runtime for RWKV models ★145 · Updated last year
- ★81 · Updated 9 months ago
- Instruct-tune LLaMA on consumer hardware ★73 · Updated last year
- ★13 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ★131 · Updated last year
- Tune MPTs ★84 · Updated last year
- A project for real-time training of the RWKV model ★49 · Updated 9 months ago
- An unsupervised model-merging algorithm for Transformers-based language models ★106 · Updated 10 months ago
- ChatGPT-like web UI for RWKVstic ★100 · Updated last year
- Script and instructions for fine-tuning a large RWKV model on your data using the Alpaca dataset ★31 · Updated last year
- Train LLaMA with LoRA on a single RTX 4090 and merge the LoRA weights to work like Stanford Alpaca ★50 · Updated last year
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ★309 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ★123 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ★77 · Updated 10 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ★146 · Updated 6 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ★99 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ★51 · Updated last year
- 8-bit CUDA functions for PyTorch on Windows 10 ★68 · Updated last year
- Low-rank adapter extraction for fine-tuned transformers models ★170 · Updated 10 months ago
- Train LLaMA LoRAs easily ★31 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ★41 · Updated last year
- GPT-2 small trained on phi-like data ★65 · Updated last year
- ★104 · Updated this week
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ★410 · Updated last year
- This is our own implementation of 'Layer-Selective Rank Reduction' ★233 · Updated 9 months ago
- Merge Transformers language models using gradient parameters ★205 · Updated 6 months ago
- Simple and fast server for GPTQ-quantized LLaMA inference ★24 · Updated last year
- Simple, hackable, and fast implementation for training/finetuning medium-sized LLaMA-based models ★164 · Updated this week
- Fine-tuning the RWKV-World model ★25 · Updated last year