iantbutler01 / rwkv-raven-qlora-4bit-instruct
A finetuning pipeline for instruction-tuning RWKV Raven 14B using 4-bit QLoRA and the Ditty finetuning library
★28 · Updated 5 months ago
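For orientation, the core technique named in the description, freezing a 4-bit quantized base model and training LoRA adapters on top of it, typically looks like the sketch below. This uses Hugging Face transformers + peft rather than the Ditty library this repo actually builds on; the checkpoint name, target modules, and hyperparameters are illustrative assumptions, not values taken from this repository.

```python
# A minimal sketch of a 4-bit QLoRA setup with transformers + peft.
# Checkpoint name, target modules, and hyperparameters are assumptions.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "RWKV/rwkv-raven-14b"  # assumed Hugging Face checkpoint name

# NF4 4-bit weights with bf16 compute, as described in the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Casts norms/embeddings to fp32 and enables gradient checkpointing
model = prepare_model_for_kbit_training(model)

# Attach trainable low-rank adapters to the frozen 4-bit base; the
# target_modules names are a guess at RWKV's attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["key", "value", "receptance"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The point of this arrangement is that gradients flow only through the small adapter matrices while the base weights stay quantized and frozen, which is what makes finetuning a 14B model feasible on a single consumer GPU.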
Related projects
Alternatives and complementary repositories for rwkv-raven-qlora-4bit-instruct
- Notebooks related to RWKV ★59 · Updated last year
- Script and instructions for fine-tuning a large RWKV model on your own data with the Alpaca dataset ★31 · Updated last year
- An unsupervised model merging algorithm for Transformer-based language models ★99 · Updated 6 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ★133 · Updated 2 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ★124 · Updated last year
- Tune MPTs ★84 · Updated last year
- GPT-2 small trained on phi-like data ★65 · Updated 8 months ago
- Framework-agnostic Python runtime for RWKV models ★145 · Updated last year
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependencies ★307 · Updated 9 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ★77 · Updated 6 months ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ★41 · Updated last year
- Train LLaMA with LoRA on one RTX 4090 and merge the LoRA weights into the base model to reproduce Stanford Alpaca (see the merge sketch after this list) ★50 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models ★34 · Updated last year
- A converter and basic tester for RWKV ONNX ★41 · Updated 9 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ★96 · Updated last year
- Real-time training of the RWKV model ★50 · Updated 5 months ago
- 4-bit quantization of LLaMA using GPTQ ★130 · Updated last year
- RWKV in nanoGPT style ★176 · Updated 5 months ago
- Centralised RWKV docs for the community ★19 · Updated 2 months ago
- 4-bit quantization of SantaCoder using GPTQ ★53 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … ★406 · Updated last year
- Instruct-tune LLaMA on consumer hardware ★73 · Updated last year
- 8-bit CUDA functions for PyTorch in Windows 10 ★71 · Updated last year
- tinygrad port of the RWKV large language model ★43 · Updated 4 months ago
- Reinforcement learning toolkit for RWKV: distillation, SFT, RLHF (DPO, ORPO), infinite-context training, and alignment. Let's boost the model's intelligence ★18 · Updated this week
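Several entries above train LoRA adapters and then fold them back into the base weights for adapter-free inference, as in the LLaMA/Alpaca item. Below is a minimal sketch of that merge step with peft, assuming placeholder paths and an unquantized fp16 base (merging into a 4-bit quantized base is not supported):

```python
# A minimal sketch of merging trained LoRA weights back into a base model so
# the result runs without peft at inference time. All paths are placeholders.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model unquantized; merging requires full-precision weights
base = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-base",  # placeholder base checkpoint
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder

# Folds each adapter's low-rank update into the matching base weight matrix
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```

After the merge, the saved checkpoint loads like any ordinary model, which is why this step is the usual hand-off point between the finetuning repos and the inference runtimes listed here.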