PicoCreator / RWKV-LM-LoRA
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embedding. A minimal sketch of the recurrence behind these claims follows below.
☆10 · Updated 2 years ago
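The "RNN at inference, GPT at training" claim comes from RWKV's WKV time-mixing operator, which can be evaluated either as a parallel weighted sum over the sequence or as a constant-memory recurrence. Here is a minimal per-channel sketch of the RWKV-v4 style recurrence, assuming the paper's `k`, `v`, `w` (decay), and `u` (current-token bonus) notation; it is illustrative only and omits the log-space numerical stabilization and channel vectorization the real kernels use.

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    """Naive RWKV-v4 style WKV recurrence for a single channel.

    k, v : (T,) key and value sequences
    w    : positive decay (larger = faster forgetting of the past)
    u    : bonus applied to the current token only

    Illustrative sketch; real RWKV kernels work in log-space
    to avoid exp() overflow and process all channels at once.
    """
    T = len(k)
    a, b = 0.0, 0.0              # running numerator / denominator state
    out = np.empty(T)
    for t in range(T):
        cur = np.exp(u + k[t])   # current token, boosted by u
        out[t] = (a + cur * v[t]) / (b + cur)
        # decay the past state, then absorb token t into it
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return out
```

Because the carried state is just the pair `(a, b)` per channel, memory is constant in sequence length (the basis of the "infinite" ctx_len claim), while at training time the same weighted sums can be computed for all positions in parallel, much like attention.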
Alternatives and similar repositories for RWKV-LM-LoRA
Users interested in RWKV-LM-LoRA are comparing it to the libraries listed below.
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆25 · Updated 2 years ago
- ☆16 · Updated last year
- ☆27 · Updated 2 years ago
- entropix-style sampling + GUI ☆27 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- ☆39 · Updated 7 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- Script and instructions for fine-tuning a large RWKV model on your own data, demonstrated on the Alpaca dataset ☆31 · Updated 2 years ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 6 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine, capable of inference by combining multiple states (pseudo-MoE). Easy to deploy… ☆45 · Updated last month
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- ☆21 · Updated 2 years ago
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆70 · Updated last year
- C++ inference wrappers for running blazing-fast embedding services on your favourite serverless platform, like AWS Lambda. By Prithivi Da, PRs welc… ☆23 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆102 · Updated 2 years ago
- ☆74 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this… ☆21 · Updated 2 years ago
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago