OpenMOSE / RWKV5-LM-LoRA
RWKV v5/v6 LoRA trainer for the CUDA and ROCm platforms. RWKV is an RNN with transformer-level LLM performance that can be trained directly like a GPT (parallelizable). It combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embedding.
☆11 · Updated 5 months ago
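The core idea behind a LoRA trainer like this is standard low-rank adaptation applied to the model's linear projections: the pretrained weights stay frozen and only small rank-r adapter matrices are trained, which is what keeps VRAM requirements low. The sketch below illustrates that idea only; the class and parameter names are illustrative and are not taken from this repository's code.

```python
# Minimal sketch of the LoRA idea, assuming a plain PyTorch setup: freeze a
# pretrained linear projection and learn a low-rank update on top of it.
# Names here (LoRALinear, r, alpha) are illustrative, not this repo's API.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank correction B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Full-rank frozen path plus the scaled low-rank trainable path.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale


# Example: only the A/B factors receive gradients during finetuning.
proj = LoRALinear(nn.Linear(512, 512), r=8)
out = proj(torch.randn(2, 16, 512))
print(out.shape)  # torch.Size([2, 16, 512])
```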
Related projects:
- Demonstration that finetuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit (☆62, updated last year)
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! (☆131, updated last month)
- RWKV centralised docs for the community (☆19, updated 2 weeks ago)
- A byte-level decoder architecture that matches the performance of tokenized Transformers. (☆57, updated 4 months ago)
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. (☆33, updated 6 months ago)
- Large-scale RWKV v6 inference with FLA. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy on Docker. Suppo… (☆15, updated 2 weeks ago)
- A pipeline for LLM knowledge distillation (☆68, updated last month)
- QuIP quantization (☆41, updated 6 months ago)
- QLoRA with Enhanced Multi-GPU Support (☆36, updated last year)
- [WIP] Transformer to embed Danbooru labelsets (☆13, updated 5 months ago)
- GoldFinch and other hybrid transformer components (☆38, updated 2 months ago)
- Fast, modular code to create and train cutting-edge LLMs (☆63, updated 4 months ago)
- An unsupervised model merging algorithm for Transformers-based language models. (☆96, updated 4 months ago)
- State tuning tunes the state (☆21, updated 5 months ago)
- A repository for research on medium-sized language models. (☆71, updated 3 months ago)
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆34, updated 10 months ago)
- RWKV, in easy-to-read code (☆52, updated 5 months ago)
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated (☆27, updated last month)
- Implementation of the Mamba SSM with hf_integration. (☆55, updated 2 weeks ago)
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. (☆38, updated 3 months ago)
- Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" (☆130, updated this week)
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… (☆14, updated 11 months ago)