PicoCreator / RWKV-LM-LoRA
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable). So it combines the best of RNN and transformer: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embedding.
☆10 · Updated last year
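For intuition on why the RNN formulation keeps memory flat and allows an effectively unbounded context, here is a minimal, hypothetical sketch of stateful generation. The `model.forward(token, state)` interface and the `tokenizer` object are assumptions for illustration, not the actual RWKV-LM-LoRA API: the point is simply that a fixed-size recurrent state is carried forward token by token instead of re-attending over the whole history.

```python
# Hypothetical sketch of RNN-style generation with a carried state.
# `model.forward(token, state)` is an assumed interface (not the real
# RWKV-LM-LoRA API): it returns logits for the next token plus an updated
# fixed-size state, so memory use does not grow with context length.

def generate(model, tokenizer, prompt, max_new_tokens=100):
    state = None  # fixed-size recurrent state, updated in place each step
    logits = None

    # Feed the prompt one token at a time, carrying the state forward.
    for tok in tokenizer.encode(prompt):
        logits, state = model.forward(tok, state)

    # Generate new tokens; each step only needs the previous state.
    out = []
    for _ in range(max_new_tokens):
        next_tok = int(logits.argmax())  # greedy decoding, for brevity
        out.append(next_tok)
        logits, state = model.forward(next_tok, state)

    return tokenizer.decode(out)
```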
Alternatives and similar repositories for RWKV-LM-LoRA:
Users interested in RWKV-LM-LoRA are comparing it to the libraries listed below.
- ☆14 · Updated 11 months ago
- Script and instructions for fine-tuning a large RWKV model on your data, e.g. the Alpaca dataset ☆31 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated last year
- Zeta implementation of a reusable, plug-and-play feedforward layer from the paper "Exponentially Faster Language Modeling" ☆15 · Updated 4 months ago
- ☆27 · Updated last year
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆26 · Updated last year
- entropix-style sampling + GUI ☆25 · Updated 4 months ago
- RWKV centralised docs for the community ☆20 · Updated last week
- Modified Beam Search with periodic restart ☆12 · Updated 6 months ago
- Reinforcement Learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the… ☆33 · Updated last week
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Experimental sampler to make LLMs more creative ☆30 · Updated last year
- ☆42 · Updated last year
- ☆12 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆33 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆58 · Updated last month
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 7 months ago
- ☆48 · Updated 4 months ago
- 🍳 AyaMCooking is a Voice-to-Voice Multi-lingual RAG Agent that makes a perfect sous chef for your kitchen, in up to 10 languages 🤌🧑🍳 ☆21 · Updated 4 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Large-scale RWKV v6, v7 (World, ARWKV) inference. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy on docke… ☆31 · Updated 2 weeks ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated last year
- An Implementation of "Orca: Progressive Learning from Complex Explanation Traces of GPT-4" ☆44 · Updated 5 months ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆20 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆36 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated 7 months ago
- ☆34 · Updated 7 months ago