ArEnSc / Production-RWKV
This project aims to make RWKV accessible to everyone through a Hugging Face-like interface, while keeping it close to the R&D RWKV branch of code.
☆63 · Updated last year
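As a rough illustration of what a Hugging Face-style interface for RWKV looks like, the sketch below loads an RWKV checkpoint through the `transformers` auto classes. This is only a stand-in for the interface style the project aims for, not Production-RWKV's own API, and the checkpoint name `RWKV/rwkv-4-169m-pile` is an assumed example.

```python
# Minimal sketch of a Hugging Face-style workflow for RWKV.
# NOTE: this uses the `transformers` library as a stand-in to illustrate the
# interface style; it is not Production-RWKV's own API, and the checkpoint
# name below is an assumed example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RWKV/rwkv-4-169m-pile"  # assumed Hub checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The RWKV architecture combines"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```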
Related projects
Alternatives and complementary repositories for Production-RWKV
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆66 · Updated 2 years ago
- ☆42 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated last year
- SparseGPT + GPTQ Compression of LLMs like LLaMA, OPT, Pythia ☆41 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆115 · Updated last year
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆34 · Updated 3 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than the pre-training length adapts the model's context limit ☆63 · Updated last year
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆20 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆70 · Updated last year
- ☆32 · Updated last year
- 📖 — Notebooks related to RWKV ☆59 · Updated last year
- ☆49 · Updated 7 months ago
- Multi-Domain Expert Learning ☆67 · Updated 9 months ago
- An open-source replication and extension of Meta AI's LLaMA dataset ☆24 · Updated last year
- A converter and basic tester for rwkv onnx ☆41 · Updated 9 months ago
- ☆40 · Updated last year
- RWKV-7: Surpassing GPT ☆40 · Updated this week
- RWKV model implementation ☆38 · Updated last year
- ☆13 · Updated last year
- ☆128 · Updated 2 years ago
- Conversational Language model toolkit for training against human preferences. ☆40 · Updated 7 months ago
- Script and instructions on how to fine-tune a large RWKV model on your data for the Alpaca dataset ☆31 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆53 · Updated last year
- Code repository for the c-BTM paper ☆105 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 6 months ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated last year
- QuIP quantization ☆46 · Updated 7 months ago