ArEnSc / Production-RWKV
This project aims to make RWKV accessible to everyone through a Hugging Face-like interface, while staying close to the R&D RWKV branch of the code.
☆64 · Updated 2 years ago
Alternatives and similar repositories for Production-RWKV
Users that are interested in Production-RWKV are comparing it to the libraries listed below
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆66 · Updated 3 years ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆39 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆62 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- ☆50 · Updated last year
- Code repository for the c-BTM paper ☆107 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, Pythia ☆40 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- ☆131 · Updated 3 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆58 · Updated 3 years ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆21 · Updated 2 years ago
- RWKV model implementation ☆38 · Updated 2 years ago
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆33 · Updated last year
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 4 years ago
- ☆35 · Updated 2 years ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- RWKV in nanoGPT style ☆193 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- rwkv_chatbot ☆61 · Updated 2 years ago
- Multi-Domain Expert Learning ☆66 · Updated last year
- Script and instructions for fine-tuning a large RWKV model on your data with the Alpaca dataset ☆31 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆70 · Updated 2 years ago
- O-GIA is an umbrella for a research, infrastructure, and projects ecosystem intended to provide open source, reproducible datasets, models, … ☆89 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last month
- ☆33 · Updated 2 years ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆222 · Updated last year