AXKuhta / rwkv-onnx-dml
Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or on CPU only [Windows and Linux]; limited to the 430M model at this time because of the 2 GB .onnx file size limitation
☆20 · Updated last year
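The repo's core choice — DirectML on Windows or plain CPU everywhere — corresponds to selecting an onnxruntime execution provider at session creation. Below is a minimal sketch of that selection; the model file name `rwkv-430m.onnx` and the idea of probing input names are illustrative assumptions, not taken from this repository, and the actual exported graph may differ.

```python
# Minimal sketch: pick DirectML when available, otherwise fall back to CPU.
# Assumption: "rwkv-430m.onnx" is a placeholder file name for illustration.
import onnxruntime as ort

available = ort.get_available_providers()
providers = (["DmlExecutionProvider", "CPUExecutionProvider"]
             if "DmlExecutionProvider" in available
             else ["CPUExecutionProvider"])

session = ort.InferenceSession("rwkv-430m.onnx", providers=providers)

# Inspect the graph to discover the real input names rather than hard-coding them.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```

The DmlExecutionProvider requires the onnxruntime-directml build (Windows only); the CPUExecutionProvider works on both Windows and Linux. The 2 GB ceiling mentioned in the description is the protobuf single-file limit, which is why larger RWKV exports would need ONNX's external-data format.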
Alternatives and similar repositories for rwkv-onnx-dml:
Users interested in rwkv-onnx-dml are comparing it to the libraries listed below.
- A converter and basic tester for rwkv onnx · ☆42 · Updated last year
- BlinkDL's RWKV-v4 running in the browser · ☆47 · Updated 2 years ago
- This project aims to make RWKV Accessible to everyone using a Hugging Face like interface, while keeping it close to the R and D RWKV bra… · ☆64 · Updated last year
- Course Project for COMP4471 on RWKV · ☆17 · Updated last year
- Trying to deconstruct RWKV in understandable terms · ☆14 · Updated last year
- ☆42 · Updated last year
- Training a reward model for RLHF using RWKV. · ☆14 · Updated last year
- JAX implementations of RWKV · ☆19 · Updated last year
- Gradio UI for RWKV LLM · ☆29 · Updated 2 years ago
- A highly customizable, full scale web backend for web-rwkv, built on axum with websocket protocol. · ☆26 · Updated 10 months ago
- Script and instruction how to fine-tune large RWKV model on your data for Alpaca dataset · ☆31 · Updated last year
- ☆13 · Updated last year
- Chatbot that answers frequently asked questions in French, English, and Tunisian using the Rasa NLU framework and RWKV-4-Raven · ☆13 · Updated last year
- Enhancing LangChain prompts to work better with RWKV models · ☆34 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… · ☆40 · Updated last year
- Conversational Language model toolkit for training against human preferences. · ☆41 · Updated 11 months ago
- tinygrad port of the RWKV large language model. · ☆44 · Updated this week
- RWKV is a RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … · ☆10 · Updated last year
- A fast RWKV Tokenizer written in Rust · ☆42 · Updated 6 months ago
- GoldFinch and other hybrid transformer components · ☆45 · Updated 7 months ago
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia · ☆41 · Updated 2 years ago
- ☆27 · Updated last year
- Implementation of the Mamba SSM with hf_integration. · ☆56 · Updated 6 months ago
- Experiments with BitNet inference on CPU · ☆53 · Updated 11 months ago
- An open-source replication and extension of the Meta AI's LLAMA dataset · ☆24 · Updated 2 years ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference · ☆35 · Updated 3 years ago
- RWKV centralised docs for the community · ☆20 · Updated last week
- Interpretability analysis of language model outlier and attempts to distill the model · ☆13 · Updated last year