npk48 / rwkv_cuda
☆11 · Updated 2 years ago
Alternatives and similar repositories for rwkv_cuda
Users interested in rwkv_cuda are comparing it to the libraries listed below
- Chatbot that answers frequently asked questions in French, English, and Tunisian using the Rasa NLU framework and RWKV-4-Raven ☆13 · Updated 2 years ago
- BlinkDL's RWKV-v4 running in the browser ☆47 · Updated 2 years ago
- Web browser version of StarCoder.cpp ☆45 · Updated 2 years ago
- GGML implementation of the BERT model with Python bindings and quantization. ☆56 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- A converter and basic tester for RWKV ONNX ☆43 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- RWKV centralised docs for the community ☆29 · Updated last month
- Download full or partial git-lfs repos without temporarily using 2x disk space ☆30 · Updated last year
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 7 months ago
- Training a reward model for RLHF using RWKV. ☆15 · Updated 2 years ago
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆25 · Updated last year
- JAX implementations of RWKV ☆19 · Updated 2 years ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- ☆13 · Updated 2 years ago
- Port of Microsoft's BioGPT in C/C++ using ggml ☆85 · Updated last year
- Experiments with BitNet inference on CPU ☆54 · Updated last year
- Embeddings-focused small version of the Llama NLP model ☆104 · Updated 2 years ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆21 · Updated 2 years ago
- ☆23 · Updated last month
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable). So it's combining the best … ☆10 · Updated last year
- ☆20 · Updated last year
- Modified Beam Search with periodic restart ☆12 · Updated last year
- ☆38 · Updated 5 months ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- Easily convert HuggingFace models to GGUF format for llama.cpp ☆23 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆19 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- Rust bindings for CTranslate2 ☆14 · Updated 2 years ago