UnstoppableCurry / RWKV-LM-Interpretability-Research
Interpretability analysis of language model outliers and attempts to distill the model
☆13 · Updated 2 years ago
Alternatives and similar repositories for RWKV-LM-Interpretability-Research
Users interested in RWKV-LM-Interpretability-Research are comparing it to the libraries listed below.
- JAX implementations of RWKV ☆19 · Updated 2 years ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- ☆12 · Updated last year
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆21 · Updated 2 years ago
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- Training a reward model for RLHF using RWKV. ☆15 · Updated 2 years ago
- Training Models Daily ☆17 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated 2 years ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Updated last year
- ☆27 · Updated 2 years ago
- GoldFinch and other hybrid transformer components ☆12 · Updated last month
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- Implementation of Mamba in Rust ☆90 · Updated last year
- RWKV model implementation ☆38 · Updated 2 years ago
- Thispersondoesnotexist went down, so this time, while building it back up, I am going to open source all of it. ☆91 · Updated 2 years ago
- BlinkDL's RWKV-v4 running in the browser ☆48 · Updated 2 years ago
- ☆29 · Updated last year
- ChatGPT-like Web UI for RWKVstic ☆100 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- Chatbot that answers frequently asked questions in French, English, and Tunisian using the Rasa NLU framework and RWKV-4-Raven ☆13 · Updated 2 years ago
- ☆71 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- RWKV in nanoGPT style ☆197 · Updated last year
- An EXA-Scale repository of Multi-Modality AI resources from papers and models, to foundational libraries! ☆40 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆70 · Updated last year
- Image Diffusion block-merging technique applied to transformer-based language models. ☆56 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆65 · Updated 2 years ago