recursal / minmodmon
Mini Model Daemon
☆12 · Updated last year
Alternatives and similar repositories for minmodmon
Users interested in minmodmon are comparing it to the libraries listed below.
- tinygrad port of the RWKV large language model. ☆45 · Updated 11 months ago
- RWKV, in easy-to-read code ☆72 · Updated 10 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 9 months ago
- Course project for COMP4471 on RWKV ☆17 · Updated 2 years ago
- JAX implementations of RWKV ☆19 · Updated 2 years ago
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine, capable of inference combining multiple states (pseudo-MoE). Easy to deploy… ☆47 · Updated 3 months ago
- GoldFinch and other hybrid transformer components ☆12 · Updated 2 months ago
- ☆171 · Updated 3 weeks ago
- Inference of Mamba and Mamba2 models in pure C ☆196 · Updated 2 weeks ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆74 · Updated last year
- RWKV in nanoGPT style ☆197 · Updated last year
- A fast RWKV tokenizer written in Rust ☆54 · Updated 5 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Centralised RWKV docs for the community ☆31 · Updated 3 weeks ago
- BlinkDL's RWKV-v4 running in the browser ☆48 · Updated 2 years ago
- Some preliminary explorations of Mamba's context scaling. ☆13 · Updated last year
- ☆40 · Updated 2 years ago
- Inference of RWKV v7 in pure C. ☆44 · Updated 4 months ago
- New optimizer ☆20 · Updated last year
- Train your own small BitNet model ☆77 · Updated last year
- Fast, modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- Direct Preference Optimization for RWKV, aiming at RWKV-5 and -6. ☆11 · Updated last year
- Implementation of MambaByte from "MambaByte: Token-free Selective State Space Model" in PyTorch and Zeta ☆125 · Updated this week
- 5× faster QLoRA finetuning with 60% less memory ☆21 · Updated last year
- ☆41 · Updated 9 months ago
- Experiments with BitNet inference on CPU ☆55 · Updated last year
- An unsupervised model-merging algorithm for Transformer-based language models. ☆108 · Updated last year
- Let us make psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- Token Omission Via Attention ☆128 · Updated last year