MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA's, striking a strong balance between performance and efficiency.
☆32 · Updated Mar 9, 2026
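MiSS's actual update rule is not spelled out here, but the LoRA baseline it is contrasted with can be sketched as follows. This is a minimal illustration of a standard low-rank adapter update only; all dimensions and variable names are assumptions for the example, and MiSS's own mechanism differs from what is shown.

```python
import numpy as np

# Hedged sketch of the LoRA-style low-rank update that MiSS is contrasted
# with; MiSS's own update mechanism differs and is NOT reproduced here.
# All shapes and names below are illustrative assumptions.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 32, 4                  # rank r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)                     # LoRA: rank-r additive correction

# At initialization the adapter is a no-op, and only A and B are trained:
assert np.allclose(y, W @ x)
print(A.size + B.size, "trainable vs", W.size, "frozen parameters")
```

The efficiency argument common to low-rank PEFT methods is visible in the last line: the adapter trains far fewer parameters than the frozen weight it modifies.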
Alternatives and similar repositories for MiSS
Users interested in MiSS are comparing it to the repositories listed below.
- ☆41 · Updated Apr 30, 2025
- Mini Model Daemon ☆12 · Updated Nov 9, 2024
- RWKV-SpeechChat is a real-time dialogue script based on a frozen 3B RWKV model with trained adapters and initial states. Various trained … ☆29 · Updated Jan 1, 2025
- PyTorch implementation of StableMask (ICML'24) ☆15 · Updated Jun 27, 2024
- A 20M-parameter RWKV v6 that can solve nonograms ☆14 · Updated Oct 18, 2024
- A fast RWKV Tokenizer written in Rust ☆54 · Updated Aug 12, 2025
- This repo is an exploratory experiment to enable frozen pretrained RWKV language models to accept speech modality input. We followed the … ☆54 · Updated Dec 23, 2024
- ☆19 · Updated May 2, 2024
- Inference RWKV v5, v6 and v7 with Qualcomm AI Engine Direct SDK ☆90 · Updated Feb 14, 2026
- Inference RWKV with multiple supported backends. ☆81 · Updated Mar 11, 2026
- [ACL 2025] Beyond Prompt Engineering: Robust Behavior Control in LLMs via Steering Target Atoms ☆38 · Updated Jun 4, 2025
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆57 · Updated Dec 24, 2025
- [COLM 2025] "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" ☆20 · Updated Apr 9, 2025
- BlackGoose Rimer: RWKV as a Superior Architecture for Large-Scale Time Series Modeling ☆32 · Updated Jul 11, 2025
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths, then fine-tune the quantized models ☆15 · Updated Jul 18, 2024
- Inference RWKV v7 in pure C. ☆44 · Updated Oct 10, 2025
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆56 · Updated this week
- Evaluating LLMs with Dynamic Data ☆112 · Updated Feb 11, 2026
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆48 · Updated Aug 22, 2025
- ☆13 · Updated Jan 22, 2025
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6. ☆11 · Updated Mar 1, 2024
- ☆13 · Updated Dec 21, 2024
- Model Context Protocol (MCP) library for the D language ☆13 · Updated Sep 14, 2025
- [EMNLP'24] LongHeads: Multi-Head Attention is Secretly a Long Context Processor ☆31 · Updated Apr 8, 2024
- RWKV v5/v6 LoRA trainer for the CUDA and ROCm platforms. RWKV is an RNN with transformer-level LLM performance. It can be directly trained like … ☆13 · Updated Mar 24, 2024
- Fast, modular code to create and train cutting-edge LLMs ☆68 · Updated May 16, 2024
- Fine-tuning Quantized Neural Networks with Zeroth-order Optimization ☆16 · Updated Sep 17, 2025
- [AAAI 2025] PAT: Pruning-Aware Tuning for Large Language Models ☆36 · Updated Feb 1, 2025
- Efficient RWKV inference engine. RWKV7 7.2B fp16 decoding at 10250 tps on a single 5090. ☆95 · Updated Feb 1, 2026
- An agent for a personal activity monitoring system on the Windows desktop. ☆12 · Updated Sep 19, 2018
- ☆13 · Updated Feb 20, 2026
- ☆15 · Updated Sep 24, 2023
- A free, open-source visual novel engine written in D. ☆24 · Updated Jan 17, 2026
- ☆35 · Updated Mar 25, 2024
- RWKV-7 mini ☆12 · Updated Mar 29, 2025
- Thaumcraft 4 Addon ☆13 · Updated Mar 15, 2026
- A 100% locally run AI web tool for generating WeChat replies using RWKV Runner ☆10 · Updated Oct 29, 2024
- [ICML 2025] LoRA fine-tuning directly on quantized models ☆39 · Updated Nov 25, 2024
- ICLR 2025 ☆31 · Updated May 21, 2025