zyaaa-ux / ROSA-Tuning
ROSA-Tuning
☆65 · Updated last week
Alternatives and similar repositories for ROSA-Tuning
Users interested in ROSA-Tuning are comparing it to the libraries listed below.
- ROSA+: RWKV's ROSA implementation with fallback statistical predictor ☆31 · Updated 3 months ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆47 · Updated 3 months ago
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆54 · Updated last month
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆30 · Updated 2 weeks ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆47 · Updated 5 months ago
- RADLADS training code ☆36 · Updated 9 months ago
- ☆13 · Updated last year
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- ☆163 · Updated 7 months ago
- RWKV, in easy to read code ☆72 · Updated 10 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆233 · Updated 7 months ago
- Work in progress. ☆79 · Updated 2 months ago
- Continuous batching and parallel acceleration for RWKV6 ☆22 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Updated 3 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- Transformers components but in Triton ☆34 · Updated 9 months ago
- ☆54 · Updated last year
- ☆158 · Updated 11 months ago
- Code for data-aware compression of DeepSeek models ☆70 · Updated 2 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 9 months ago
- ☆71 · Updated 7 months ago
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths, and finetune the quantized LLMs ☆15 · Updated last year
- ☆171 · Updated 3 weeks ago
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training ☆36 · Updated 7 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆204 · Updated 2 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆119 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆117 · Updated 2 weeks ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 9 months ago
- QuIP quantization ☆61 · Updated last year