codefuse-ai / rodimus
☆177 · Updated 9 months ago
Alternatives and similar repositories for rodimus
Users interested in rodimus are comparing it to the libraries listed below.
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆53 · Updated 2 weeks ago
- RADLADS training code ☆36 · Updated 8 months ago
- ☆71 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆47 · Updated 3 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 8 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆81 · Updated last month
- ☆40 · Updated 9 months ago
- Memory-optimized Mixture of Experts ☆72 · Updated 6 months ago
- GoldFinch and other hybrid transformer components ☆12 · Updated last month
- Official Code Repository for the paper "Key-value memory in the brain" ☆31 · Updated 11 months ago
- [NeurIPS 2025] Official implementation of "Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning" ☆29 · Updated 3 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Updated 9 months ago
- Esoteric Language Models ☆109 · Updated 2 months ago
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- Universal Reasoning Model ☆121 · Updated 2 weeks ago
- ☆29 · Updated 2 months ago
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆25 · Updated 3 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 10 months ago
- JAX Scalify: end-to-end scaled arithmetics ☆18 · Updated last year
- Code and Model for NeurIPS 2024 Spotlight Paper "Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training… ☆44 · Updated last year
- Code Implementation, Evaluations, Documentation, Links and Resources for Min P paper ☆46 · Updated 5 months ago
- ☆85 · Updated 2 months ago
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More ☆34 · Updated 8 months ago
- ☆54 · Updated last year
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP25] ☆33 · Updated 5 months ago
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM… ☆289 · Updated 6 months ago
- [ICML 2025] From Low-Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Updated 3 months ago