codefuse-ai / rodimus
☆178 · Updated 8 months ago
Alternatives and similar repositories for rodimus
Users who are interested in rodimus are comparing it to the libraries listed below.
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆53 · Updated 5 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- RADLADS training code ☆35 · Updated 8 months ago
- Universal Reasoning Model ☆113 · Updated 2 weeks ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆46 · Updated 2 months ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Official implementation of GRAPE: Group Representational Position Encoding (https://arxiv.org/abs/2512.07805) ☆70 · Updated last week
- ☆71 · Updated last year
- ☆62 · Updated this week
- Esoteric Language Models ☆108 · Updated last month
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆81 · Updated 2 weeks ago
- ☆39 · Updated 8 months ago
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆105 · Updated 7 months ago
- RWKV-7: Surpassing GPT ☆103 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 8 months ago
- ☆112 · Updated last year
- Code Implementation, Evaluations, Documentation, Links and Resources for the Min P paper ☆46 · Updated 4 months ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆187 · Updated 6 months ago
- Official repo of the LM2 paper ☆46 · Updated 10 months ago
- Official Code Repository for the paper "Key-value memory in the brain" ☆31 · Updated 10 months ago
- An unofficial PyTorch implementation of 'Efficient Infinite Context Transformers with Infini-attention' ☆54 · Updated last year
- When Reasoning Meets Its Laws ☆33 · Updated last week
- GoldFinch and other hybrid transformer components ☆12 · Updated last month
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- [EMNLP 2025] The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" ☆33 · Updated 4 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI. ☆87 · Updated 2 months ago
- ☆24 · Updated 7 months ago
- This is the official implementation for the paper "PENCIL: Long Thoughts with Short Memory". ☆69 · Updated 8 months ago