The-Swarm-Corporation / Mamba-R1
Mamba R1 represents a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Experts (MoE).
☆25 · Updated 2 weeks ago
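The repository's code isn't shown in this listing, so as a rough, hypothetical illustration of the combination described above (a state-space sequence mixer paired with a routed mixture-of-experts feed-forward), here is a minimal PyTorch sketch. All names (`SimpleSSM`, `MoEFeedForward`, `MambaMoEBlock`) are invented for this example, and the SSM stand-in (a depthwise causal convolution with a gated projection) deliberately simplifies Mamba's selective scan.

```python
# Hypothetical sketch of a Mamba-style block with an MoE feed-forward layer.
# Illustrative only: not the actual Mamba-R1 implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSSM(nn.Module):
    """Simplified stand-in for Mamba's selective state space mixer:
    a depthwise causal conv combined with a gated linear projection."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=2, groups=dim)
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, seq, dim)
        # Causal conv: pad right-heavy, then truncate back to seq length.
        h = self.conv(x.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return self.proj(F.silu(self.gate(x)) * h)

class MoEFeedForward(nn.Module):
    """Top-1 token routing over a pool of expert MLPs."""
    def __init__(self, dim, num_experts=4, hidden_mult=4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(dim, dim * hidden_mult),
                nn.GELU(),
                nn.Linear(dim * hidden_mult, dim),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):
        flat = x.reshape(-1, x.size(-1))          # (tokens, dim)
        weight, idx = self.router(flat).softmax(-1).max(-1)  # top-1 gate
        out = torch.zeros_like(flat)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():  # only run the expert on tokens routed to it
                out[mask] = weight[mask, None] * expert(flat[mask])
        return out.view_as(x)

class MambaMoEBlock(nn.Module):
    """Pre-norm residual block: SSM mixer followed by MoE feed-forward."""
    def __init__(self, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mixer = SimpleSSM(dim)
        self.moe = MoEFeedForward(dim)

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))
        return x + self.moe(self.norm2(x))

x = torch.randn(2, 16, 64)
print(MambaMoEBlock(64)(x).shape)  # torch.Size([2, 16, 64])
```

The top-1 router activates a single expert per token, which is what gives MoE its scalability: total parameter count grows with the number of experts while per-token compute stays roughly constant.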
Alternatives and similar repositories for Mamba-R1
Users interested in Mamba-R1 are comparing it to the libraries listed below.
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- OmniByteFormer is a generalized Transformer model that can process any type of data by converting it into byte sequences, bypassing tradi… ☆13 · Updated this week
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant … ☆14 · Updated last year
- Simple implementation of TinyGPTV in super simple Zeta Lego blocks ☆15 · Updated 11 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 5 months ago
- A framework that makes it effortless to convert any LLM into a reasoning agent like o1 or DeepSeek's R1 ☆22 · Updated 2 weeks ago
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind ☆57 · Updated 5 months ago
- AgentParse is a high-performance parsing library designed to map various structured data formats (such as Pydantic models, JSON, YAML, an… ☆16 · Updated 2 weeks ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated 2 weeks ago
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆15 · Updated 11 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy… ☆45 · Updated last week
- Enhancement in multimodal representation learning ☆40 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 4 months ago
- An alternative way of calculating self-attention ☆18 · Updated last year
- Visualize multi-model embedding spaces. The first goal is to quickly get a lay of the land of any embedding space. Then be able to scroll… ☆28 · Updated last year
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated 2 weeks ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆53 · Updated 7 months ago
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- A simple, reproducible template for implementing AI research papers ☆22 · Updated last year
- Implementation of DeepCrossAttention, proposed by Heddes et al. at Google Research, in PyTorch ☆94 · Updated 8 months ago
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆12 · Updated 11 months ago
- Implementation of https://arxiv.org/pdf/2312.09299 ☆21 · Updated last year
- Train, tune, and run inference with the Bamba model