Alic-Li / BlackGoose_Rimer
BlackGoose Rimer: RWKV as a Superior Architecture for Large-Scale Time Series Modeling
☆28 · Updated 3 months ago
Alternatives and similar repositories for BlackGoose_Rimer
Users interested in BlackGoose_Rimer are comparing it to the repositories listed below:
- Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in PyTorch ☆94 · Updated 8 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆45 · Updated 2 months ago
- State Space Models ☆70 · Updated last year
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆207 · Updated last week
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆112 · Updated this week
- A repository for DenseSSMs ☆89 · Updated last year
- The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging diff… ☆58 · Updated last week
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated this week
- ☆22 · Updated 9 months ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy… ☆45 · Updated this week
- ☆38 · Updated 5 months ago
- HGRN2: Gated Linear RNNs with State Expansion (a gated linear recurrence is sketched after this list) ☆55 · Updated last year
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… ☆47 · Updated last month
- C++ and CUDA ops for fused FourierKAN ☆81 · Updated last year
- ☆47 · Updated last year
- PyTorch implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated last week
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆25 · Updated last month
- [ICLR 2025 Spotlight] Official implementation of ToST (Token Statistics Transformer) ☆120 · Updated 8 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆27 · Updated 5 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆53 · Updated 7 months ago
- A simple PyTorch implementation of high-performance Multi-Query Attention (a minimal sketch appears after this list) ☆15 · Updated 2 years ago
- My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing o… ☆44 · Updated 10 months ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆50 · Updated 3 months ago
- ☆23 · Updated last year
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆192 · Updated 2 weeks ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆100 · Updated 4 months ago
- ☆85 · Updated 5 months ago
- Implementation of xLSTM in PyTorch from the paper: "xLSTM: Extended Long Short-Term Memory" ☆118 · Updated 2 weeks ago
- ☆18 · Updated 10 months ago
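
Several entries above (HGRN2, the Light Recurrent Unit, the RWKV family) build on gated linear recurrences. As a point of reference, here is a minimal sketch of the first-order, element-wise form; it is not the code of any linked repository. HGRN2 additionally expands the hidden state (its "state expansion"), and real implementations replace the Python loop with a parallel scan or a Triton kernel. The class and layer names below are illustrative.

```python
import torch
from torch import nn


class GatedLinearRNN(nn.Module):
    """First-order gated linear recurrence: h_t = g_t * h_{t-1} + (1 - g_t) * v_t."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)   # data-dependent forget gate (hypothetical name)
        self.value = nn.Linear(dim, dim)  # input/value projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        g = torch.sigmoid(self.gate(x))   # (b, t, d), each gate entry in (0, 1)
        v = self.value(x)
        h = x.new_zeros(b, d)             # initial hidden state
        outs = []
        for step in range(t):             # sequential form; training uses a parallel scan
            h = g[:, step] * h + (1 - g[:, step]) * v[:, step]
            outs.append(h)
        return torch.stack(outs, dim=1)   # (b, t, d)


# Usage: a (batch, time, dim) sequence in, same shape out.
y = GatedLinearRNN(64)(torch.randn(2, 16, 64))
```

Because the recurrence is linear in h, the whole sequence can be computed with an associative scan in O(log t) depth, which is what makes these models trainable in parallel like a Transformer.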
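The Multi-Query Attention entry above names the technique without showing the mechanism, so here is a hedged PyTorch sketch (again, not the linked repo's code): every query head attends against a single shared key/value head, which shrinks the KV cache by a factor of the head count during decoding. Identifiers such as MultiQueryAttention and kv_proj are assumptions for illustration.

```python
import torch
import torch.nn.functional as F
from torch import nn


class MultiQueryAttention(nn.Module):
    """Multi-Query Attention: many query heads share one key/value head."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q_proj = nn.Linear(dim, dim)                 # one projection per query head
        self.kv_proj = nn.Linear(dim, 2 * self.head_dim)  # a single shared K/V head
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        # Queries: (b, num_heads, t, head_dim)
        q = self.q_proj(x).view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        # One K and one V head: (b, t, head_dim) each
        k, v = self.kv_proj(x).chunk(2, dim=-1)
        # Broadcast the shared K/V head across all query heads (a view, no copy)
        k = k.unsqueeze(1).expand(b, self.num_heads, t, self.head_dim)
        v = v.unsqueeze(1).expand(b, self.num_heads, t, self.head_dim)
        # Causal mask, as in decoder-only LMs
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out_proj(out.transpose(1, 2).reshape(b, t, -1))


# Usage: (batch, time, dim) in, (batch, time, dim) out.
y = MultiQueryAttention(256, 8)(torch.randn(2, 16, 256))
```

The design trade-off: at inference only one K/V head per layer is cached instead of num_heads of them, cutting KV-cache memory and bandwidth, at some cost in quality relative to full multi-head attention.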