Alic-Li / BlackGoose_Rimer
BlackGoose Rimer: RWKV as a Superior Architecture for Large-Scale Time Series Modeling
☆29 · Updated 4 months ago
Alternatives and similar repositories for BlackGoose_Rimer
Users interested in BlackGoose_Rimer are comparing it to the libraries listed below.
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton (a minimal sketch of the core idea follows this list) ☆45 · Updated 3 months ago
- Large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆45 · Updated last month
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆28 · Updated 7 months ago
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆25 · Updated last month
- The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging diff… ☆60 · Updated last month
- ☆39 · Updated 7 months ago
- ☆23 · Updated 11 months ago
- Implementation of DeepCrossAttention, proposed by Heddes et al. at Google Research, in PyTorch ☆95 · Updated 9 months ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆51 · Updated 4 months ago
- A simple PyTorch implementation of high-performance Multi-Query Attention (see the MQA sketch after this list) ☆16 · Updated 2 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 10 months ago
- Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts", in PyTorch and Ze… ☆117 · Updated last month
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 7 months ago
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated last month
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆38 · Updated 9 months ago
- State Space Models ☆71 · Updated last year
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆20 · Updated 3 weeks ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆74 · Updated last week
- A repository for DenseSSMs ☆89 · Updated last year
- Multi-Layer Key-Value sharing experiments on Pythia models ☆34 · Updated last year
- ☆49 · Updated 5 months ago
- [ICLR 2025 Spotlight] Official implementation of ToST (Token Statistics Transformer) ☆127 · Updated 9 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated last year
- ☆19 · Updated 11 months ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆104 · Updated 6 months ago
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆43 · Updated 10 months ago
- RWKV-7: Surpassing GPT ☆101 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆57 · Updated 8 months ago
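
Several entries above (the PyTorch/Triton linear attention collection, HGRN2, the bi-directional linear attention kernels) share one core trick: replacing softmax attention with a kernel feature map so the sequence can be folded into a running state. As a rough orientation only, here is a minimal causal linear attention in plain PyTorch using the elu(x)+1 feature map from Katharopoulos et al. (2020); the function name and shapes are illustrative and not taken from any repository listed here, and production Triton kernels avoid materializing the per-step state the way this sketch does.

```python
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps=1e-6):
    """Illustrative O(T) linear attention; q, k, v: (batch, heads, time, dim)."""
    phi_q = F.elu(q) + 1  # positive feature map phi(x) = elu(x) + 1
    phi_k = F.elu(k) + 1
    # Running state S_t = sum_{s<=t} phi(k_s) v_s^T and normalizer z_t = sum_{s<=t} phi(k_s).
    # A fused kernel would carry S_t step by step instead of materializing every t.
    kv = torch.einsum("bhtd,bhte->bhtde", phi_k, v).cumsum(dim=2)
    z = phi_k.cumsum(dim=2)
    num = torch.einsum("bhtd,bhtde->bhte", phi_q, kv)  # phi(q_t)^T S_t
    den = torch.einsum("bhtd,bhtd->bht", phi_q, z)     # phi(q_t)^T z_t
    return num / den.clamp(min=eps).unsqueeze(-1)
```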
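
Likewise, the Multi-Query Attention entry above fits in a few lines: every query head attends against one shared key/value head, shrinking the KV cache by a factor of n_heads. A minimal sketch of the idea (Shazeer, 2019) in PyTorch, with illustrative names rather than the linked repository's actual code:

```python
import torch
import torch.nn.functional as F
from torch import nn

class MultiQueryAttention(nn.Module):
    """Multi-Query Attention: one K/V head shared by all query heads."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)           # per-head queries
        self.kv_proj = nn.Linear(d_model, 2 * self.d_head)  # single shared K and V head
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k, v = self.kv_proj(x).chunk(2, dim=-1)             # (b, t, d_head) each
        # expand() broadcasts the shared K/V head across query heads without copying
        k = k.unsqueeze(1).expand(b, self.n_heads, t, self.d_head)
        v = v.unsqueeze(1).expand(b, self.n_heads, t, self.d_head)
        o = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out_proj(o.transpose(1, 2).reshape(b, t, -1))
```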