seung7361 / RWKV-Pytorch
☆17 · Updated 2 years ago
Alternatives and similar repositories for RWKV-Pytorch
Users interested in RWKV-Pytorch are comparing it to the libraries listed below.
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated 3 months ago
- RWKV-TS: Beyond Traditional Recurrent Neural Network for Time Series Tasks ☆122 · Updated last year
- Huggingface compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent, … ☆227 · Updated last year
- Implementation of xLSTM in Pytorch from the paper: "xLSTM: Extended Long Short-Term Memory" ☆118 · Updated this week
- Efficient Python library for Extended LSTM with exponential gating, memory mixing, and matrix memory for superior sequence modeling. ☆303 · Updated last year
- Resources about xLSTM by Sepp Hochreiter ☆318 · Updated last year
- Some RNN implementations ☆52 · Updated 2 years ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Updated 2 years ago
- [ACL 2024 Findings] Hierarchy-aware Biased Bound Margin Loss Function for Hierarchical Text Classification ☆15 · Updated last year
- Minimal Mamba-2 implementation in PyTorch ☆242 · Updated last year
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆206 · Updated 3 weeks ago
- Pytorch implementation of the xLSTM model by Beck et al. (2024) ☆181 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆250 · Updated 2 years ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆189 · Updated last year
- Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆213 · Updated last week
- Implementation of the paper "Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting", https://arxi… ☆19 · Updated 4 years ago
- Pytorch implementation of "Block Recurrent Transformers" (Hutchins & Schlag et al., 2022)☆85Updated 3 years ago
- Non-official implementation of "Attention as an RNN" from https://arxiv.org/pdf/2405.13956, efficient associative parallel prefix scan an…☆27Updated last year
- Pytorch (Lightning) implementation of the Mamba model ☆35 · Updated 9 months ago
- Get down and dirty with FlashAttention 2.0 in PyTorch: plug in and play, no complex CUDA kernels ☆112 · Updated 2 years ago
- tinybig for deep function learning ☆60 · Updated 8 months ago
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆136 · Updated 3 weeks ago