bojone / rnn
Some RNN implementations
☆51 · Updated 2 years ago
Alternatives and similar repositories for rnn
Users that are interested in rnn are comparing it to the libraries listed below
- PyTorch implementation of "Block Recurrent Transformers" (Hutchins & Schlag et al., 2022) ☆85 · Updated 3 years ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆66 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- A PyTorch & Keras implementation and demo of Fastformer. ☆189 · Updated 2 years ago
- ☆28 · Updated last year
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) ☆35 · Updated 3 years ago
- Analytical solutions for logistic regression and a single softmax layer ☆12 · Updated 4 years ago
- This is a code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generati… ☆34 · Updated 2 years ago
- A *tuned* minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆117 · Updated 4 years ago
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- [ICLR 2023] Official implementation of TNN in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling ☆80 · Updated last year
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- ICLR 2023 - Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated 2 years ago
- FLASHQuad_pytorch ☆68 · Updated 3 years ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated last week
- ☆83 · Updated 5 years ago
- ☆48 · Updated 3 months ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆125 · Updated last year
- Shuffling files of hundreds of GB in Python ☆33 · Updated 4 years ago
- ☆67 · Updated last year
- Implementation of the paper "Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting", https://arxi… ☆19 · Updated 4 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆62 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences" ☆70 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- Sparse Attention with Linear Units ☆19 · Updated 4 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆60 · Updated 5 years ago
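For context on what the listed RNN repositories implement, here is a minimal sketch of a vanilla (Elman) RNN forward pass in NumPy. The function and variable names are illustrative, not taken from any of the repos above.

```python
import numpy as np

def rnn_forward(xs, h0, Wxh, Whh, bh):
    """Vanilla RNN: h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + bh)."""
    h = h0
    hs = []
    for x in xs:                       # one update per timestep
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        hs.append(h)
    return hs

rng = np.random.default_rng(0)
d_in, d_h, T = 4, 8, 5
xs = [rng.standard_normal(d_in) for _ in range(T)]
Wxh = rng.standard_normal((d_h, d_in)) * 0.1   # input-to-hidden weights
Whh = rng.standard_normal((d_h, d_h)) * 0.1    # hidden-to-hidden weights
bh = np.zeros(d_h)
hs = rnn_forward(xs, np.zeros(d_h), Wxh, Whh, bh)
print(len(hs), hs[-1].shape)  # 5 (8,)
```

The tanh keeps every hidden activation in (-1, 1); variants in the list above (SRU, HGRN, Griffin) replace this dense recurrence with cheaper gated element-wise updates.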
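The "shuffling files of hundreds of GB in Python" entry refers to an out-of-core shuffle. A common approach (a sketch, not that repo's code; `external_shuffle` and its parameters are hypothetical names) is to scatter lines into random bucket files, then shuffle each bucket in memory:

```python
import os
import random
import tempfile

def external_shuffle(src_path, dst_path, n_buckets=16, seed=0):
    """Shuffle a line-based file too large for memory:
    1) scatter each line into a random bucket file,
    2) shuffle each bucket in memory,
    3) concatenate the shuffled buckets."""
    rng = random.Random(seed)
    tmpdir = tempfile.mkdtemp()
    paths = [os.path.join(tmpdir, f"bucket{i}") for i in range(n_buckets)]
    buckets = [open(p, "w") for p in paths]
    with open(src_path) as src:
        for line in src:
            buckets[rng.randrange(n_buckets)].write(line)
    for b in buckets:
        b.close()
    with open(dst_path, "w") as dst:
        for p in paths:
            with open(p) as b:
                lines = b.readlines()   # one bucket must fit in memory
            rng.shuffle(lines)
            dst.writelines(lines)
            os.remove(p)
    os.rmdir(tmpdir)
```

Each bucket holds roughly 1/`n_buckets` of the data, so `n_buckets` should be chosen large enough that a single bucket fits in RAM.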