bojone / rnn
Some RNN implementations
☆50 · Updated 2 years ago
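The repository collects assorted RNN implementations. For quick orientation, here is a minimal NumPy sketch of a vanilla (Elman) RNN forward pass; it is illustrative only, not code from the repository, and the function name, shapes, and toy data are assumptions.

```python
import numpy as np

def rnn_forward(x, h0, Wxh, Whh, b):
    """Vanilla (Elman) RNN over a sequence: h_t = tanh(x_t Wxh + h_{t-1} Whh + b)."""
    h, states = h0, []
    for x_t in x:                      # iterate over time steps
        h = np.tanh(x_t @ Wxh + h @ Whh + b)
        states.append(h)
    return np.stack(states)            # (seq_len, hidden_dim)

# Toy usage with random weights (all shapes are assumptions).
rng = np.random.default_rng(0)
seq_len, input_dim, hidden_dim = 5, 8, 16
x = rng.normal(size=(seq_len, input_dim))
Wxh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
Whh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)
print(rnn_forward(x, np.zeros(hidden_dim), Wxh, Whh, b).shape)  # (5, 16)
```

The sequential dependence of each hidden state on the previous one is the bottleneck that several of the repositories listed below address, e.g. SRU ("Training RNNs as Fast as CNNs"), gated linear recurrences (Griffin, HGRN), and linear-attention Transformers (Transnormer, Flowformer).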
Alternatives and similar repositories for rnn
Users who are interested in rnn are comparing it to the libraries listed below
- Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755) ☆33 · Updated 3 years ago
- PyTorch implementation of "Block Recurrent Transformers" (Hutchins & Schlag et al., 2022) ☆84 · Updated 3 years ago
- ☆27 · Updated 10 months ago
- Transformer model based on the Gated Attention Unit (preview version) ☆97 · Updated 2 years ago
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆53 · Updated last month
- A Tight-fisted Optimizer ☆47 · Updated 2 years ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆64 · Updated last year
- ICLR2023 - Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated 2 years ago
- FLASHQuad_pytorch ☆67 · Updated 3 years ago
- Implementation of the paper "Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting", https://arxi… ☆19 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Shuffling hundreds of gigabytes of files in Python ☆33 · Updated 3 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆60 · Updated last year
- A PyTorch & Keras implementation and demo of Fastformer. ☆188 · Updated 2 years ago
- ☆83 · Updated 5 years ago
- Analytical solutions for logistic regression and single-layer softmax ☆12 · Updated 3 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆63 · Updated 3 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆60 · Updated 4 years ago
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch ☆77 · Updated 4 years ago
- Implementation of the paper NAST: Non-Autoregressive Spatial-Temporal Transformer for Time Series Forecasting. ☆76 · Updated 4 years ago
- ☆35 · Updated 3 years ago
- Code release for "Flowformer: Linearizing Transformers with Conservation Flows" (ICML 2022), https://arxiv.org/pdf/2202.06258.pdf ☆321 · Updated 10 months ago
- A Transformer-based single-model, multi-scale VAE ☆55 · Updated 3 years ago
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 3 years ago
- A custom PyTorch Dataset extension that provides faster iteration and better RAM usage ☆43 · Updated last year
- Official PyTorch Implementation for "Continual Transformers: Redundancy-Free Attention for Online Inference" [ICLR 2023] ☆28 · Updated last year
- Code for ICML 2020 paper: Do RNN and LSTM have Long Memory? ☆17 · Updated 4 years ago
- Lion and Adam optimization comparison (a sketch of the Lion update rule follows after this list) ☆61 · Updated 2 years ago
- ICML 21 - Voice2Series: Adversarial Reprogramming Acoustic Models for Time Series Classification ☆72 · Updated 11 months ago
- ☆51 · Updated 2 years ago
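One of the items above compares the Lion and Adam optimizers. As a reference point, here is a minimal NumPy sketch of the published Lion update rule (sign of an interpolated momentum plus decoupled weight decay); it is illustrative only, not code from the linked repository, and the toy quadratic objective and hyperparameter values are assumptions.

```python
import numpy as np

def lion_update(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step: update with the sign of an interpolated momentum,
    apply decoupled weight decay, then refresh the momentum."""
    c = beta1 * m + (1 - beta1) * g        # interpolation used only for the update direction
    w = w - lr * (np.sign(c) + wd * w)     # sign update + decoupled weight decay
    m = beta2 * m + (1 - beta2) * g        # momentum tracks the gradient with beta2
    return w, m

# Toy usage (assumption): minimise f(w) = ||w||^2 / 2, whose gradient is w itself.
w = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(w)
for _ in range(200):
    w, m = lion_update(w, g=w, m=m, lr=0.05)
print(w)  # entries oscillate near zero, within roughly one step size
```

Unlike Adam, Lion applies a uniform-magnitude (sign) update to every parameter, so the learning rate directly sets the step size and is typically chosen smaller than Adam's.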