uzaymacar / attention-mechanisms
Implementations of a family of attention mechanisms for natural language processing tasks, compatible with TensorFlow 2.0 and Keras.
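As context for the listings below, the common building block of the Transformer-based entries is scaled dot-product attention. A minimal, framework-agnostic sketch in plain NumPy (function and variable names here are illustrative, not taken from any repository on this page):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(queries, keys, values):
    # queries: (T_q, d), keys: (T_k, d), values: (T_k, d_v)
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])  # (T_q, T_k)
    weights = softmax(scores, axis=-1)                   # each row sums to 1
    context = weights @ values                           # (T_q, d_v)
    return context, weights
```

The Keras-based libraries below wrap essentially this computation (plus learned projections, masking, and multiple heads) in reusable `Layer` objects.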
☆361 · Updated last year
Alternatives and similar repositories for attention-mechanisms
Users interested in attention-mechanisms are comparing it to the libraries listed below.
- A wrapper layer for stacking layers horizontally ☆228 · Updated 3 years ago
- Attention mechanism for processing sequential data that considers the context for each timestamp ☆655 · Updated 3 years ago
- Keras layer implementation of attention for sequential models ☆441 · Updated 2 years ago
- This repository contains various types of attention mechanisms, such as Bahdanau attention, soft attention, additive attention, hierarchical attention… ☆125 · Updated 3 years ago
- Transformer implemented in Keras ☆371 · Updated 3 years ago
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models ☆537 · Updated 5 years ago
- Attention-based bidirectional LSTM for classification tasks (ICASSP) ☆115 · Updated 2 years ago
- A Keras+TensorFlow implementation of the Transformer: Attention Is All You Need ☆711 · Updated 3 years ago
- An example attention network with a simple dataset ☆229 · Updated 6 years ago
- TensorFlow implementation of focal loss ☆189 · Updated 4 years ago
- All about attention in neural networks: soft attention, attention maps, local and global attention, and multi-head attention ☆231 · Updated 5 years ago
- Attention-based LSTM/Dense implemented in Keras ☆299 · Updated 7 years ago
- Self-attention and text classification ☆119 · Updated 6 years ago
- Keras/TF implementation of AdamW, SGDW, NadamW, warm restarts, and learning-rate multipliers ☆167 · Updated 3 years ago
- A bidirectional LSTM with attention for multiclass/multilabel text classification ☆172 · Updated 10 months ago
- Word embedding + LSTM + FC ☆161 · Updated 11 months ago
- A repository for multi-task learning with toy data in PyTorch and TensorFlow ☆136 · Updated 6 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in; fully compatible with PyTorch's LSTM ☆134 · Updated 5 years ago
- ☆76 · Updated 5 years ago
- A PyTorch implementation of fairseq convolutional sequence-to-sequence learning (Gehring et al., 2017) ☆46 · Updated 6 years ago
- A small, simple tutorial on crafting an LSTM nn.Module by hand in PyTorch ☆126 · Updated 5 years ago
- An attention layer in Keras ☆43 · Updated 6 years ago
- ☆185 · Updated 7 years ago
- A collection of custom layers and utility functions for Keras that are missing from the main framework ☆62 · Updated 5 years ago
- Minimal RNN classifier with self-attention in PyTorch ☆150 · Updated 3 years ago
- Document classification using LSTM + self-attention ☆113 · Updated 5 years ago
- Implementation of the Transformer architecture described by Vaswani et al. in "Attention Is All You Need" ☆28 · Updated 6 years ago
- Some state-of-the-art few-shot learning algorithms in TensorFlow 2 ☆215 · Updated last year
- Ensemble-learning books, papers, videos, and toolboxes ☆296 · Updated 5 years ago
- Binary and categorical focal loss implementations in Keras ☆278 · Updated 7 months ago
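Several entries above implement Bahdanau-style additive attention, where the score for each timestep is a small feed-forward network over the query and key rather than a dot product. A minimal NumPy sketch for a single query vector, assuming learned parameters `W1`, `W2`, and `v` (all names here are illustrative, not from any listed repository):

```python
import numpy as np

def additive_attention(query, keys, W1, W2, v):
    # query: (d_q,), keys: (T, d_k)
    # W1: (d_q, d_a), W2: (d_k, d_a), v: (d_a,)
    # score_t = v^T tanh(W1^T query + W2^T key_t)
    scores = np.tanh(query @ W1 + keys @ W2) @ v   # (T,)
    e = np.exp(scores - scores.max())              # stable softmax
    weights = e / e.sum()                          # (T,), sums to 1
    context = weights @ keys                       # (d_k,) weighted sum
    return context, weights
```

In the Keras libraries listed, `W1`, `W2`, and `v` would be trainable weights created in a layer's `build` method and learned end to end with the rest of the model.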