CyberZHG / keras-self-attention
Attention mechanism for processing sequential data that considers the context for each timestamp.
☆657 · Updated Jan 22, 2022
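A minimal usage sketch of the layer this repository provides, with names (`SeqSelfAttention`, the `attention_activation` argument, the `keras_self_attention` import path) assumed from the project's README rather than taken from this page: the attention layer sits on top of a recurrent layer that returns the full sequence, so each timestamp is re-weighted by its context.

```python
# Minimal sketch (assumed API from the project's README): stack SeqSelfAttention
# on an LSTM that returns per-timestep outputs, so every timestamp attends to
# the rest of the sequence.
import keras
from keras_self_attention import SeqSelfAttention  # pip install keras-self-attention

model = keras.models.Sequential()
model.add(keras.layers.Embedding(input_dim=10000, output_dim=300, mask_zero=True))
model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=128, return_sequences=True)))
model.add(SeqSelfAttention(attention_activation='sigmoid'))  # context-aware output per timestamp
model.add(keras.layers.Dense(units=5, activation='softmax'))  # e.g. per-token tag prediction
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
```

Depending on the Keras/TensorFlow version in use, the package may require extra configuration (e.g. a tf.keras compatibility flag); consult the repository README for the exact options.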
Alternatives and similar repositories for keras-self-attention
Users interested in keras-self-attention are comparing it to the libraries listed below.
- A wrapper layer for stacking layers horizontally ☆228 · Updated Jan 22, 2022
- Keras Attention Layer (Luong and Bahdanau scores). ☆2,816 · Updated Nov 17, 2023
- Transformer implemented in Keras ☆369 · Updated Jan 22, 2022
- Self-attention for text classification ☆119 · Updated Nov 3, 2018
- Keras library for building (Universal) Transformers, facilitating BERT and GPT models ☆541 · Updated May 30, 2020
- Implementation of BERT that can load official pre-trained models for feature extraction and prediction ☆2,428 · Updated Jan 22, 2022
- Keras Layer implementation of Attention for Sequential models ☆444 · Updated Mar 25, 2023
- Layer normalization implemented in Keras ☆60 · Updated Jan 22, 2022
- A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need ☆713 · Updated Sep 24, 2021
- Visualizing RNNs using the attention mechanism ☆751 · Updated Jun 25, 2019
- SNAIL Attention Block for Keras. ☆17 · Updated Mar 30, 2020
- Transformer-XL with checkpoint loader ☆68 · Updated Jan 22, 2022
- Implementations for a family of attention mechanisms, suitable for all kinds of natural language processing tasks and compatible with Ten… ☆362 · Updated Feb 6, 2024
- Keras implementation of BERT with pre-trained weights ☆816 · Updated Jul 26, 2019
- Attention-based LSTM/Dense implemented in Keras ☆300 · Updated Apr 21, 2018
- Contains an implementation of the attention mechanism and a Keras text classifier wrapper. ☆29 · Updated Sep 18, 2018
- Collection of custom layers and utility functions for Keras which are missing in the main framework. ☆62 · Updated May 25, 2020
- Some attention implementations ☆1,452 · Updated Nov 20, 2019
- Adaptive embedding and softmax ☆17 · Updated Jan 22, 2022
- Keras Temporal Convolutional Network. Supports Python and R. ☆2,001 · Updated Apr 8, 2025
- Position embedding layers in Keras ☆58 · Updated Jan 22, 2022
- Graph convolutional layers ☆62 · Updated Jan 22, 2022
- Calculate similarity with embedding ☆11 · Updated Jan 22, 2022
- Keras community contributions ☆1,585 · Updated Oct 21, 2022
- An example attention network with simple dataset. ☆228 · Updated Mar 5, 2019
- CRF (Conditional Random Field) layer for TensorFlow 1.x with many powerful functions ☆15 · Updated Jan 3, 2020
- Keras implementation of Attention Augmented Convolutional Neural Networks ☆120 · Updated Mar 6, 2020
- How to use ELMo embeddings in Keras with TensorFlow Hub ☆260 · Updated Dec 18, 2018
- AdaBound optimizer in Keras ☆56 · Updated Jul 11, 2020
- A Hyperparameter Tuning Library for Keras ☆2,917 · Updated Dec 1, 2025
- Keras/TF implementation of AdamW, SGDW, NadamW, Warm Restarts, and Learning Rate multipliers ☆169 · Updated Jan 6, 2022
- Gradient accumulation for Keras ☆35 · Updated Jun 27, 2021
- Some examples of fine-tuning BERT in Keras ☆657 · Updated Oct 24, 2019
- A simple implementation of a self-attention layer that outputs a flattened sentence embedding matrix, with a Frobenius norm penalty ☆16 · Updated Sep 14, 2018
- Keras implementation of Transformers for humans ☆5,420 · Updated Nov 11, 2024
- My implementation of "Hierarchical Attention Networks for Document Classification" in Keras ☆26 · Updated Feb 14, 2018
- Sequence to Sequence Learning with Keras ☆3,177 · Updated Aug 20, 2022
- RAdam implemented in Keras & TensorFlow ☆325 · Updated Jan 22, 2022
- Keras example of seq2seq for automatic title generation ☆332 · Updated Dec 9, 2019