mertensu / transformer-tutorial
Visualising the Transformer encoder
☆111 · Updated 4 years ago
Alternatives and similar repositories for transformer-tutorial:
Users interested in transformer-tutorial are comparing it to the repositories listed below.
- http://nlp.seas.harvard.edu/2018/04/03/attention.html ☆63 · Updated 3 years ago
- ☆102 · Updated 4 years ago
- Creates a learning-curve plot for Jupyter/Colab notebooks that updates in real time. ☆175 · Updated 3 years ago
- Distillation of a BERT model with the Catalyst framework ☆76 · Updated last year
- 📄 A repo containing notes and discussions for our weekly NLP/ML paper discussions. ☆150 · Updated 4 years ago
- A tiny Catalyst-like experiment runner framework on top of micrograd. ☆51 · Updated 4 years ago
- Docs ☆143 · Updated 3 months ago
- diagNNose is a Python library that provides a broad set of tools for analysing hidden activations of neural models. ☆81 · Updated last year
- Create interactive textual heat maps for Jupyter notebooks ☆196 · Updated 9 months ago
- Course webpage for COMP 790, (Deep) Learning from Limited Labeled Data ☆304 · Updated 4 years ago
- The second part of the Deep Learning Course for the Master in High-Performance Computing (SISSA/ICTP). ☆33 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- Robustness Gym is an evaluation toolkit for machine learning. ☆442 · Updated 2 years ago
- 📰 Natural language processing (NLP) newsletter ☆301 · Updated 4 years ago
- A collection of code snippets for my PyTorch Lightning projects ☆107 · Updated 4 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆146 · Updated 3 years ago
- A demonstration of the attention mechanism with some toy experiments and explanations. ☆107 · Updated 6 years ago
- Check if you have training samples in your test set ☆64 · Updated 2 years ago
- AI/ML citation graph with postgres + graphql ☆187 · Updated 4 years ago
- Repository for tutorial sessions at EEML2020 ☆272 · Updated 4 years ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆141 · Updated 2 years ago
- Implementation of Feedback Transformer in Pytorch ☆105 · Updated 4 years ago
- Pre-training of Language Models for Language Understanding ☆83 · Updated 5 years ago
- ☆74 · Updated 5 years ago
- ☆40 · Updated last year
- ☆108 · Updated 2 years ago
- ☆264 · Updated 5 years ago
- HetSeq: Distributed GPU Training on Heterogeneous Infrastructure ☆106 · Updated last year
- ☆64 · Updated 4 years ago
- ☆153 · Updated 4 years ago
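Several of the repositories above (the encoder visualisation, the attention demonstrations, the textual heat maps) plot the attention weight matrix. A minimal, dependency-free sketch of how those weights are computed with scaled dot-product attention — all names here are illustrative, not taken from any repo above:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(queries, keys):
    """Scaled dot-product attention weights: softmax(Q·K^T / sqrt(d)).

    Returns one row of weights per query; each row sums to 1 and is
    exactly the matrix that attention heat-map tools visualise.
    """
    d = len(queries[0])
    weights = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights.append(softmax(scores))
    return weights

# Toy example: 2 query vectors attending over 3 key vectors (d = 2).
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = attention_weights(Q, K)
```

Each row of `W` is a probability distribution over the keys; visualisation tools typically render it as a token-by-token heat map.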