mertensu / transformer-tutorial
Visualising the Transformer encoder
☆111 · Updated 4 years ago
Alternatives and similar repositories for transformer-tutorial:
Users interested in transformer-tutorial are comparing it to the libraries listed below.
- http://nlp.seas.harvard.edu/2018/04/03/attention.html ☆63 · Updated 3 years ago
- Create interactive textual heat maps for Jupyter notebooks ☆196 · Updated 8 months ago
- ☆102 · Updated 4 years ago
- Docs ☆143 · Updated 2 months ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆145 · Updated 3 years ago
- 📄 A repo containing notes and discussions for our weekly NLP/ML paper discussions. ☆150 · Updated 4 years ago
- A tiny Catalyst-like experiment runner framework on top of micrograd. ☆51 · Updated 4 years ago
- 📰 Natural language processing (NLP) newsletter ☆301 · Updated 4 years ago
- Creates a learning-curve plot for Jupyter/Colab notebooks that is updated in real time. ☆175 · Updated 3 years ago
- ☆153 · Updated 4 years ago
- Provides a systematic and extensible way to build, train, evaluate, and tune deep learning models using PyTorch. ☆93 · Updated 8 months ago
- Experiment logging & visualization ☆50 · Updated 3 years ago
- A collection of code snippets for my PyTorch Lightning projects ☆107 · Updated 4 years ago
- AI/ML citation graph with Postgres + GraphQL ☆187 · Updated 4 years ago
- ☆264 · Updated 5 years ago
- Python implementation of GLN in different frameworks ☆98 · Updated 4 years ago
- Configure Python functions explicitly and safely ☆126 · Updated 3 months ago
- Manifold-Mixup implementation for fastai V1 ☆19 · Updated 4 years ago
- Measure and visualize machine learning model performance without the usual boilerplate. ☆97 · Updated 5 months ago
- Distillation of a BERT model with the Catalyst framework ☆76 · Updated last year
- Course webpage for COMP 790, (Deep) Learning from Limited Labeled Data ☆303 · Updated 4 years ago
- TF 2.0 port of the AugMix paper ☆79 · Updated 5 years ago
- Robustness Gym is an evaluation toolkit for machine learning. ☆442 · Updated 2 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- Accelerated NLP pipelines for fast inference on CPU. Built with Transformers and ONNX Runtime. ☆126 · Updated 4 years ago
- ☆108 · Updated 2 years ago
- Implementation of Feedback Transformer in PyTorch ☆105 · Updated 3 years ago
- Tricks for Colab power users ☆171 · Updated 4 years ago
- NeuralPy: a Keras-like deep learning library that works on top of PyTorch ☆78 · Updated 8 months ago
- ☆137 · Updated 2 years ago