s-nlp / annotated-transformer
http://nlp.seas.harvard.edu/2018/04/03/attention.html
☆62 · Updated 4 years ago
Alternatives and similar repositories for annotated-transformer
Users interested in annotated-transformer are comparing it to the libraries listed below.
- Visualising the Transformer encoder · ☆111 · Updated 5 years ago
- ☆104 · Updated 4 years ago
- Create interactive textual heat maps for Jupyter notebooks · ☆196 · Updated last year
- A tiny Catalyst-like experiment runner framework on top of micrograd · ☆51 · Updated 4 years ago
- Distillation of a BERT model with the Catalyst framework · ☆78 · Updated 2 years ago
- Docs · ☆143 · Updated 11 months ago
- A small library with distillation, quantization and pruning pipelines · ☆26 · Updated 4 years ago
- PyTorch library for end-to-end transformer model training, inference and serving · ☆70 · Updated 6 months ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention · ☆69 · Updated 5 years ago
- diagNNose is a Python library that provides a broad set of tools for analysing hidden activations of neural models · ☆82 · Updated last year
- LM Pretraining with PyTorch/TPU · ☆136 · Updated 6 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis · ☆147 · Updated 4 years ago
- A collection of code snippets for my PyTorch Lightning projects · ☆107 · Updated 4 years ago
- ☆21 · Updated 6 years ago
- Code for scaling Transformers · ☆26 · Updated 4 years ago
- ☆30 · Updated 4 years ago
- Viewer for the 🤗 datasets library · ☆85 · Updated 4 years ago
- A deep learning library based on PyTorch, focused on low-resource language research and robustness · ☆70 · Updated 3 years ago
- ☆63 · Updated 5 years ago
- Yet another mini autodiff system for educational purposes · ☆30 · Updated 11 months ago
- ☆108 · Updated 2 years ago
- Code for BERT classifier fine-tuning for multiclass text classification · ☆71 · Updated 4 months ago
- Python library with command-line tools to interact with Dynabench (https://dynabench.org/), such as uploading models · ☆55 · Updated 3 years ago
- (re)Implementation of Learning Multi-level Dependencies for Robust Word Recognition · ☆17 · Updated last year
- HetSeq: Distributed GPU Training on Heterogeneous Infrastructure · ☆106 · Updated 2 years ago
- Deep Learning for Natural Language Processing - Lectures 2023 · ☆173 · Updated last year
- Central repository for all lectures on deep learning at UPC ETSETB TelecomBCN · ☆54 · Updated 2 years ago
- Training Transformer-XL on 128 GPUs · ☆141 · Updated 5 years ago
- The "tl;dr" on a few notable transformer papers (pre-2022) · ☆190 · Updated 2 years ago
- Theoretical Deep Learning: generalization ability · ☆46 · Updated 5 years ago