harvardnlp / annotated-transformer
An annotated implementation of the Transformer paper.
☆6,257 · Updated last year
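For orientation, here is a minimal, self-contained sketch (illustrative only, not the repo's own code) of the scaled dot-product attention that the annotated walkthrough builds up to, assuming a standard PyTorch setup:

```python
# Minimal sketch of scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V
import math
import torch

def attention(query, key, value, mask=None):
    """Return the attended values and the attention weights."""
    d_k = query.size(-1)
    # Similarity scores between queries and keys, scaled by sqrt(d_k)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # Block masked positions before the softmax
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = scores.softmax(dim=-1)
    return torch.matmul(weights, value), weights

# Example: batch of 2 sequences, length 5, model dimension 8
q = k = v = torch.randn(2, 5, 8)
out, attn = attention(q, k, v)
print(out.shape, attn.shape)  # torch.Size([2, 5, 8]) torch.Size([2, 5, 5])
```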
Alternatives and similar repositories for annotated-transformer
Users interested in annotated-transformer are comparing it to the repositories listed below.
- A PyTorch implementation of the Transformer model in "Attention is All You Need". ☆9,217 · Updated last year
- BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.) ☆7,430 · Updated last year
- Google AI 2018 BERT pytorch implementation ☆6,415 · Updated last year
- Graph Neural Network Library for PyTorch ☆22,416 · Updated this week
- A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch ☆8,670 · Updated 3 weeks ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,771 · Updated last week
- Transformer: PyTorch Implementation of "Attention Is All You Need" ☆3,752 · Updated 9 months ago
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆22,961 · Updated 3 months ago
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ☆16,172 · Updated 2 years ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,340 · Updated this week
- This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". ☆14,850 · Updated 10 months ago
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆8,956 · Updated last month
- Python package built to ease deep learning on graphs, on top of existing DL frameworks. ☆13,893 · Updated 3 months ago
- 🎨 ML Visuals contains figures and templates which you can reuse and customize to improve your scientific writing. ☆15,177 · Updated 2 years ago
- A TensorFlow Implementation of the Transformer: Attention Is All You Need ☆4,369 · Updated 2 years ago
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆34,319 · Updated this week
- State-of-the-Art Text Embeddings ☆16,812 · Updated last week
- An open-source NLP research library, built on PyTorch. ☆11,850 · Updated 2 years ago
- Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText. ☆5,565 · Updated last year
- tensorboard for pytorch (and chainer, mxnet, numpy, ...) ☆7,943 · Updated last month
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" ☆6,367 · Updated last month
- Ongoing research training transformer models at scale ☆12,468 · Updated this week
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,185 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆17,664 · Updated this week
- State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enter… ☆14,298 · Updated 9 months ago
- Open Source Neural Machine Translation and (Large) Language Models in PyTorch ☆6,881 · Updated 2 months ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆31,479 · Updated 4 months ago
- Must-read papers on graph neural networks (GNN) ☆16,428 · Updated last year