harvardnlp / annotated-transformer
An annotated implementation of the Transformer paper.
☆6,710 · Updated last year
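The repository above annotates "Attention Is All You Need" section by section; the centerpiece of that paper is scaled dot-product attention. As a quick orientation, here is a minimal NumPy sketch of that operation (an illustrative example, not code from the repository; the function name and shapes are assumptions):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.

    q: (seq_q, d_k), k: (seq_k, d_k), v: (seq_k, d_v).
    Returns an array of shape (seq_q, d_v).
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                   # (seq_q, seq_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)      # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v                                # weighted sum of value rows

q = np.eye(3)
k = np.eye(3)
v = np.arange(9.0).reshape(3, 3)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (3, 3)
```

Full implementations (multi-head projections, masking, dropout) are what the repositories listed below provide; this sketch only shows the core computation.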
Alternatives and similar repositories for annotated-transformer
Users interested in annotated-transformer are comparing it to the libraries listed below.
- A PyTorch implementation of the Transformer model in "Attention is All You Need". ☆9,505 · Updated last year
- Google AI 2018 BERT PyTorch implementation. ☆6,501 · Updated 2 years ago
- BertViz: Visualize attention in NLP models (BERT, GPT2, BART, etc.). ☆7,764 · Updated 5 months ago
- Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText. ☆5,640 · Updated last year
- Transformer: PyTorch implementation of "Attention Is All You Need". ☆4,252 · Updated 4 months ago
- Transformer seq2seq model: a program that builds a language translator from a parallel corpus. ☆1,419 · Updated 2 years ago
- ☆3,678 · Updated 3 years ago
- A TensorFlow implementation of the Transformer: Attention Is All You Need. ☆4,438 · Updated 2 years ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers. ☆5,686 · Updated 2 weeks ago
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ☆16,729 · Updated 2 years ago
- Must-read papers on pre-trained language models. ☆3,363 · Updated 3 years ago
- Open-source neural machine translation and (large) language models in PyTorch. ☆6,972 · Updated last month
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch. ☆8,847 · Updated this week
- Longformer: The Long-Document Transformer. ☆2,174 · Updated 2 years ago
- Models, data loaders, and abstractions for language processing, powered by PyTorch. ☆3,559 · Updated 2 months ago
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings. https://arxiv.org/abs/2104.08821 ☆3,612 · Updated last year
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training". ☆2,257 · Updated 6 years ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". ☆6,450 · Updated 2 weeks ago
- ☆12,025 · Updated 8 months ago
- Ongoing research training transformer models at scale. ☆14,225 · Updated this week
- Unsupervised text tokenizer for neural-network-based text generation. ☆11,441 · Updated 2 weeks ago
- Natural Language Processing Tutorial for Deep Learning Researchers. ☆14,781 · Updated last year
- Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TF, and others). ☆9,283 · Updated 3 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,289 · Updated last week
- Simple transformer implementation from scratch in PyTorch (archival; latest version on Codeberg). ☆1,093 · Updated 8 months ago
- Acceptance rates for the major AI conferences. ☆4,670 · Updated 2 months ago
- XLNet: Generalized Autoregressive Pretraining for Language Understanding. ☆6,178 · Updated 2 years ago
- Natural Language Processing Tutorial for Deep Learning Researchers. ☆1,153 · Updated 3 years ago
- PyTorch implementations of various deep NLP models from CS224n (Stanford University). ☆2,950 · Updated 6 years ago
- Transformer implementation in PyTorch. ☆490 · Updated 6 years ago