Whiax / BERT-Transformer-Pytorch
A basic implementation of BERT and the Transformer in PyTorch in one short Python file (also includes a "predict next word" GPT-style task)
☆45 · Updated 2 years ago
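For context on the kind of code this repository contains, below is a minimal illustrative sketch (not the repository's actual code) of a "predict next word" objective built on PyTorch's stock nn.TransformerEncoder with a causal mask. All names and hyperparameters (TinyCausalLM, vocab_size, d_model, etc.) are assumptions chosen for the example.

```python
# Illustrative sketch only, not code from Whiax/BERT-Transformer-Pytorch:
# a tiny causal ("predict next word") language model using PyTorch's
# built-in TransformerEncoder with a subsequent (causal) attention mask.
import torch
import torch.nn as nn

class TinyCausalLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, nhead=4, num_layers=2, max_len=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        seq_len = tokens.size(1)
        pos = torch.arange(seq_len, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask: position i may only attend to positions <= i.
        causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(tokens.device)
        h = self.encoder(x, mask=causal_mask)
        return self.lm_head(h)  # (batch, seq_len, vocab_size) next-token logits

model = TinyCausalLM()
tokens = torch.randint(0, 1000, (2, 16))  # dummy batch of token ids
logits = model(tokens)
# Next-word objective: predict token t+1 from tokens up to t.
loss = nn.functional.cross_entropy(logits[:, :-1].reshape(-1, 1000),
                                   tokens[:, 1:].reshape(-1))
print(loss.item())
```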
Alternatives and similar repositories for BERT-Transformer-Pytorch
Users interested in BERT-Transformer-Pytorch are comparing it to the libraries listed below.
- Check out the new version at the link! ☆22 · Updated 5 years ago
- ☆64 · Updated 5 years ago
- An implementation of masked language modeling for Pytorch, made as concise and simple as possible ☆179 · Updated 2 years ago
- PyTorch implementation of the Word2Vec (Skip-Gram Model) and visualizing the trained embeddings using TSNE ☆54 · Updated 5 years ago
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders" ☆40 · Updated 3 years ago
- Minimal implementation of Multi-layer Recurrent Neural Networks (LSTM) for character-level language modelling in PyTorch ☆49 · Updated 6 years ago
- A repository containing the code for the Bistable Recurrent Cell ☆47 · Updated 5 years ago
- Pytorch implementation of bistable recurrent cell with baseline comparisons. ☆25 · Updated 2 years ago
- Implementation of Machine Learning algorithms from scratch in Python ☆34 · Updated 6 years ago
- Tensorflow implementation of a linear attention architecture ☆44 · Updated 4 years ago
- Simple illustrative examples for energy-based models in PyTorch ☆68 · Updated 5 years ago
- The code for the video tutorial series on building a Transformer from scratch: https://www.youtube.com/watch?v=XR4VDnJzB8o ☆19 · Updated 2 years ago
- RNN Encoder-Decoder in PyTorch ☆45 · Updated last year
- Some notebooks for NLP ☆207 · Updated 2 years ago
- ☆31 · Updated 6 years ago
- A library for making Transformer Variational Autoencoders. (Extends the Huggingface/transformers library.) ☆145 · Updated 4 years ago
- A PyTorch implementation of the Transformer model from "Attention Is All You Need". ☆60 · Updated 6 years ago
- Unofficial PyTorch implementation of Fastformer based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆132 · Updated 4 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆167 · Updated last year
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built-in. Fully compatible with PyTorch LSTM. ☆134 · Updated 6 years ago
- ☆46 · Updated 5 years ago
- A tour of different optimization algorithms in PyTorch. ☆99 · Updated 4 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆118 · Updated 4 years ago
- Deep learning and natural language processing tutorial in PyTorch ☆23 · Updated 7 years ago
- A small repo showing how to easily use BERT (or other transformers) for inference ☆99 · Updated 6 years ago
- Pytorch implementation of Dauphin et al. (2016) "Language Modeling with Gated Convolutional Networks" ☆29 · Updated 3 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆134 · Updated last year
- Deterministic Decoding for Discrete Data in Variational Autoencoders ☆24 · Updated 5 years ago
- ☆48 · Updated 2 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago