lukemelas / simple-bert
A simple PyTorch implementation of BERT, complete with pretrained models and training scripts.
☆44 · Updated 6 years ago
Alternatives and similar repositories for simple-bert
Users interested in simple-bert are comparing it to the repositories listed below.
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 4 years ago
- Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch ☆59 · Updated 4 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 3 years ago
- A Pytorch implementation of the Attention on Attention module (both self and guided variants), for Visual Question Answering ☆43 · Updated 4 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch ☆58 · Updated 4 years ago
- ☆24 · Updated 4 years ago
- [NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks ☆60 · Updated 2 years ago
- State-of-the-art pretrained vision model from Bing Multimedia ☆18 · Updated last year
- A python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- An implementation of Geoffrey Hinton's 2021 paper "How to represent part-whole hierarchies in a neural network", in Pytorch. ☆57 · Updated 4 years ago
- ☆15 · Updated 2 years ago
- A JAX implementation of Broaden Your Views for Self-Supervised Video Learning, or BraVe for short. ☆48 · Updated last year
- ☆12 · Updated 3 years ago
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 3 years ago
- (unofficial) - a customized fork of DETR, optimized for intelligent object detection on 'real world' custom datasets ☆12 · Updated 4 years ago
- A non-JIT implementation / replication of OpenAI's CLIP, in PyTorch ☆34 · Updated 4 years ago
- ☆21 · Updated 4 years ago
- CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification - 4th Workshop on Computer Vision for Fashion, Art, and Design ☆27 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 3 years ago
- PyTorch implementation of IRMAE (https://arxiv.org/abs/2010.00679) ☆47 · Updated 3 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Official code for the NeurIPS paper "Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach". ☆16 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- (partial) replication of results from https://arxiv.org/abs/1912.07768 ☆26 · Updated 5 years ago
- Pytorch implementation of the hamburger module from the ICLR 2021 paper "Is Attention Better Than Matrix Decomposition?" ☆99 · Updated 4 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- High performance pytorch modules ☆18 · Updated 2 years ago