BobMcDear / vit-pytorch
PyTorch implementation of the vision transformer
☆19 · Updated last year
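The repository description above is the only context available here, so the following is not the project's actual code. It is a minimal sketch of the standard ViT recipe the description refers to: patch embedding via a strided convolution, a CLS token plus learned positional embeddings, a transformer encoder, and a linear classification head. The class name `TinyViT` and all hyperparameters are made up for illustration.

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal ViT-style classifier (illustrative sketch, not the repo's code)."""

    def __init__(self, image_size=32, patch_size=4, dim=64,
                 depth=2, heads=4, num_classes=10):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding implemented as a strided convolution.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        # (B, 3, H, W) -> (B, num_patches, dim)
        x = self.patch_embed(x).flatten(2).transpose(1, 2)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])  # classify from the CLS token

logits = TinyViT()(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```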
Related projects:
- PyTorch implementation of SimSiam ☆8 · Updated last year
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆56 · Updated 10 months ago
- Code for the PAPA paper ☆27 · Updated last year
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆58 · Updated 2 years ago
- Transformer implemented with graph attention network (GAT) layers from PyTorch Geometric ☆15 · Updated 2 years ago
- Several types of attention modules written in PyTorch ☆37 · Updated 4 months ago
- PyTorch implementation of EfficientNet ☆9 · Updated last year
- Deep learning experiment code ☆19 · Updated last month
- Explorations into the recently proposed Taylor Series Linear Attention ☆85 · Updated last month
- A simple Torch implementation of high-performance Multi-Query Attention ☆15 · Updated last year
- PyTorch implementation of a simple way to enable (Stochastic) Frame Averaging for any network ☆45 · Updated last month
- Implementation of a transformer-based architecture in PyTorch ☆48 · Updated 3 years ago
- Implementation of CaiT models in TensorFlow with ImageNet-1k checkpoints. Includes code for inference and fine-tuning. ☆12 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆50 · Updated 10 months ago
- FID computation in Jax/Flax ☆23 · Updated 2 months ago
- Examples of using PyTorch hooks, as covered in my YouTube tutorial video (see the hook sketch after this list). ☆32 · Updated 10 months ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch ☆35 · Updated 2 years ago
- A practical implementation of GradNorm, Gradient Normalization for Adaptive Loss Balancing, in PyTorch ☆74 · Updated 7 months ago
- Code for the paper "On the Expressivity Role of LayerNorm in Transformers' Attention" (Findings of ACL 2023) ☆43 · Updated last year
- Collection of snippets for PyTorch users ☆26 · Updated 2 years ago
- Factorized Neural Layers ☆27 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆28 · Updated last week
- Implementation of Agent Attention in PyTorch ☆83 · Updated 2 months ago
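As referenced in the PyTorch-hooks entry above, forward hooks let you capture intermediate activations without modifying a model's code. The sketch below is not taken from that repository; it is a minimal, self-contained example using only stock `torch.nn` APIs, with a hypothetical helper `save_activation` and a throwaway two-layer model.

```python
import torch
import torch.nn as nn

# A hypothetical two-layer model used only to demonstrate hooks.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

activations = {}

def save_activation(name):
    # Forward hooks receive (module, inputs, output); we stash the output.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on every submodule.
handles = [m.register_forward_hook(save_activation(f"layer_{i}"))
           for i, m in enumerate(model)]

x = torch.randn(2, 8)
model(x)
print({name: act.shape for name, act in activations.items()})

# Hooks should be removed once they are no longer needed.
for h in handles:
    h.remove()
```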