archinetai / vat-pytorch
Virtual Adversarial Training (VAT) techniques in PyTorch
☆17 · Updated 3 years ago
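For orientation, below is a minimal sketch of the virtual adversarial loss (local distributional smoothness, with the perturbation direction found by power iteration), as it is commonly written in PyTorch. The function name and hyperparameters (`xi`, `eps`, `n_power`) are illustrative assumptions and are not taken from this repository's API.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Normalize a perturbation tensor to unit L2 norm per example.
    d_flat = d.view(d.size(0), -1)
    norms = d_flat.norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
    return d / (norms + 1e-8)

def vat_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    # Hypothetical helper: KL-based smoothness penalty of VAT (Miyato et al.),
    # not necessarily the interface exposed by vat-pytorch.
    with torch.no_grad():
        pred = F.softmax(model(x), dim=1)  # reference distribution p(y|x)

    # Start from a random direction and refine it by power iteration.
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        pred_hat = model(x + xi * d)
        adv_kl = F.kl_div(F.log_softmax(pred_hat, dim=1), pred, reduction="batchmean")
        grad = torch.autograd.grad(adv_kl, d)[0]
        d = _l2_normalize(grad.detach())

    # Virtual adversarial perturbation and the final smoothness loss.
    r_adv = eps * d
    pred_hat = model(x + r_adv)
    return F.kl_div(F.log_softmax(pred_hat, dim=1), pred, reduction="batchmean")
```

In use, this term is typically added to the supervised loss (e.g. `loss = ce_loss + alpha * vat_loss(model, x)`), with the reference prediction kept detached so only the perturbed forward pass contributes gradients.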
Alternatives and similar repositories for vat-pytorch
Users interested in vat-pytorch are comparing it to the libraries listed below.
- Code for the paper "Query-Key Normalization for Transformers" ☆49 · Updated 4 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Updated 4 years ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 · Updated 4 months ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- A python library for highly configurable transformers - easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- Code and data to accompany the camera-ready version of "Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Tra… ☆32 · Updated 4 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- PyTorch implementation of FNet: Mixing Tokens with Fourier transforms ☆28 · Updated 4 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated last year
- Unofficial implementation of the paper: Exploring the Space of Key-Value-Query Models with Intention ☆12 · Updated 2 years ago
- HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [ACL 2023] ☆14 · Updated 2 years ago
- Directed masked autoencoders ☆14 · Updated 2 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 5 years ago
- ☆31 · Updated last year
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- The official implementation for the ACL 2024 paper "Causal Estimation of Memorisation Profiles". ☆23 · Updated 7 months ago
- Data and code for the paper "Diverse Text Generation via Variational Encoder-Decoder Models with Gaussian Pr… ☆26 · Updated 3 years ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated last year
- [ICLR 2022] Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators ☆25 · Updated 2 years ago
- A Pytorch implementation of the Attention on Attention module (both self and guided variants) for Visual Question Answering ☆42 · Updated 4 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 3 years ago
- ☆11 · Updated 2 years ago
- Code for the ACL 2020 paper Character-Level Translation with Self-Attention ☆31 · Updated 5 years ago
- A visualizer to display attention weights on text ☆23 · Updated 6 years ago
- ☆20 · Updated last year
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆39 · Updated 3 years ago
- Pretraining summarization models using a corpus of nonsense ☆13 · Updated 4 years ago