archinetai / vat-pytorch
Virtual Adversarial Training (VAT) techniques in PyTorch
☆17 · Updated 2 years ago
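For context on the technique these libraries revolve around: below is a minimal sketch of the standard VAT objective from Miyato et al. (2018), which finds the input perturbation that most changes the model's predictive distribution (via power iteration) and penalizes the resulting divergence. The names here (`vat_loss`, `xi`, `eps`, `n_power`) are illustrative assumptions for this sketch, not vat-pytorch's actual API.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d: torch.Tensor) -> torch.Tensor:
    # Normalize each sample's perturbation to unit L2 norm.
    d_flat = d.reshape(d.size(0), -1)
    d_flat = d_flat / (d_flat.norm(dim=1, keepdim=True) + 1e-8)
    return d_flat.reshape(d.shape)

def vat_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    # Target distribution: the model's current prediction, held fixed
    # (no gradient flows through it).
    with torch.no_grad():
        target = F.softmax(model(x), dim=1)

    # Start from a random unit perturbation and refine it by power
    # iteration toward the direction that most changes the prediction.
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_()
        kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1),
                      target, reduction="batchmean")
        d = _l2_normalize(torch.autograd.grad(kl, d)[0])

    # Local distributional smoothness: divergence under the
    # adversarial perturbation of radius eps.
    return F.kl_div(F.log_softmax(model(x + eps * d), dim=1),
                    target, reduction="batchmean")
```

In a semi-supervised setup this term is typically added, with a weighting coefficient, to the supervised cross-entropy on labeled batches; `xi` sets the finite-difference step for the power iteration and `eps` the radius of the final adversarial perturbation.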
Alternatives and similar repositories for vat-pytorch
Users interested in vat-pytorch are comparing it to the libraries listed below.
- Code for the paper "Query-Key Normalization for Transformers" ☆43 · Updated 4 years ago
- This repository contains the data and code for the paper "Diverse Text Generation via Variational Encoder-Decoder Models with Gaussian Pr…" ☆25 · Updated 3 years ago
- Code for the paper "LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits" ☆13 · Updated 9 months ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- "Style Transfer as Data Augmentation: A Case Study on Named Entity Recognition" (EMNLP 2022) ☆16 · Updated 2 years ago
- Few-shot Learning with Auxiliary Data ☆28 · Updated last year
- PyTorch implementation of PaLM: A Hybrid Parser and Language Model. ☆10 · Updated 5 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [ACL 2023] ☆14 · Updated 2 years ago
- ☆11 · Updated last month
- Directed masked autoencoders ☆14 · Updated 2 years ago
- Code for the PAPA paper ☆27 · Updated 2 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆17 · Updated last year
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns…" ☆16 · Updated last month
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆30 · Updated 2 weeks ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Unofficial implementation of the paper "Exploring the Space of Key-Value-Query Models with Intention" ☆12 · Updated 2 years ago
- ☆32 · Updated last year
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 3 years ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated last year
- Official implementation of our ACL 2024 paper "Causal Estimation of Memorisation Profiles" ☆23 · Updated 3 months ago
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆23 · Updated last year
- Towards Semantics-Enhanced Pre-Training: Can Lexicon Definitions Help Learning Sentence Meanings? (AAAI 2021) ☆9 · Updated 4 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Applies ROME and MEMIT to Mamba-S4 models ☆14 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆37 · Updated last year
- Pretraining summarization models using a corpus of nonsense ☆13 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆62 · Updated 3 years ago