neubig / minnn-assignment
An assignment on creating a minimalist neural network toolkit for CS11-747
☆64 · Updated last year
Alternatives and similar repositories for minnn-assignment
Users who are interested in minnn-assignment are comparing it to the libraries listed below.
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- ☆64 · Updated 5 years ago
- ☆48 · Updated 5 years ago
- LM Pretraining with PyTorch/TPU ☆136 · Updated 6 years ago
- Code repo for "Transformer on a Diet" paper ☆31 · Updated 5 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆137 · Updated 2 years ago
- ☆104 · Updated 4 years ago
- Hyperparameter Search for AllenNLP ☆140 · Updated 8 months ago
- Checking the interpretability of attention on text classification models ☆49 · Updated 6 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆133 · Updated last year
- Training Transformer-XL on 128 GPUs ☆141 · Updated 5 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆173 · Updated 5 years ago
- This is a repository with the code for the EMNLP 2020 paper "Information-Theoretic Probing with Minimum Description Length" ☆71 · Updated last year
- Neural Text Generation with Unlikelihood Training ☆310 · Updated 4 years ago
- Code for EMNLP 2019 paper "Attention is not not Explanation" ☆58 · Updated 4 years ago
- Factorization of the neural parameter space for zero-shot multi-lingual and multi-task transfer ☆39 · Updated 5 years ago
- This repository contains code to replicate the no-longer publicly available Toronto BookCorpus dataset ☆49 · Updated 3 years ago
- ☆14 · Updated 6 years ago
- Pre-training of Language Models for Language Understanding ☆83 · Updated 6 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Code for EMNLP 2018 paper "Auto-Encoding Dictionary Definitions into Consistent Word Embeddings" ☆36 · Updated 7 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago
- ☆219 · Updated 5 years ago
- Code for reproducing experiments in our ACL 2019 paper "Probing Neural Network Comprehension of Natural Language Arguments" ☆53 · Updated 3 years ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆98 · Updated 3 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 5 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- A BART version of an open-domain QA model in a closed-book setup ☆119 · Updated 5 years ago
- Understanding the Difficulty of Training Transformers ☆332 · Updated 3 years ago
- Learning the Difference that Makes a Difference with Counterfactually-Augmented Data ☆170 · Updated 4 years ago