schwartz-lab-NLP / papa
Code for the PAPA paper
☆27 · Updated 2 years ago
Alternatives and similar repositories for papa
Users interested in papa are comparing it to the repositories listed below.
- This is the official implementation for our ACL 2024 paper: "Causal Estimation of Memorisation Profiles". ☆23 · Updated 3 months ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆62 · Updated 3 years ago
- [ACL 2023] Training Trajectories of Language Models Across Scales https://arxiv.org/pdf/2212.09803.pdf ☆24 · Updated last year
- The codebase for Causal Distillation for Language Models (NAACL '22) ☆25 · Updated 3 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- Supplementary code for Editable Neural Networks, an ICLR 2020 submission. ☆46 · Updated 5 years ago
- ☆26 · Updated last year
- Code for gradient rollback, which explains predictions of neural matrix factorization models, as for example used for knowledge base comp… ☆21 · Updated 4 years ago
- Repo for ICML 2023 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆76 · Updated last year
- Staged Training for Transformer Language Models ☆32 · Updated 3 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 3 years ago
- This is the official PyTorch repo for "UNIREX: A Unified Learning Framework for Language Model Rationale Extraction" (ICML 2022). ☆25 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆59 · Updated 3 years ago
- ☆51 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- [EMNLP 2022] Language Model Pre-Training with Sparse Latent Typing ☆14 · Updated 2 years ago
- ☆97 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆52 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 3 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆43 · Updated 4 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆65 · Updated 4 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆14 · Updated 3 years ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 · Updated last month
- Parameter Efficient Transfer Learning with Diff Pruning ☆74 · Updated 4 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated last month