fdalvi / analyzing-redundancy-in-pretrained-transformer-models
Code for "Analyzing Redundancy in Pretrained Transformer Models", accepted at EMNLP 2020
☆13 · Updated 4 years ago
Alternatives and similar repositories for analyzing-redundancy-in-pretrained-transformer-models
Users interested in analyzing-redundancy-in-pretrained-transformer-models are comparing it to the libraries listed below.
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated 2 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆14 · Updated 3 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆57 · Updated 2 years ago
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated last month
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆29 · Updated last week
- ☆14 · Updated 8 months ago
- Transformers at any scale ☆41 · Updated last year
- Few-shot Learning with Auxiliary Data ☆28 · Updated last year
- Embedding Recycling for Language models ☆38 · Updated last year
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022: Findings) (stay tuned & more will be updated) ☆22 · Updated 2 years ago
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- Tasks for describing differences between text distributions ☆16 · Updated 9 months ago
- Open-source Human Feedback Library ☆11 · Updated last year
- Triton version of GQA flash attention, based on the tutorial ☆11 · Updated 10 months ago
- ☆11 · Updated 11 months ago
- Minimum Description Length probing for neural network representations ☆19 · Updated 4 months ago
- Pile Deduplication Code ☆19 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- ☆38 · Updated last year
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 2 years ago
- ☆44 · Updated 6 months ago
- Repository for Skill Set Optimization ☆13 · Updated 10 months ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 2 years ago
- [EMNLP 2022] Language Model Pre-Training with Sparse Latent Typing ☆14 · Updated 2 years ago
- ☆12 · Updated last year
- This is the official implementation for our ACL 2024 paper: "Causal Estimation of Memorisation Profiles" ☆21 · Updated 2 months ago
- Learning to Model Editing Processes ☆26 · Updated 3 years ago
- Staged Training for Transformer Language Models ☆32 · Updated 3 years ago