fdalvi / analyzing-redundancy-in-pretrained-transformer-models
Code for "Analyzing Redundancy in Pretrained Transformer Models", accepted at EMNLP 2020
☆14 · Updated 5 years ago
Alternatives and similar repositories for analyzing-redundancy-in-pretrained-transformer-models
Users interested in analyzing-redundancy-in-pretrained-transformer-models are comparing it to the libraries listed below.
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆33 · Updated 10 months ago
- ☆14 · Updated last year
- ☆44 · Updated 11 months ago
- Few-shot Learning with Auxiliary Data ☆31 · Updated last year
- Hugging Face RoBERTa with Flash Attention 2 ☆23 · Updated last month
- ☆26 · Updated 2 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆14 · Updated 4 years ago
- Code for NAACL 2022 paper "Reframing Human-AI Collaboration for Generating Free-Text Explanations" ☆31 · Updated 2 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 4 months ago
- RATransformers 🐭 - Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware! ☆41 · Updated 2 years ago
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as fields extraction, tokenization, prompting, batchi… ☆35 · Updated last year
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆28 · Updated 2 weeks ago
- ☆15 · Updated 2 years ago
- Unifew: Unified Few-shot Learning Model ☆18 · Updated 4 years ago
- [ICLR 2023] PyTorch code for "Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees" ☆24 · Updated 2 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 2 years ago
- ☆56 · Updated last year
- [ACL'24 Oral] Analysing the Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated last year
- Transformers at any scale ☆41 · Updated last year
- An Empirical Study on Contrastive Search and Contrastive Decoding for Open-ended Text Generation ☆27 · Updated last year
- Repo for "Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks" (ACL 2023 Findings) ☆16 · Updated 2 years ago
- ☆35 · Updated 3 months ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP 2021 ☆29 · Updated 2 years ago
- Official repository for our EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" (https… ☆44 · Updated last year
- Code for our paper "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆57 · Updated 2 years ago