frankaging / Causal-Distill
The Codebase for Causal Distillation for Language Models (NAACL '22)
☆25 · Updated 3 years ago
Alternatives and similar repositories for Causal-Distill
Users interested in Causal-Distill are comparing it to the libraries listed below
- Code for "Tracing Knowledge in Language Models Back to the Training Data"☆39Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting☆33Updated 3 years ago
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- This is the official implementation for our ACL 2024 paper: "Causal Estimation of Memorisation Profiles". ☆23 · Updated 7 months ago
- Codebase for running (conditional) probing experiments ☆21 · Updated 2 years ago
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆57 · Updated 2 years ago
- ReCross: Unsupervised Cross-Task Generalization via Retrieval Augmentation ☆24 · Updated 3 years ago
- Few-shot Learning with Auxiliary Data ☆31 · Updated last year
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆34 · Updated 3 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆78 · Updated 2 years ago
- [EMNLP 2022] Language Model Pre-Training with Sparse Latent Typing ☆14 · Updated 2 years ago
- Suite of 500 procedurally-generated NLP tasks to study language model adaptability ☆21 · Updated 3 years ago
- ☆13 · Updated 2 years ago
- Rationales for Sequential Predictions ☆40 · Updated 3 years ago
- Code for the paper "Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers" ☆17 · Updated 4 years ago
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- Measuring if attention is explanation with ROAR ☆22 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- [ACL 2023] Training Trajectories of Language Models Across Scales https://arxiv.org/pdf/2212.09803.pdf ☆25 · Updated last year
- Exploring Few-Shot Adaptation of Language Models with Tables ☆24 · Updated 3 years ago
- Repo for ICML 2023 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- This repository accompanies our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ☆85 · Updated 3 years ago
- [ICLR 2022] Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators ☆25 · Updated 2 years ago
- [ICLR 2023] PyTorch code of Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees ☆24 · Updated 2 years ago
- [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated 2 years ago
- Code associated with the paper: "Few-Shot Self-Rationalization with Natural Language Prompts" ☆13 · Updated 3 years ago
- Code for co-training large language models (e.g. T0) with smaller ones (e.g. BERT) to boost few-shot performance ☆17 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- [EMNLP 2022] BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation ☆41 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago