frankaging / Causal-Distill
The Codebase for Causal Distillation for Language Models (NAACL '22)
☆25 · Updated 3 years ago
Alternatives and similar repositories for Causal-Distill
Users interested in Causal-Distill are comparing it to the libraries listed below.
- This is the official implementation for our ACL 2024 paper: "Causal Estimation of Memorisation Profiles" ☆23 · Updated 8 months ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 2 years ago
- [EMNLP 2022] Language Model Pre-Training with Sparse Latent Typing ☆14 · Updated 2 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆78 · Updated 2 years ago
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- ReCross: Unsupervised Cross-Task Generalization via Retrieval Augmentation ☆24 · Updated 3 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆34 · Updated 4 years ago
- Few-shot Learning with Auxiliary Data ☆31 · Updated last year
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆57 · Updated 2 years ago
- Code release for "TempLM: Distilling Language Models into Template-Based Generators" ☆14 · Updated 3 years ago
- Codebase for running (conditional) probing experiments ☆21 · Updated 3 years ago
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 3 years ago
- ☆38 · Updated 3 years ago
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago
- ☆23 · Updated 3 years ago
- [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning ☆20 · Updated 2 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- Suite of 500 procedurally-generated NLP tasks to study language model adaptability ☆21 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- This repository accompanies our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ☆85 · Updated 3 years ago
- ☆13 · Updated 2 years ago
- Code associated with the paper: "Few-Shot Self-Rationalization with Natural Language Prompts" ☆13 · Updated 3 years ago
- Code for co-training large language models (e.g. T0) with smaller ones (e.g. BERT) to boost few-shot performance ☆17 · Updated 3 years ago
- Model zoo for different kinds of uncertainty quantification methods used in Natural Language Processing, implemented in PyTorch. ☆53 · Updated 2 years ago
- Rationales for Sequential Predictions ☆40 · Updated 3 years ago
- Pretraining summarization models using a corpus of nonsense ☆13 · Updated 4 years ago
- Code for the paper "REV: Information-Theoretic Evaluation of Free-Text Rationales" ☆16 · Updated 2 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated 2 years ago
- PyTorch code for the RetoMaton paper: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022) ☆74 · Updated 3 years ago