arumaekawa / text-dataset-distillation
☆12 · Updated 3 years ago
Alternatives and similar repositories for text-dataset-distillation
Users interested in text-dataset-distillation are comparing it to the repositories listed below.
- [Findings of EMNLP22] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- ☆10 · Updated 4 months ago
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- Dataset for Unified Editing, EMNLP 2023. This is a model editing dataset where edits are natural language phrases. ☆23 · Updated 9 months ago
- ☆27 · Updated 2 years ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆31 · Updated last year
- ☆14 · Updated last year
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- Official Code Repository for [AutoScale–Automatic Prediction of Compute-optimal Data Compositions for Training LLMs] ☆12 · Updated 4 months ago
- Code for the ICLR'22 paper: On Robust Prefix-Tuning for Text Classification ☆27 · Updated 3 years ago
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated 10 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆64 · Updated last year
- [NAACL 2022] "Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training", Yuanxin Liu, Fandong Meng, Zheng Lin, Pe… ☆15 · Updated 2 years ago
- CLUENER2020 Chinese NER task ☆11 · Updated 3 years ago
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 11 months ago
- Code for "Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning" (EMNLP 2022) and "Empowering Parameter-Efficient Transfer Learning… ☆12 · Updated 2 years ago
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 2 years ago
- Repo for outstanding paper @ ACL 2023 "Do PLMs Know and Understand Ontological Knowledge?" ☆31 · Updated last year
- ☆10 · Updated 3 years ago
- EMNLP'2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation ☆41 · Updated 2 years ago
- Code for ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated last year
- MATCH-TUNING ☆15 · Updated 2 years ago
- Active Example Selection for In-Context Learning (EMNLP'22) ☆49 · Updated 10 months ago
- ☆16 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆33 · Updated 9 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 2 years ago
- Code for CascadeBERT, Findings of EMNLP 2021 ☆12 · Updated 3 years ago