mingdachen / TVRecap
TVRecap: A Dataset for Generating Stories with Character Descriptions
☆20 · Updated last year
Related projects:
- Source code for the GPT-2 story generation models in the EMNLP 2020 paper "STORIUM: A Dataset and Evaluation Platform for Human-in-the-Lo… ☆38 · Updated 8 months ago
- Code for "Stage-wise Fine-tuning for Graph-to-Text Generation" ☆26 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆33 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆42 · Updated 10 months ago
- Plug-and-play search interfaces with Pyserini and Hugging Face ☆32 · Updated last year
- Open-source library for few-shot NLP ☆78 · Updated last year
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- Code for the paper "Mirostat: A Perplexity-Controlled Neural Text Decoding Algorithm" (https://arxiv.org/abs/2007.14966) ☆56 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated last year
- The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models" (EMNLP 2021 main conference), a high-qual… ☆47 · Updated 11 months ago
- Steering-vector repo from "Extracting Latent Steering Vectors from Pretrained Language Models" (ACL 2022 Findings) ☆9 · Updated 2 years ago
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆20 · Updated 3 months ago
- Large-scale query-focused multi-document summarization dataset ☆11 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆91 · Updated last year
- Apps built using Inspired Cognition's Critique ☆58 · Updated last year
- Ranking of fine-tuned HF models as base models ☆35 · Updated last year
- [ICLR 2023] PyTorch code for "Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees" ☆23 · Updated last year
- Code for "No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval" ☆27 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆22 · Updated 5 months ago
- Few-shot Learning with Auxiliary Data ☆26 · Updated 9 months ago
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆40 · Updated 8 months ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆34 · Updated 6 months ago
- Finding semantically meaningful and accurate prompts ☆45 · Updated 10 months ago
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models ☆74 · Updated 9 months ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago