mingdachen / TVRecap
TVRecap: A Dataset for Generating Stories with Character Descriptions
☆20 · Updated last year
Alternatives and similar repositories for TVRecap:
Users interested in TVRecap are comparing it to the repositories listed below.
- Helper scripts and notes that were used while porting various NLP models ☆45 · Updated 3 years ago
- ☆44 · Updated 4 months ago
- Code for the paper "Mirostat: A Perplexity-Controlled Neural Text Decoding Algorithm" (https://arxiv.org/abs/2007.14966) ☆58 · Updated 3 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆31 · Updated last year
- Source code for the GPT-2 story generation models in the EMNLP 2020 paper "STORIUM: A Dataset and Evaluation Platform for Human-in-the-Lo… ☆39 · Updated last year
- Code for Stage-wise Fine-tuning for Graph-to-Text Generation ☆26 · Updated 2 years ago
- Embedding Recycling for Language Models ☆38 · Updated last year
- Transformers at any scale ☆41 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆47 · Updated last year
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆30 · Updated 9 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models, EMNLP 2021 main conference", a high-qual… ☆46 · Updated 3 months ago
- One-stop shop for all things CARP ☆59 · Updated 2 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ☆17 · Updated 3 years ago
- Code for the SaGe subword tokenizer (EACL 2023) ☆24 · Updated 3 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 9 months ago
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆30 · Updated this week
- A Python library for highly configurable transformers, easing model architecture search and experimentation ☆49 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 2 years ago
- ☆23 · Updated 6 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆18 · Updated last month
- Can LLMs generate code-mixed sentences through zero-shot prompting? ☆11 · Updated last year
- Large-scale query-focused multi-document summarization dataset ☆10 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆93 · Updated 2 years ago