radi-cho / RSTOD
Auxiliary tasks for task-oriented dialogue systems. Published in ICNLSP'22 and indexed in the ACL Anthology.
☆17 · Updated last year
Related projects
Alternatives and complementary repositories for RSTOD
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Embedding Recycling for Language Models ☆38 · Updated last year
- ☆97 · Updated 2 years ago
- ☆16 · Updated last year
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆34 · Updated 7 months ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- Using short models to classify long texts ☆20 · Updated last year
- ☆25 · Updated last year
- ☆23 · Updated 2 months ago
- ☆46 · Updated last month
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" ☆28 · Updated 2 years ago
- ☆13 · Updated 11 months ago
- ☆20 · Updated 3 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆23 · Updated 6 months ago
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last year
- Code & data for the EMNLP 2020 paper "MOCHA: A Dataset for Training and Evaluating Reading Comprehension Metrics". ☆16 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆92 · Updated last year
- Few-shot Learning with Auxiliary Data ☆26 · Updated 11 months ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated last year
- Open-source library for few-shot NLP ☆77 · Updated last year
- Consists of the largest (10K) human-annotated code-switched semantic parsing dataset & 170K generated utterances using the CST5 augmentati… ☆33 · Updated last year
- ☆13 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆44 · Updated 11 months ago
- RATransformers 🐭 - Make your transformer (like BERT, RoBERTa, GPT-2, and T5) Relation Aware! ☆41 · Updated last year
- ☆46 · Updated 2 years ago
- Apps built using Inspired Cognition's Critique. ☆58 · Updated last year
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆75 · Updated 3 years ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ☆17 · Updated 2 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆55 · Updated 5 months ago