hadarishav / Ruddit
This repo contains the dataset and description for Ruddit and its variants.
☆34 · Updated 3 years ago
Alternatives and similar repositories for Ruddit:
Users interested in Ruddit are comparing it to the repositories listed below.
- PyTorch implementation of SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models ☆61 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- Kaggle Tweet Sentiment Extraction Competition: 1st place solution (Dark of the Moon team) ☆71 · Updated 2 years ago
- Early solution for Google AI4Code competition ☆76 · Updated 2 years ago
- Implementation of Mixout with PyTorch ☆74 · Updated 2 years ago
- 🎖️ 4th place solution in the Feedback Prize Competition 🎖️ ☆73 · Updated 2 years ago
- 1st solution ☆38 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆118 · Updated last year
- ☆116 · Updated 2 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" ☆47 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- EMNLP 2021 Tutorial: Multi-Domain Multilingual Question Answering ☆38 · Updated 3 years ago
- Pre-training BART in Flax on The Pile dataset ☆20 · Updated 3 years ago
- A long-sequence version of the BART model based on the Longformer model ☆23 · Updated last year
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) ☆62 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- Entity extraction using BERT + CRF for single-turn / multi-turn settings in dialogues ☆30 · Updated 3 years ago
- ☆17 · Updated 3 years ago
- ☆42 · Updated 4 years ago
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) ☆52 · Updated 2 years ago
- Efficient Attention for Long Sequence Processing ☆92 · Updated last year
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated last year
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training ☆154 · Updated 2 years ago
- 1. Pretrain ALBERT on a custom corpus; 2. fine-tune the pretrained ALBERT model on a downstream task ☆33 · Updated 4 years ago
- ☆66 · Updated 3 years ago
- Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer (ACL 2021) ☆30 · Updated 2 years ago
- Implementation of Self-adjusting Dice Loss from the "Dice Loss for Data-imbalanced NLP Tasks" paper ☆107 · Updated 4 years ago
- Multilingual abstractive summarization dataset extracted from WikiHow ☆87 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆45 · Updated 4 years ago
- Script to pre-train Hugging Face Transformers BART with TensorFlow 2 ☆33 · Updated last year