Kvasirs / MILES
MILES is a multilingual text simplifier inspired by LSBert, a BERT-based lexical simplification approach. Unlike LSBert, MILES uses the bert-base-multilingual-uncased model together with simple, language-agnostic approaches to complex word identification (CWI) and candidate ranking.
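As a rough illustration of the LSBert-style idea behind MILES (this is a minimal sketch, not the project's actual code), the snippet below masks a complex word and asks bert-base-multilingual-uncased for in-context replacements via Hugging Face's fill-mask pipeline; the filtering at the end is only a crude stand-in for MILES' language-agnostic CWI and ranking steps.

```python
# Hedged sketch of LSBert-style substitute generation (not the MILES code itself):
# mask the complex word and let a multilingual masked LM propose replacements.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-uncased")

def substitution_candidates(sentence: str, complex_word: str, top_k: int = 10):
    """Return masked-LM substitution candidates for `complex_word` in `sentence`."""
    masked = sentence.replace(complex_word, fill_mask.tokenizer.mask_token, 1)
    predictions = fill_mask(masked, top_k=top_k)
    # Keep the model's own ranking and drop trivial copies of the original word;
    # MILES' language-agnostic CWI and ranking are not reproduced here.
    return [p["token_str"] for p in predictions
            if p["token_str"].lower() != complex_word.lower()]

print(substitution_candidates("The committee will scrutinise the proposal.", "scrutinise"))
```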
☆49 · Updated 4 years ago
Alternatives and similar repositories for MILES
Users interested in MILES are comparing it to the libraries listed below.
- Code to reproduce the experiments from the paper. ☆101 · Updated 2 years ago
- A Word Sense Disambiguation system integrating implicit and explicit external knowledge. ☆69 · Updated 4 years ago
- xfspell — the Transformer Spell Checker ☆189 · Updated 5 years ago
- Accelerated NLP pipelines for fast inference on CPU. Built with Transformers and ONNX runtime. ☆127 · Updated 4 years ago
- Question-answers, collected from Google ☆128 · Updated 4 years ago
- QED: A Framework and Dataset for Explanations in Question Answering ☆117 · Updated 4 years ago
- On Generating Extended Summaries of Long Documents ☆78 · Updated 4 years ago
- Code and models used in "MUSS Multilingual Unsupervised Sentence Simplification by Mining Paraphrases". ☆99 · Updated 2 years ago
- ☆75 · Updated 4 years ago
- [EMNLP 2021] LM-Critic: Language Models for Unsupervised Grammatical Error Correction ☆120 · Updated 4 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 4 years ago
- Pipeline component for spaCy (and other spaCy-wrapped parsers such as spacy-stanza and spacy-udpipe) that adds CoNLL-U properties to a Do… ☆82 · Updated last year
- This repository contains datasets and code for the paper "HINT3: Raising the bar for Intent Detection in the Wild" accepted at EMNLP-2020… ☆33 · Updated 4 years ago
- Examples for aligning, padding and batching sequence labeling data (NER) for use with pre-trained transformer models (a minimal label-alignment sketch appears after this list) ☆64 · Updated 2 years ago
- Code for obtaining the Curation Corpus abstractive text summarisation dataset ☆127 · Updated 4 years ago
- Paraphrase any question with T5 (Text-To-Text Transfer Transformer) - Pretrained model and training script provided ☆185 · Updated 2 years ago
- Multilingual abstractive summarization dataset extracted from WikiHow. ☆95 · Updated 6 months ago
- Code for the paper: Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries ☆19 · Updated 3 years ago
- Lexical Simplification with Pretrained Encoders ☆70 · Updated 4 years ago
- DBMDZ BERT, DistilBERT, ELECTRA, GPT-2 and ConvBERT models ☆154 · Updated 2 years ago
- Easier Automatic Sentence Simplification Evaluation ☆161 · Updated 2 years ago
- A single model that parses Universal Dependencies across 75 languages. Given a sentence, jointly predicts part-of-speech tags, morphology… ☆222 · Updated 2 years ago
- A python module for word inflections designed for use with spaCy. ☆93 · Updated 5 years ago
- SummVis is an interactive visualization tool for text summarization. ☆253 · Updated 3 years ago
- This dataset contains synthetic training data for grammatical error correction. The corpus is generated by corrupting clean sentences fro… ☆161 · Updated last year
- Corresponding code repo for the paper at COLING 2020 - ARGMIN 2020: "DebateSum: A large-scale argument mining and summarization dataset" ☆55 · Updated 3 years ago
- Automatic extraction of edited sentences from text edition histories. ☆83 · Updated 3 years ago
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- Tutorial for first-time BERT users. ☆103 · Updated 2 years ago
- Create interactive textual heat maps for Jupyter notebooks ☆196 · Updated last year
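Referenced from the sequence-labeling item above: a hedged sketch of the usual label-alignment step, assuming a Hugging Face fast tokenizer. The model name and toy tag ids are illustrative; the linked repository covers padding and batching in more detail.

```python
# Align word-level NER labels to subword tokens (illustrative example only).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")

words = ["MILES", "simplifies", "Spanish", "text", "."]
labels = [1, 0, 3, 0, 0]  # toy word-level tag ids

encoding = tokenizer(words, is_split_into_words=True, truncation=True)

aligned = []
previous_word_id = None
for word_id in encoding.word_ids():
    if word_id is None:                 # special tokens such as [CLS] and [SEP]
        aligned.append(-100)            # -100 is ignored by PyTorch's cross-entropy loss
    elif word_id != previous_word_id:   # first subword carries the word's label
        aligned.append(labels[word_id])
    else:                               # remaining subwords are masked out
        aligned.append(-100)
    previous_word_id = word_id

print(list(zip(encoding.tokens(), aligned)))
```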