helboukkouri / character-bert
Main repository for "CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters"
☆201 · Updated last year
Alternatives and similar repositories for character-bert
Users interested in character-bert are comparing it to the repositories listed below.
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self…` ☆204 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆137 · Updated 2 years ago
- Multilingual abstractive summarization dataset extracted from WikiHow. ☆95 · Updated 6 months ago
- The corresponding code from our paper "DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations". Do not hesitate to o… ☆380 · Updated 2 years ago
- State of the art Semantic Sentence Embeddings ☆99 · Updated 3 years ago
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT) ☆135 · Updated last year
- Interpretable Evaluation for (Almost) All NLP Tasks ☆195 · Updated 3 years ago
- ☆75 · Updated 4 years ago
- Collection of NLP model explanations and accompanying analysis tools ☆144 · Updated 2 years ago
- Repository with code for MaChAmp: https://aclanthology.org/2021.eacl-demos.22/ ☆88 · Updated 4 months ago
- Code and models used in "MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases". ☆99 · Updated 2 years ago
- Easier Automatic Sentence Simplification Evaluation ☆161 · Updated last year
- Code to reproduce the experiments from the paper. ☆101 · Updated last year
- ☆345 · Updated 4 years ago
- The official code for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization ☆156 · Updated 2 years ago
- Examples for aligning, padding and batching sequence labeling data (NER) for use with pre-trained transformer models ☆65 · Updated 2 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆121 · Updated 4 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆156 · Updated last year
- Python-based implementation of the Translate-Align-Retrieve method to automatically translate the SQuAD dataset to Spanish. ☆59 · Updated 2 years ago
- Lexical Simplification with Pretrained Encoders ☆70 · Updated 4 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆330 · Updated last year
- This repository contains the code for "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Representations". ☆64 · Updated 5 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 4 years ago
- A repo to explore different NLP tasks which can be solved using T5 ☆172 · Updated 4 years ago
- ☆202 · Updated 3 years ago
- https://arxiv.org/pdf/1909.04054 ☆79 · Updated 2 years ago
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- ☆87 · Updated 3 years ago
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training ☆153 · Updated 3 years ago
- MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance ☆209 · Updated last year