Tikquuss / meta_XLM
Cross-lingual Language Model (XLM) pretraining and Model-Agnostic Meta-Learning (MAML) for fast adaptation of deep networks
☆20 · Updated 4 years ago
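The repo combines XLM pretraining with MAML-style meta-learning for fast cross-lingual adaptation. As a rough illustration of the meta-learning half only, here is a first-order MAML (FOMAML) sketch on a toy linear-regression task family — the model, learning rates, and task construction are all illustrative assumptions, not taken from the repo:

```python
import numpy as np

def loss_grad(w, x, y):
    # Gradient of the mean-squared error for the linear model y_hat = w * x.
    return np.mean(2 * (w * x - y) * x)

def fomaml_step(w, tasks, inner_lr=0.01, meta_lr=0.1):
    """One first-order MAML meta-update over a batch of tasks.

    Each task is ((x_support, y_support), (x_query, y_query)).
    First-order MAML drops the second-derivative term of full MAML and
    simply takes the query-set gradient at the adapted parameters.
    """
    meta_grad = 0.0
    for (xs, ys), (xq, yq) in tasks:
        # Inner loop: one gradient step of task-specific adaptation.
        w_adapted = w - inner_lr * loss_grad(w, xs, ys)
        # Outer loop: accumulate the query-set gradient at the adapted point.
        meta_grad += loss_grad(w_adapted, xq, yq)
    return w - meta_lr * meta_grad / len(tasks)

# Toy task family: each task is regression toward a different slope.
rng = np.random.default_rng(0)
tasks = []
for slope in (1.5, 2.0, 2.5):
    x = rng.normal(size=20)
    tasks.append(((x[:10], slope * x[:10]), (x[10:], slope * x[10:])))

w = 0.0
for _ in range(200):
    w = fomaml_step(w, tasks)
print(f"meta-learned init w = {w:.2f}")
```

The meta-learned `w` settles between the task slopes, i.e. an initialization from which one inner gradient step moves quickly toward any individual task — the same "fast adaptation" idea the repo applies to cross-lingual language models.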
Alternatives and similar repositories for meta_XLM
Users that are interested in meta_XLM are comparing it to the libraries listed below
- Benchmarks for evaluating MT models ☆12 · Updated last year
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆74 · Updated last year
- Statistics on multilingual datasets ☆17 · Updated 3 years ago
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers ☆49 · Updated 2 years ago
- Code for the ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 3 years ago
- Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models" ☆32 · Updated 2 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆50 · Updated 3 years ago
- This repository hosts my experiments for the project I did with OffNote Labs ☆10 · Updated 4 years ago
- Code base for the EMNLP 2021 Findings paper "Cartography Active Learning" ☆14 · Updated 7 months ago
- This repo contains a set of neural transducers (e.g., sequence-to-sequence models) focusing on character-level tasks ☆76 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- Source code for "Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction" ☆43 · Updated 4 years ago
- NTREX -- News Test References for MT Evaluation ☆86 · Updated last year
- ☆44 · Updated 5 years ago
- A benchmark for code-switched NLP, ACL 2020 ☆76 · Updated last year
- Ranking of fine-tuned HF models as base models ☆36 · Updated 3 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆96 · Updated 2 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" ☆48 · Updated 3 years ago
- ☆52 · Updated 2 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings ☆75 · Updated last year
- Implementation of the GBST block from the Charformer paper, in PyTorch ☆118 · Updated 4 years ago
- PyTorch implementation of SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models ☆62 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ☆48 · Updated 4 years ago
- Repository for the Multilingual-VQA task created during the HuggingFace JAX/Flax community week ☆34 · Updated 4 years ago
- Official code for LEWIS, from "LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer", ACL-IJCNLP 2021 Findings, by Machel Rei… ☆31 · Updated 3 years ago
- A simple recipe for training and inference with a Transformer architecture for Multi-Task Learning on custom datasets. You can find two approa… ☆99 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- Python-based implementation of the Translate-Align-Retrieve method to automatically translate the SQuAD dataset to Spanish ☆59 · Updated 3 years ago
- Code for pre-training CharacterBERT models (as well as BERT models) ☆34 · Updated 4 years ago