Tikquuss / meta_XLM
Cross-lingual Language Model (XLM) pretraining and Model-Agnostic Meta-Learning (MAML) for fast adaptation of deep networks
☆20 · Updated 3 years ago
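The description above pairs XLM pretraining with MAML-style meta-learning for fast adaptation. As a rough illustration of the MAML idea only (not code from this repository), the sketch below adapts a copy of the weights on a per-task support set in an inner loop and meta-updates the original weights on a query set; the toy regression model, the `make_task` generator, and hyperparameter names are assumptions, and `torch.func.functional_call` requires PyTorch 2.x.

```python
# Illustrative MAML sketch only -- not the meta_XLM code. A toy regression
# model stands in for the XLM encoder; each synthetic "task" plays the role
# of a low-resource language. make_task, inner_lr, etc. are assumptions.
import torch
import torch.nn as nn

def make_task(n_points=16):
    # Synthetic task: y = a*x + b with task-specific a, b.
    a, b = torch.randn(1), torch.randn(1)
    x = torch.randn(n_points, 1)
    return x, a * x + b

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr, inner_steps, tasks_per_batch = 0.01, 1, 4
param_names = [name for name, _ in model.named_parameters()]

for meta_step in range(100):
    meta_opt.zero_grad()
    for _ in range(tasks_per_batch):
        x_s, y_s = make_task()  # support set: inner-loop adaptation
        x_q, y_q = make_task()  # query set: outer-loop (meta) objective

        # Inner loop: gradient steps on a differentiable copy of the weights.
        fast = [p.clone() for p in model.parameters()]
        for _ in range(inner_steps):
            pred = torch.func.functional_call(
                model, dict(zip(param_names, fast)), (x_s,))
            grads = torch.autograd.grad(
                loss_fn(pred, y_s), fast, create_graph=True)  # False -> first-order MAML
            fast = [w - inner_lr * g for w, g in zip(fast, grads)]

        # Outer loop: query-set loss of the adapted weights; backward()
        # accumulates meta-gradients into the original parameters.
        pred_q = torch.func.functional_call(
            model, dict(zip(param_names, fast)), (x_q,))
        (loss_fn(pred_q, y_q) / tasks_per_batch).backward()
    meta_opt.step()
```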
Alternatives and similar repositories for meta_XLM:
Users interested in meta_XLM are comparing it to the libraries listed below
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 2 years ago
- Statistics on multilingual datasets ☆17 · Updated 2 years ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆70 · Updated 11 months ago
- Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models" ☆32 · Updated last year
- This repository hosts my experiments for the project I did with OffNote Labs. ☆10 · Updated 3 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆47 · Updated 2 years ago
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021) ☆52 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- Data and code for our paper "Exploring and Predicting Transferability across NLP Tasks", to appear at EMNLP 2020. ☆49 · Updated 3 years ago
- Meta Representation Transformation for Low-resource Cross-lingual Learning ☆40 · Updated 3 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ☆15 · Updated 3 years ago
- LAReQA is a challenging benchmark for evaluating language agnostic answer retrieval from a multilingual candidate pool. This repository c… ☆14 · Updated 4 years ago
- Code for HypMix EMNLP 2021 (main) ☆24 · Updated 3 years ago
- Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021 ☆34 · Updated 3 years ago
- Official PyTorch implementation of "Roles and Utilization of Attention Heads in Transformer-based Neural Language Models" (ACL 2020) ☆16 · Updated 2 years ago
- Additional resources from our AACL tutorial ☆10 · Updated last year
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" ☆47 · Updated 2 years ago
- Official code for the "Towards Transparent and Explainable Attention Models" paper (ACL 2020) ☆35 · Updated 2 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper☆52Updated last year
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings.☆71Updated 6 months ago
- ☆29Updated 2 years ago
- Codebase for probing and visualizing multilingual models.☆47Updated 4 years ago
- Code base for the EMNLP 2021 Findings paper: Cartography Active Learning☆14Updated last year
- Code for EMNLP 2021 paper: "Is Everything in Order? A Simple Way to Order Sentences"☆41Updated last year
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers☆48Updated last year
- This is a niche collection of research papers which are proven to be gradients pushing the field of Natural Language Processing, Deep Lea…☆25Updated 3 months ago
- ☆10Updated 4 years ago
- Ranking of fine-tuned HF models as base models.☆35Updated last year
- A repository containing the details of a natural language inference dataset in Hindi ☆11 · Updated 4 years ago
- Multi-task modelling extensions for huggingface transformers ☆20 · Updated last year