elephantmipt / bert-distillation
Distillation of BERT model with catalyst framework
☆78 · Updated 2 years ago
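For context, BERT distillation pipelines like this one typically minimize a blend of a temperature-scaled KL divergence against the teacher's logits and ordinary cross-entropy on the hard labels. The sketch below is an assumption based on the standard Hinton-style formulation, not code taken from this repository; the function name and default hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard knowledge-distillation objective (hypothetical sketch)."""
    # Soft targets: KL(teacher || student) at temperature T, scaled by T^2
    # so its gradient magnitude matches the hard-label term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# When student and teacher logits coincide, the KL term vanishes and the
# loss reduces to (1 - alpha) * cross-entropy.
logits = torch.tensor([[2.0, 0.5, -1.0]])
labels = torch.tensor([0])
loss = distillation_loss(logits, logits.clone(), labels)
```

The `T * T` factor follows the common convention of rescaling soft-target gradients; frameworks differ on whether `alpha` weights the soft or the hard term.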
Alternatives and similar repositories for bert-distillation
Users interested in bert-distillation are comparing it to the libraries listed below.
- A small library with distillation, quantization, and pruning pipelines ☆26 · Updated 4 years ago
- PyTorch library for end-to-end transformer model training, inference, and serving ☆70 · Updated 4 months ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆137 · Updated 2 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis ☆147 · Updated 4 years ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0 ☆105 · Updated 3 years ago
- A deep learning library based on PyTorch focused on low-resource language research and robustness ☆70 · Updated 3 years ago
- LM Pretraining with PyTorch/TPU ☆136 · Updated 5 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- Repository with illustrations for cft-contest-2018 ☆12 · Updated 6 years ago
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ ☆84 · Updated 10 months ago
- Reference PyTorch code for intent classification ☆45 · Updated 10 months ago
- (re)Implementation of Learning Multi-level Dependencies for Robust Word Recognition ☆17 · Updated last year
- ☆21 · Updated 4 years ago
- ML Reproducibility Challenge 2020: ELECTRA reimplementation using PyTorch and Transformers ☆12 · Updated 4 years ago
- Code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- Dual Encoders for State-of-the-art Natural Language Processing ☆61 · Updated 3 years ago
- Code for "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Representations" ☆64 · Updated 5 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny, efficient models for AI at scale ☆156 · Updated last year
- Factorization of the neural parameter space for zero-shot multilingual and multi-task transfer ☆39 · Updated 4 years ago
- http://nlp.seas.harvard.edu/2018/04/03/attention.html ☆62 · Updated 4 years ago
- Generate BERT vocabularies and pretraining examples from Wikipedias ☆17 · Updated 5 years ago
- Viewer for the 🤗 datasets library ☆85 · Updated 4 years ago
- PyTorch code for the EMNLP 2020 paper "Embedding Words in Non-Vector Space with Unsupervised Graph Learning" ☆41 · Updated 4 years ago
- ☆87 · Updated 3 years ago
- Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding (AAAI 2020) - PyTorch implementation ☆32 · Updated 2 years ago
- Fine-tune transformers with pytorch-lightning ☆44 · Updated 3 years ago
- Stacked Denoising BERT for Noisy Text Classification (Neural Networks 2020) ☆32 · Updated 2 years ago
- A library for conducting ranking experiments with transformers ☆160 · Updated 2 years ago
- Unofficial implementation of QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition ☆65 · Updated 2 years ago
- PyTorch implementation of "An Unsupervised Neural Attention Model for Aspect Extraction" by He et al., ACL 2017 ☆66 · Updated 3 years ago