IntelLabs / academic-budget-bert
Repository containing code for "How to Train BERT with an Academic Budget" paper
☆313 · Updated last year
Alternatives and similar repositories for academic-budget-bert:
Users interested in academic-budget-bert are comparing it to the libraries listed below.
- Interpretable Evaluation for AI Systems ☆364 · Updated 2 years ago
- Search Engines with Autoregressive Language Models ☆284 · Updated 2 years ago
- ☆318 · Updated 3 years ago
- Understanding the Difficulty of Training Transformers ☆329 · Updated 2 years ago
- Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them" ☆202 · Updated 3 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆328 · Updated last year
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ☆781 · Updated 11 months ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆154 · Updated last year
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆433 · Updated 2 years ago
- UnifiedQA: Crossing Format Boundaries With a Single QA System ☆433 · Updated 2 years ago
- Efficient, check-pointed data loading for deep learning with massive data sets. ☆206 · Updated last year
- Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings" ☆293 · Updated 2 years ago
- SentAugment is a data augmentation technique for NLP that retrieves similar sentences from a large bank of sentences. It can be used in c… ☆362 · Updated 3 years ago
- Adversarial Natural Language Inference Benchmark ☆393 · Updated 2 years ago
- The corresponding code from our paper "DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations". Do not hesitate to o… ☆380 · Updated 2 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆136 · Updated last year
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆136 · Updated last year
- ICML 2022: NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework ☆258 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆464 · Updated 2 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆609 · Updated 2 years ago
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training ☆154 · Updated 2 years ago
- [ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP 2021: Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.o… ☆604 · Updated 2 years ago
- New dataset ☆304 · Updated 3 years ago
- FastFormers - highly efficient transformer models for NLU ☆706 · Updated last month
- Hyperparameter Search for AllenNLP ☆139 · Updated last month
- ☆182 · Updated last year
- Prune a model while finetuning or training. ☆402 · Updated 2 years ago
- Research code for pixel-based encoders of language (PIXEL) ☆335 · Updated last year
- ☆291 · Updated 2 years ago