thevasudevgupta / bigbird
Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers
☆49 · Updated 2 years ago
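The repo ports BigBird's block-sparse attention to 🤗 Transformers. As a rough illustration only (not the repo's actual implementation, and ignoring the blocking used in practice), BigBird lets each query attend to a local sliding window, a few global tokens, and a few random tokens. A minimal sketch of the resulting sparse attention mask:

```python
import random

def bigbird_attention_mask(seq_len, window=2, num_global=1, num_random=1, seed=0):
    """Toy BigBird-style sparse attention mask (True = attend).

    Combines three patterns per query position i:
      - a sliding window of +/- `window` positions,
      - `num_global` global tokens that attend to and are attended by all,
      - `num_random` randomly chosen key positions.
    """
    rng = random.Random(seed)
    mask = [[False] * seq_len for _ in range(seq_len)]
    for i in range(seq_len):
        # Local sliding-window attention around position i.
        for j in range(max(0, i - window), min(seq_len, i + window + 1)):
            mask[i][j] = True
        # Global tokens: bidirectional full attention.
        for g in range(num_global):
            mask[i][g] = True
            mask[g][i] = True
        # A few random connections per query.
        for j in rng.sample(range(seq_len), num_random):
            mask[i][j] = True
    return mask

mask = bigbird_attention_mask(16, window=2, num_global=2, num_random=2)
density = sum(map(sum, mask)) / 16 ** 2  # fraction of attended pairs
```

The point of the pattern is that the number of attended pairs grows linearly with sequence length rather than quadratically, which is what lets BigBird handle long inputs.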
Alternatives and similar repositories for bigbird
Users interested in bigbird are comparing it to the libraries listed below.
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch · ☆76 · Updated 4 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" · ☆49 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://ar…) · ☆136 · Updated 2 years ago
- ☆75 · Updated 4 years ago
- QED: A Framework and Dataset for Explanations in Question Answering · ☆117 · Updated 4 years ago
- This repository accompanies our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" · ☆85 · Updated 3 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) · ☆48 · Updated 4 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale · ☆155 · Updated last year
- ☆68 · Updated 3 months ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". · ☆188 · Updated 3 years ago
- Dataset from the paper "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering" (COLING 2022) · ☆114 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. · ☆93 · Updated 2 years ago
- State-of-the-art semantic sentence embeddings · ☆99 · Updated 3 years ago
- On Generating Extended Summaries of Long Documents · ☆78 · Updated 4 years ago
- PyTorch implementation of SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models · ☆63 · Updated 3 years ago
- ☆59 · Updated 4 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines · ☆137 · Updated last year
- [NAACL 2021] This is the code for our paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self…" · ☆203 · Updated 2 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a…) · ☆46 · Updated 3 years ago
- Code and Data for Evaluation WG · ☆42 · Updated 3 years ago
- Code for the paper "True Few-Shot Learning in Language Models" (https://arxiv.org/abs/2105.11447) · ☆145 · Updated 3 years ago
- Few-shot NLP benchmark for unified, rigorous eval · ☆91 · Updated 3 years ago
- Question-answers, collected from Google · ☆129 · Updated 4 years ago
- Code base for the EMNLP 2021 Findings paper: Cartography Active Learning · ☆14 · Updated 2 months ago
- The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models, EMNLP 2021 main conference", a high-qual… · ☆47 · Updated 8 months ago
- A diff tool for language models · ☆43 · Updated last year
- Using a business-level retrieval system (BM25) with Python in just a few lines · ☆31 · Updated 2 years ago
- An instruction-based benchmark for text improvements · ☆141 · Updated 2 years ago
- ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html || A central repository collecting pre-trained adapter modules · ☆68 · Updated last year
- A categorical archive of ChatGPT failures · ☆64 · Updated 2 years ago