yang-zhang / lightning-language-modeling
Language Modeling Example with Transformers and PyTorch Lightning
☆65 · Updated 4 years ago
Alternatives and similar repositories for lightning-language-modeling:
Users interested in lightning-language-modeling are comparing it to the libraries listed below.
- Shared code for training sentence embeddings with Flax / JAX ☆27 · Updated 3 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆224 · Updated last year
- Fine-tune transformers with pytorch-lightning ☆44 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆135 · Updated last year
- LM Pretraining with PyTorch/TPU ☆134 · Updated 5 years ago
- A library to conduct ranking experiments with transformers. ☆161 · Updated last year
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆146 · Updated 3 years ago
- State of the art Semantic Sentence Embeddings ☆99 · Updated 2 years ago
- Hyperparameter Search for AllenNLP ☆137 · Updated 3 weeks ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆201 · Updated 2 years ago
- A small repo showing how to easily use BERT (or other transformers) for inference ☆99 · Updated 5 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆75 · Updated 4 years ago
- Distillation of BERT model with catalyst framework ☆76 · Updated last year
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆116 · Updated 3 years ago
- ☆68 · Updated 3 years ago
- Fine-tuned Transformers compatible BERT models for Sequence Tagging ☆40 · Updated 4 years ago
- Anserini notebooks ☆69 · Updated last year
- Code and datasets for the EMNLP 2020 paper "Calibration of Pre-trained Transformers" ☆59 · Updated last year
- Code for EMNLP 2019 paper "Attention is not not Explanation" ☆58 · Updated 3 years ago
- ☆75 · Updated 3 years ago
- Repository collecting resources and best practices to improve experimental rigour in deep learning research. ☆27 · Updated last year
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆96 · Updated 2 years ago
- ☆20 · Updated 4 years ago
- Checkout the new version at the link! ☆22 · Updated 4 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any hugging face text dataset. ☆93 · Updated 2 years ago
- QED: A Framework and Dataset for Explanations in Question Answering ☆116 · Updated 3 years ago
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training ☆154 · Updated 2 years ago
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ ☆82 · Updated 4 months ago
- ☆21 · Updated 3 years ago