yang-zhang / lightning-language-modeling
Language Modeling Example with Transformers and PyTorch Lightning
☆65 · Updated 4 years ago
Alternatives and similar repositories for lightning-language-modeling
Users interested in lightning-language-modeling are comparing it to the libraries listed below.
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆136 · Updated last year
- A library to conduct ranking experiments with transformers. ☆161 · Updated last year
- Fine-tune transformers with pytorch-lightning ☆44 · Updated 3 years ago
- State of the art Semantic Sentence Embeddings ☆99 · Updated 2 years ago
- This repository contains the code for "Generating Datasets with Pretrained Language Models". ☆188 · Updated 3 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆203 · Updated 2 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆226 · Updated last year
- LM Pretraining with PyTorch/TPU ☆134 · Updated 5 years ago
- ☆76 · Updated 3 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 3 years ago
- ☆21 · Updated 3 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆117 · Updated 3 years ago
- Fine-tuned Transformers compatible BERT models for Sequence Tagging ☆40 · Updated 4 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in Pytorch ☆76 · Updated 4 years ago
- Hyperparameter Search for AllenNLP ☆139 · Updated 2 months ago
- Shared code for training sentence embeddings with Flax / JAX ☆27 · Updated 3 years ago
- A BART version of an open-domain QA model in a closed-book setup ☆119 · Updated 4 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆154 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any hugging face text dataset. ☆93 · Updated 2 years ago
- Checkout the new version at the link! ☆22 · Updated 4 years ago
- Introduction to the recently released T5 model from the paper - Exploring the Limits of Transfer Learning with a Unified Text-to-Text Tra… ☆35 · Updated 4 years ago
- Repository collecting resources and best practices to improve experimental rigour in deep learning research. ☆27 · Updated 2 years ago
- Anserini notebooks ☆69 · Updated 2 years ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆26 · Updated 2 years ago
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training ☆154 · Updated 2 years ago
- ☆68 · Updated 2 weeks ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ☆121 · Updated 4 years ago
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling ☆31 · Updated 3 years ago
- Viewer for the 🤗 datasets library. ☆84 · Updated 3 years ago