ltgoslo / gpt-bert
Official implementation of "GPT or BERT: why not both?"
☆58 · Updated last month
Alternatives and similar repositories for gpt-bert
Users interested in gpt-bert are comparing it to the repositories listed below.
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 · Updated 6 months ago
- LTG-Bert ☆33 · Updated last year
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆59 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆137 · Updated 8 months ago
- ☆83 · Updated 3 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆46 · Updated 2 months ago
- Official code release for "SuperBPE: Space Travel for Language Models" ☆65 · Updated 2 months ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆84 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆109 · Updated 2 weeks ago
- ☆51 · Updated 7 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- State-of-the-art paired encoder and decoder models (17M-1B params) ☆45 · Updated last month
- [EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" ☆34 · Updated 3 months ago
- A truly flash implementation of the DeBERTa disentangled attention mechanism. ☆63 · Updated 2 weeks ago
- Efficient Transformers with Dynamic Token Pooling ☆63 · Updated 2 years ago
- Simple-to-use scoring function for arbitrarily tokenized texts. ☆46 · Updated 7 months ago
- ☆48 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆29 · Updated 7 months ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆73 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- ☆28 · Updated last year
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages -- ACL 2023 ☆104 · Updated last year
- A truly flash T5 implementation! ☆70 · Updated last year
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- ☆66 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated 2 years ago