ltgoslo / gpt-bert
Official implementation of "GPT or BERT: why not both?"
☆62 · Updated 3 months ago
Alternatives and similar repositories for gpt-bert
Users interested in gpt-bert are comparing it to the repositories listed below.
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆110 · Updated 3 weeks ago
- Code for Zero-Shot Tokenizer Transfer ☆141 · Updated 10 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆60 · Updated last year
- LTG-Bert ☆34 · Updated last year
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆85 · Updated last year
- Efficient Transformers with Dynamic Token Pooling ☆64 · Updated 2 years ago
- [EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" ☆34 · Updated 5 months ago
- State-of-the-art paired encoder and decoder models (17M-1B params) ☆53 · Updated 3 months ago
- ☆92 · Updated 5 months ago
- ☆55 · Updated 9 months ago
- Simple-to-use scoring function for arbitrarily tokenized texts. ☆47 · Updated 9 months ago
- ☆27 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆32 · Updated 9 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆49 · Updated 4 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆116 · Updated 2 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆28 · Updated last year
- Official code release for "SuperBPE: Space Travel for Language Models" ☆75 · Updated 2 weeks ago
- A truly flash implementation of T5! ☆70 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆66 · Updated last year
- ☆52 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆83 · Updated last year
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆74 · Updated last year
- Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models." ☆51 · Updated last month
- ☆101 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆57 · Updated 3 years ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆66 · Updated last month
- Vocabulary Trimming (VT) is a model compression technique that reduces a multilingual LM vocabulary to a target language by deleting irrelevant tokens. ☆58 · Updated last year
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated 2 years ago