ltgoslo / gpt-bert
Official implementation of "GPT or BERT: why not both?"
☆52 · Updated last month
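The tagline above names a hybrid objective: one transformer trained as both a causal (GPT-style) and a masked (BERT-style) language model. Below is a minimal, illustrative PyTorch sketch of summing the two losses on a shared model; the `hybrid_lm_loss` helper, the `model(ids) -> logits` call signature, and the masking scheme are assumptions for illustration, not the repository's actual implementation (the paper unifies the objectives more tightly, via masked next-token prediction through a shared output head).

```python
# Hypothetical sketch of a combined masked + causal LM loss, loosely inspired by
# "GPT or BERT: why not both?". Not the repository's actual code.
import torch
import torch.nn.functional as F

def hybrid_lm_loss(model, input_ids, mask_token_id, pad_token_id, mask_prob=0.15):
    """Sum a causal LM loss and a masked LM loss over one batch.

    Assumes `model(ids)` returns logits of shape (batch, seq, vocab).
    The real method also switches attention masks per objective
    (causal vs. bidirectional), which this sketch glosses over.
    """
    # Causal branch: predict token t+1 from tokens <= t.
    clm_logits = model(input_ids)
    clm_loss = F.cross_entropy(
        clm_logits[:, :-1].reshape(-1, clm_logits.size(-1)),
        input_ids[:, 1:].reshape(-1),
        ignore_index=pad_token_id,
    )

    # Masked branch: corrupt a random subset of tokens, predict the originals.
    keep = input_ids != pad_token_id
    mask = (torch.rand_like(input_ids, dtype=torch.float) < mask_prob) & keep
    corrupted = input_ids.masked_fill(mask, mask_token_id)
    mlm_logits = model(corrupted)
    # Score only the masked positions; everything else is ignored via pad id.
    mlm_targets = input_ids.masked_fill(~mask, pad_token_id)
    mlm_loss = F.cross_entropy(
        mlm_logits.reshape(-1, mlm_logits.size(-1)),
        mlm_targets.reshape(-1),
        ignore_index=pad_token_id,
    )

    return clm_loss + mlm_loss
```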
Alternatives and similar repositories for gpt-bert:
Users interested in gpt-bert are comparing it to the repositories listed below.
- ☆28 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆103 · Updated last month
- LTG-Bert ☆32 · Updated last year
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆92 · Updated 9 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆58 · Updated 11 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆26 · Updated last year
- ☆13 · Updated last week
- Official implementation of "BERTs are Generative In-Context Learners" ☆27 · Updated last month
- ☆11 · Updated 4 months ago
- ☆47 · Updated 8 months ago
- ☆49 · Updated 2 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆46 · Updated 3 weeks ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆72 · Updated 6 months ago
- Bi-encoder entity linking architecture ☆44 · Updated 7 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… ☆23 · Updated 2 months ago
- Code for Zero-Shot Tokenizer Transfer ☆127 · Updated 3 months ago
- ☆38 · Updated last year
- ☆80 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆31 · Updated 10 months ago
- ☆45 · Updated 3 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆19 · Updated 3 months ago
- ☆20 · Updated 2 years ago
- Truly flash T5 implementation! ☆64 · Updated 11 months ago
- Official code release for "SuperBPE: Space Travel for Language Models" ☆39 · Updated 3 weeks ago
- Code for the SaGe subword tokenizer (EACL 2023) ☆24 · Updated 5 months ago
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated last year