ltgoslo / gpt-bert
Official implementation of "GPT or BERT: why not both?"
☆52 · Updated last month

Alternatives and similar repositories for gpt-bert:
Users interested in gpt-bert are comparing it to the repositories listed below.
- LTG-Bert ☆32 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆99 · Updated 3 weeks ago
- Evaluation pipeline for the BabyLM Challenge 2023. ☆75 · Updated last year
- ☆44 · Updated last month
- ☆27 · Updated last year
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆57 · Updated 10 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- Code for Zero-Shot Tokenizer Transfer ☆127 · Updated 3 months ago
- ☆11 · Updated 4 months ago
- Official implementation of "BERTs are Generative In-Context Learners" ☆26 · Updated last month
- [EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" ☆30 · Updated 5 months ago
- ☆97 · Updated 2 years ago
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆30 · Updated 9 months ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆80 · Updated 7 months ago
- Are foundation LMs multilingual knowledge bases? (EMNLP 2023) ☆19 · Updated last year
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆119 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆60 · Updated last year
- ☆47 · Updated 7 months ago
- A truly flash implementation of T5! ☆64 · Updated 10 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆90 · Updated 8 months ago
- BLOOM+1: Adapting BLOOM model to support a new unseen language ☆71 · Updated last year
- Code and data for the paper "Turning English-centric LLMs Into Polyglots: How Much Multilinguality Is Needed?" ☆25 · Updated 3 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆26 · Updated 11 months ago
- ☆44 · Updated 2 months ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆57 · Updated 8 months ago
- A repository containing the code for translating popular LLM benchmarks to German. ☆25 · Updated last year
- Supercharge Hugging Face transformers with model parallelism. ☆76 · Updated 6 months ago
- ☆16 · Updated 10 months ago