ltgoslo / ltg-bert
LTG-Bert
☆34 · Updated last year
Alternatives and similar repositories for ltg-bert
Users interested in ltg-bert are also comparing it to the repositories listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆85 · Updated last year
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆28 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP. ☆58 · Updated 3 years ago
- Simple-to-use scoring function for arbitrarily tokenized texts. ☆47 · Updated 10 months ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models". ☆17 · Updated 3 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers. ☆60 · Updated last year
- ☆52 · Updated 2 years ago
- Code for the SaGe subword tokenizer (EACL 2023). ☆27 · Updated last year
- Official implementation of "GPT or BERT: why not both?" ☆63 · Updated 4 months ago
- Embedding Recycling for Language Models. ☆38 · Updated 2 years ago
- Implementation of the GBST block from the Charformer paper, in PyTorch. ☆119 · Updated 4 years ago
- T-Projection is a method to perform high-quality annotation projection for sequence labeling datasets. ☆13 · Updated 2 years ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language. ☆74 · Updated last year
- [EMNLP'23] Official code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models". ☆34 · Updated 6 months ago
- A tiny BERT for low-resource monolingual models. ☆31 · Updated 2 months ago
- ☆101 · Updated 3 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention. ☆112 · Updated last month
- ☆46 · Updated 3 years ago
- "As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages" (ACL Findings 2021). ☆48 · Updated 4 years ago
- ☆17 · Updated 2 years ago
- Code for the paper "Getting the most out of your tokenizer for pre-training and domain adaptation". ☆21 · Updated last year
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- ☆13 · Updated last year
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆67 · Updated 2 months ago
- 🔍 Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment. ☆11 · Updated 8 months ago
- SeqScore: Scoring for named entity recognition and other sequence labeling tasks. ☆23 · Updated last week
- Evaluation pipeline for the BabyLM Challenge 2023. ☆77 · Updated 2 years ago
- This repo contains a set of neural transducers, e.g. sequence-to-sequence models, focusing on character-level tasks. ☆76 · Updated 2 years ago
- Code for the ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation". ☆30 · Updated 3 years ago