mrpeerat / SCT · Links
SCT: An Efficient Self-Supervised Cross-View Training For Sentence Embedding (TACL)
☆16 · Updated 11 months ago
Alternatives and similar repositories for SCT
Users interested in SCT are comparing it to the libraries listed below.
- Implementation of ConGen: Unsupervised Control and Generalization Distillation For Sentence Representation (Findings of EMNLP 2022). ☆22 · Updated last year
- A framework for wisely initializing unseen subword embeddings in PLMs for efficient large-scale continued pretraining. ☆16 · Updated last year
- Zero-vocab or low-vocab embeddings. ☆18 · Updated 2 years ago
- [EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models". ☆32 · Updated 3 weeks ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆82 · Updated 9 months ago
- LTG-Bert. ☆33 · Updated last year
- GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embeddings. ☆43 · Updated last year
- ☆12 · Updated 3 weeks ago
- Hugging Face RoBERTa with Flash Attention 2. ☆23 · Updated last year
- Multilingual entity linking with the BELA model. ☆12 · Updated last year
- Fast whitespace correction with Transformers. ☆16 · Updated last month
- Label shift estimation for transfer difficulty with Familiarity. ☆10 · Updated 4 months ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning. ☆30 · Updated 2 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages. ☆48 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning. ☆64 · Updated 10 months ago
- Ensembling Hugging Face transformers made easy. ☆63 · Updated 2 years ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆58 · Updated last month
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Zero-shot NER fine-tuning. ☆13 · Updated 3 months ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning. ☆14 · Updated 3 years ago
- BLOOM+1: Adapting the BLOOM model to support a new unseen language. ☆72 · Updated last year
- ☆13 · Updated last month
- Using short models to classify long texts. ☆21 · Updated 2 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval. ☆29 · Updated 2 years ago
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023). ☆101 · Updated last year
- Vocabulary Trimming (VT) is a model compression technique which reduces a multilingual LM vocabulary to a target language by deleting irrelevant tokens (a minimal sketch of the idea follows this list). ☆40 · Updated 8 months ago
- A tiny BERT for low-resource monolingual models. ☆31 · Updated 8 months ago
- Auxiliary tasks for task-oriented dialogue systems. Published in ICNLSP'22 and indexed in the ACL Anthology. ☆17 · Updated 2 years ago
- Pre-train Static Word Embeddings. ☆79 · Updated 3 weeks ago
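
For context on the vocabulary-trimming entry above, here is a minimal sketch of the core idea, not the listed repository's implementation: tokenize a target-language corpus, keep only the sub-word ids it actually uses (plus special tokens), and slice the multilingual model's embedding matrix down to those rows. The model name, the placeholder corpus, and the `id_map` variable are assumptions for illustration only.

```python
# Minimal vocabulary-trimming sketch (illustrative only, not the listed repo's code).
# Assumes a Hugging Face multilingual encoder; `target_corpus` is a placeholder.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"               # assumed multilingual LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

target_corpus = [
    "Placeholder sentences in the target language.",
    "Add enough text to cover the vocabulary you need.",
]

# 1) Collect the sub-word ids the target corpus (plus special tokens) actually uses.
keep_ids = set(tokenizer.all_special_ids)
for text in target_corpus:
    keep_ids.update(tokenizer(text, add_special_tokens=False)["input_ids"])
keep_ids = sorted(keep_ids)

# 2) Slice the input-embedding matrix to the kept rows and remap old ids to new ids.
old_emb = model.get_input_embeddings().weight.data            # shape (V, d)
new_emb = torch.nn.Embedding(len(keep_ids), old_emb.size(1))
new_emb.weight.data = old_emb[keep_ids].clone()
model.set_input_embeddings(new_emb)
id_map = {old: new for new, old in enumerate(keep_ids)}       # old id -> new id

# 3) Until the tokenizer is rebuilt, inputs must be remapped through `id_map`.
ids = tokenizer("Placeholder sentences in the target language.")["input_ids"]
remapped = torch.tensor([[id_map[i] for i in ids]])
outputs = model(input_ids=remapped)
print(f"Vocabulary: {old_emb.size(0)} -> {len(keep_ids)} rows;",
      f"hidden states: {tuple(outputs.last_hidden_state.shape)}")
```

A complete tool would also rebuild the tokenizer and shrink any tied output head so that no id remapping is needed at inference time.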