mrpeerat / SCT
SCT: An Efficient Self-Supervised Cross-View Training For Sentence Embedding (TACL)
☆16 · Updated last year
Alternatives and similar repositories for SCT
Users interested in SCT are comparing it to the repositories listed below.
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆87 · Updated last year
- Implementation of ConGen: Unsupervised Control and Generalization Distillation For Sentence Representation (Findings of EMNLP 2022). ☆22 · Updated 2 years ago
- [EMNLP'23] Official Code for "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models" ☆35 · Updated 7 months ago
- GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embeddings ☆44 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- Fast search index for SPLADE sparse retrieval models implemented in Python using Numpy and Numba ☆32 · Updated 2 months ago
- Do Multilingual Language Models Think Better in English? ☆43 · Updated 2 years ago
- ☆52 · Updated 2 years ago
- 🔍 Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment ☆11 · Updated 9 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆60 · Updated last year
- ☆10 · Updated 6 years ago
- German Alpaca Dataset (Cleaned + Translated) ☆26 · Updated 2 years ago
- Fine-tune ModernBERT on a large Dataset with Custom Tokenizer Training ☆74 · Updated 2 months ago
- Mr. TyDi is a multi-lingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. ☆79 · Updated 3 years ago
- GLADIS: A General and Large Acronym Disambiguation Benchmark (EACL 23) ☆18 · Updated last year
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆121 · Updated 2 years ago
- Ensembling Hugging Face transformers made easy ☆61 · Updated 3 years ago
- Pytorch Implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- ☆21 · Updated 3 years ago
- ☆16 · Updated 3 years ago
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆64 · Updated last year
- BLOOM+1: Adapting the BLOOM model to support a new unseen language ☆74 · Updated last year
- triple-encoders is a library for contextualizing distributed Sentence Transformers representations. ☆15 · Updated last year
- Bi-encoder entity linking architecture ☆51 · Updated last year
- LTG-Bert ☆34 · Updated 2 years ago
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- ACL22 paper: Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost ☆42 · Updated 2 years ago
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆18 · Updated 2 years ago
- Hugging Face RoBERTa with Flash Attention 2 ☆24 · Updated 3 months ago