mrpeerat / SCT
SCT: An Efficient Self-Supervised Cross-View Training For Sentence Embedding (TACL)
☆16 · Updated 10 months ago
Alternatives and similar repositories for SCT
Users interested in SCT are comparing it to the libraries listed below.
- Implementation of ConGen: Unsupervised Control and Generalization Distillation For Sentence Representation (Findings of EMNLP 2022). ☆22 · Updated last year
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆81 · Updated 8 months ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language ☆72 · Updated last year
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning ☆30 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- A tiny BERT for low-resource monolingual models ☆31 · Updated 8 months ago
- Multilingual entity linking with the BELA model ☆12 · Updated last year
- Mr. TyDi is a multilingual benchmark dataset built on TyDi, covering eleven typologically diverse languages. ☆76 · Updated 3 years ago
- Zero-shot NER fine-tuning ☆13 · Updated 2 months ago
- Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages (ACL 2023) ☆101 · Updated last year
- Semantically Structured Sentence Embeddings ☆66 · Updated 7 months ago
- SPRINT Toolkit helps you easily evaluate diverse neural sparse models on any IR dataset with a single click. ☆45 · Updated last year
- Code & data for Comparative Opinion Summarization via Collaborative Decoding (Iso et al.; Findings of ACL 2022) ☆21 · Updated 3 months ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ☆17 · Updated 3 years ago
- ☆10 · Updated 2 years ago
- M2D2: A Massively Multi-domain Language Modeling Dataset (EMNLP 2022) by Machel Reid, Victor Zhong, Suchin Gururangan, and Luke Zettlemoyer ☆55 · Updated 2 years ago
- GLADIS: A General and Large Acronym Disambiguation Benchmark (EACL 2023) ☆16 · Updated 11 months ago
- A library of translation-based text similarity measures ☆25 · Updated last year
- A collection of datasets for language model pretraining, including scripts for downloading, preprocessing, and sampling. ☆59 · Updated 10 months ago
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆16 · Updated last year
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- A Wikipedia-based summarization dataset ☆13 · Updated 2 years ago
- Auxiliary tasks for task-oriented dialogue systems. Published in ICNLSP 2022 and indexed in the ACL Anthology. ☆17 · Updated 2 years ago
- Implementation of the "SMaLL-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages" paper, accepted to E… ☆21 · Updated 2 years ago
- Tools for evaluating the performance of MT metrics on data from recent WMT metrics shared tasks. ☆108 · Updated 2 months ago
- triple-encoders is a library for contextualizing distributed Sentence Transformers representations. ☆14 · Updated 9 months ago
- Zero-vocab or low-vocab embeddings ☆18 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embeddings ☆43 · Updated last year