jeongukjae / smaller-labse
Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
☆18 · Updated 3 years ago
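The repository applies the vocabulary-trimming idea from the paper to LaBSE: keep only the wordpieces needed for the target languages and shrink the token-embedding matrix accordingly. Below is a minimal sketch of that idea, assuming the `sentence-transformers/LaBSE` checkpoint on the Hugging Face Hub; the toy corpus and the frequency-free token selection are illustrative, not the repo's actual code.

```python
# Minimal sketch of the "Load What You Need" idea applied to LaBSE:
# keep only the wordpieces that a target-language corpus actually uses
# and slice the token-embedding matrix down to those rows.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sentence-transformers/LaBSE"  # assumed HF Hub checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Collect the token ids seen in the target-language corpus,
# always keeping the special tokens so the model still works.
corpus = ["예시 문장입니다.", "An example sentence."]  # toy corpus
used_ids = set(tokenizer.all_special_ids)
for text in corpus:
    used_ids.update(tokenizer(text)["input_ids"])
kept = sorted(used_ids)

# Replace the embedding layer with one holding only the kept rows.
old_emb = model.get_input_embeddings().weight.data
new_emb = torch.nn.Embedding(len(kept), old_emb.size(1))
new_emb.weight.data.copy_(old_emb[kept])
model.set_input_embeddings(new_emb)
# A matching reduced tokenizer (old id -> new id) must be rebuilt too.
```

Because the embedding matrix holds most of a massively multilingual model's parameters, trimming the vocabulary is where nearly all of the size reduction comes from.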
Alternatives and similar repositories for smaller-labse:
Users interested in smaller-labse are comparing it to the libraries listed below.
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering · ☆16 · updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP · ☆58 · updated 2 years ago
- Megatron LM 11B on Huggingface Transformers · ☆27 · updated 3 years ago
- exBERT on Transformers 🤗 · ☆10 · updated 3 years ago
- Implementation of a stop sequencer for Huggingface Transformers · ☆16 · updated last year
- Anh: LAION's multilingual assistant datasets and models · ☆27 · updated last year
- Helper scripts and notes that were used while porting various NLP models · ☆45 · updated 3 years ago
- Difference-based Contrastive Learning for Korean Sentence Embeddings · ☆24 · updated last year
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0 · ☆102 · updated 2 years ago
- Calculating the expected time for training an LLM · ☆38 · updated last year
- (no description) · ☆21 · updated 3 years ago
- Hate speech detection corpus in Korean, shared with an EMNLP 2023 paper · ☆14 · updated 11 months ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" · ☆26 · updated 3 years ago
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining · ☆16 · updated last year
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ · ☆37 · updated 4 years ago
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks (see the sketch after this list) · ☆63 · updated 3 years ago
- TorchServe + Streamlit for easily serving your HuggingFace NER models · ☆32 · updated 2 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" · ☆52 · updated last year
- Zero-vocab or low-vocab embeddings · ☆18 · updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https://…) · ☆26 · updated 11 months ago
- Reference PyTorch code for intent classification · ☆44 · updated 5 months ago
- Convenient Text-to-Text Training for Transformers · ☆19 · updated 3 years ago
- Abstractive summarization using the Bert2Bert framework · ☆31 · updated 4 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset · ☆93 · updated 2 years ago
- (no description) · ☆34 · updated 4 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue · ☆32 · updated 2 years ago
- Ensembling Hugging Face transformers made easy · ☆62 · updated 2 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval · ☆28 · updated 2 years ago
- Python project template for personal projects! · ☆10 · updated 4 years ago
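For the EncT5 entry above, here is a minimal sketch of the general idea: run only the T5 encoder and train a small classification head on top for a non-autoregressive task. The `t5-small` checkpoint, the binary head, and the mean pooling are assumptions for illustration; the paper's own pooling scheme differs.

```python
# Sketch of encoder-only fine-tuning in the spirit of EncT5:
# discard the T5 decoder and classify from pooled encoder states.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # assumed checkpoint
encoder = T5EncoderModel.from_pretrained("t5-small")   # encoder only
head = torch.nn.Linear(encoder.config.d_model, 2)      # toy binary head

inputs = tokenizer("great movie!", return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state           # (batch, seq, d_model)
pooled = hidden.mean(dim=1)                            # simple mean pooling
logits = head(pooled)                                  # (batch, 2)
```

In a real fine-tuning loop, `head` (and optionally the encoder) would be trained with a standard cross-entropy loss on the task labels.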