jeongukjae / smaller-labse
Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
☆19 · Updated 4 years ago
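For context, the "Load What You Need" approach shrinks a multilingual encoder by keeping only the vocabulary entries (and the matching embedding rows) that the target languages actually use. Below is a minimal sketch of that idea against an HF-hosted LaBSE checkpoint; the model id, sample corpus, and variable names are illustrative assumptions, not this repository's exact code.

```python
# Minimal sketch of the "Load What You Need" idea: shrink a multilingual
# encoder by keeping only the token embeddings the target languages use.
# The model id and sample corpus below are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sentence-transformers/LaBSE"  # assumed HF-hosted LaBSE checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# 1. Collect the token ids a target-language corpus actually uses,
#    always keeping the special tokens ([CLS], [SEP], [PAD], ...).
corpus = ["An example sentence.", "다른 언어의 예시 문장."]  # hypothetical corpus
kept = set(tokenizer.all_special_ids)
for text in corpus:
    kept.update(tokenizer(text)["input_ids"])
kept_ids = sorted(kept)

# 2. Slice the input-embedding matrix down to the kept rows.
old_embeddings = model.get_input_embeddings().weight.data
new_embeddings = torch.nn.Embedding(len(kept_ids), old_embeddings.size(1))
new_embeddings.weight.data.copy_(old_embeddings[kept_ids])
model.set_input_embeddings(new_embeddings)
model.config.vocab_size = len(kept_ids)

# A real implementation must also rebuild the tokenizer vocabulary so that
# token ids are remapped to the new, smaller embedding table.
```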
Alternatives and similar repositories for smaller-labse
Users interested in smaller-labse are comparing it to the repositories listed below
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP (☆58, updated 3 years ago)
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0 (☆105, updated 3 years ago)
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX (☆81, updated 3 years ago)
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering (☆17, updated 2 years ago)
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset (☆96, updated 2 years ago)
- ☆21, updated 4 years ago
- exBERT on 🤗 Transformers (☆10, updated 4 years ago)
- Ensembling Hugging Face transformers made easy (☆61, updated 3 years ago)
- Package for controllable summarization (☆79, updated 3 years ago)
- KETOD: Knowledge-Enriched Task-Oriented Dialogue (☆32, updated 3 years ago)
- ☆20, updated 4 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face (☆32, updated 2 years ago)
- Calculating the expected time for training an LLM (☆38, updated 2 years ago)
- Observe the slow deterioration of my mental sanity in the GitHub commit history (☆12, updated 2 years ago)
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… (☆35, updated 2 years ago)
- Embedding Recycling for Language Models (☆38, updated 2 years ago)
- Open-source library for few-shot NLP (☆78, updated 2 years ago)
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks (☆63, updated 3 years ago)
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale (☆157, updated 2 years ago)
- Resources for the "CTRLsum: Towards Generic Controllable Text Summarization" paper (☆148, updated 8 months ago)
- SPRINT toolkit helps you evaluate diverse neural sparse models easily, with a single click, on any IR dataset (☆47, updated 2 years ago)
- Anh: LAION's multilingual assistant datasets and models (☆27, updated 2 years ago)
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https://…) (☆28, updated last year)
- ☆11, updated 4 years ago
- Experiments for XLM-V Transformers integration (☆13, updated 2 years ago)
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models (☆86, updated last year)
- ☆37, updated 2 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages… (☆49, updated 2 years ago)
- ☆33, updated 2 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" (☆27, updated 4 years ago)