jeongukjae / smaller-labse
Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
⭐ 19 · Updated 3 years ago
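For context, the repository applies the "Load What You Need" idea to LaBSE: keep only the vocabulary entries needed for the target languages and shrink the embedding matrix accordingly. The snippet below is a rough sketch of that vocabulary-reduction step, assuming a LaBSE checkpoint on the Hugging Face Hub (`sentence-transformers/LaBSE`) and a placeholder corpus; it is illustrative only and not taken from the repo's own scripts.

```python
# Sketch of "Load What You Need"-style vocabulary reduction for LaBSE.
# Checkpoint name and corpus are placeholder assumptions, not from this repo.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sentence-transformers/LaBSE"  # assumption: any LaBSE checkpoint on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# 1) Collect the token ids actually used by a corpus of the target languages,
#    plus the special tokens the model always needs ([PAD], [CLS], [SEP], ...).
corpus = ["An example sentence.", "다른 예시 문장."]  # placeholder corpus
used_ids = {tid for text in corpus for tid in tokenizer(text)["input_ids"]}
keep_ids = sorted(used_ids | set(tokenizer.all_special_ids))

# 2) Slice the word-embedding matrix down to the kept ids.
old_emb = model.get_input_embeddings().weight.data          # shape: (vocab_size, hidden)
new_emb = torch.nn.Embedding(len(keep_ids), old_emb.size(1))
new_emb.weight.data.copy_(old_emb[keep_ids])
model.set_input_embeddings(new_emb)
model.config.vocab_size = len(keep_ids)

# 3) A matching tokenizer with remapped ids (old -> new vocab file) would also
#    have to be rebuilt and saved; that step is omitted here.
print(f"Vocabulary reduced from {old_emb.size(0)} to {len(keep_ids)} tokens")
```

In practice the kept token set would be built from large monolingual corpora for the retained languages rather than a handful of sentences, and the reduced tokenizer vocabulary would be written out alongside the resized checkpoint.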
Alternatives and similar repositories for smaller-labse
Users interested in smaller-labse are comparing it to the libraries listed below
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPU v3-8 with GCP ⭐ 58 · Updated 3 years ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ⭐ 16 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX ⭐ 81 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ⭐ 95 · Updated 2 years ago
- A tool for automatic generation of grammatically valid synthetic code-mixed data, utilizing linguistic theories such as Equivalenc… ⭐ 56 · Updated last year
- Anh: LAION's multilingual assistant datasets and models ⭐ 27 · Updated 2 years ago
- Calculating the expected training time for an LLM ⭐ 38 · Updated 2 years ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0 ⭐ 105 · Updated 3 years ago
- Megatron-LM 11B on Hugging Face Transformers ⭐ 27 · Updated 4 years ago
- ⭐ 21 · Updated 4 years ago
- A PyTorch implementation of Luna: Linear Unified Nested Attention ⭐ 41 · Updated 4 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ⭐ 32 · Updated 2 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny and efficient models for AI at scale ⭐ 156 · Updated last year
- Ensembling Hugging Face Transformers made easy ⭐ 63 · Updated 2 years ago
- Plug-and-play search interfaces with Pyserini and Hugging Face ⭐ 32 · Updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ⭐ 34 · Updated 2 years ago
- Package for controllable summarization ⭐ 78 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models ⭐ 84 · Updated last year
- Convenient Text-to-Text Training for Transformers ⭐ 19 · Updated 3 years ago
- Open-source library for few-shot NLP ⭐ 79 · Updated 2 years ago
- Experiments with generating open-source language model assistants ⭐ 97 · Updated 2 years ago
- Helper scripts and notes that were used while porting various NLP models ⭐ 47 · Updated 3 years ago
- Developing tools to automatically analyze datasets ⭐ 75 · Updated 10 months ago
- PyTorch implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ⭐ 63 · Updated 3 years ago
- exBERT on 🤗 Transformers ⭐ 10 · Updated 4 years ago
- Implementation of a stop sequencer for Hugging Face Transformers ⭐ 16 · Updated 2 years ago
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ⭐ 18 · Updated last year
- ⭐ 16 · Updated 2 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ⭐ 27 · Updated 3 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ⭐ 48 · Updated 4 years ago