jeongukjae / smaller-labse
Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
☆18 · Updated 3 years ago
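For context, the "Load What You Need" approach keeps only the vocabulary entries that a target-language corpus actually uses and slices the word-embedding matrix down to those rows. Below is a minimal sketch of that idea, assuming the `sentence-transformers/LaBSE` checkpoint on the Hugging Face Hub and a toy two-sentence corpus standing in for a real one; it is an illustration of the technique, not this repo's exact pipeline.

```python
# Sketch of vocabulary trimming for LaBSE: keep only the token ids seen in a
# target-language corpus, then shrink the input embedding matrix to match.
import torch
from transformers import AutoModel, AutoTokenizer

name = "sentence-transformers/LaBSE"  # assumed Hub id for the LaBSE export
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

# Collect vocabulary ids used by the target languages (toy corpus here).
corpus = ["Hello, world.", "안녕하세요, 세계."]
keep = set(tokenizer.all_special_ids)
for text in corpus:
    keep.update(tokenizer(text)["input_ids"])
keep_ids = sorted(keep)

# Replace the input embeddings with a matrix containing only the kept rows.
old = model.get_input_embeddings().weight.data
new = torch.nn.Embedding(len(keep_ids), old.size(1))
new.weight.data = old[keep_ids].clone()
model.set_input_embeddings(new)
model.config.vocab_size = len(keep_ids)

print(f"vocab rows kept: {len(keep_ids)} of {tokenizer.vocab_size}")
```

A complete version would also rebuild the tokenizer so that old token ids map onto the new, smaller vocabulary; that remapping step is omitted from the sketch.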
Alternatives and similar repositories for smaller-labse
Users interested in smaller-labse are comparing it to the libraries listed below.
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆16 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆82 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- Calculating the expected time for training an LLM. ☆38 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- exBERT on Transformers 🤗 ☆10 · Updated 4 years ago
- Package for controllable summarization ☆78 · Updated 2 years ago
- Observe the slow deterioration of my mental sanity in the GitHub commit history ☆12 · Updated 2 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- PyTorch Implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ☆63 · Updated 3 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆82 · Updated 10 months ago
- Megatron-LM 11B on Hugging Face Transformers ☆27 · Updated 4 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated 2 years ago
- Developing tools to automatically analyze datasets ☆74 · Updated 9 months ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- ☆20 · Updated 4 years ago
- ☆30 · Updated 2 years ago
- Convenient Text-to-Text Training for Transformers ☆19 · Updated 3 years ago
- Embedding Recycling for Language Models ☆39 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- Open-source library for few-shot NLP ☆78 · Updated 2 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ☆48 · Updated 4 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/…) ☆27 · Updated last year
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆18 · Updated last year
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language ☆73 · Updated last year