jeongukjae / smaller-labse
Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE
☆18 · Updated 3 years ago
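For context, the "Load What You Need" approach shrinks a multilingual model by keeping only the vocabulary entries that the target languages actually use and slicing the embedding matrix to match. The sketch below illustrates that idea in Python; the checkpoint name, the toy corpus, and the slicing details are illustrative assumptions, not the exact pipeline used in this repository.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical LaBSE-style checkpoint, used only for illustration.
model_name = "setu4993/LaBSE"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Collect the token ids that appear when tokenizing a (toy) target-language
# corpus; a real run would stream a large corpus per language you keep.
corpus = ["예시 문장입니다.", "An example sentence."]
kept_ids = sorted({tid for text in corpus for tid in tokenizer(text)["input_ids"]})

# Slice the word-embedding matrix down to the kept vocabulary.
old_emb = model.get_input_embeddings().weight.data
new_emb = torch.nn.Embedding(len(kept_ids), old_emb.size(1))
new_emb.weight.data = old_emb[kept_ids].clone()
model.set_input_embeddings(new_emb)
model.config.vocab_size = len(kept_ids)

# A complete pipeline also rebuilds the tokenizer's vocab file so that token
# ids line up with the new embedding rows; that step is omitted here.
```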
Alternatives and similar repositories for smaller-labse
Users that are interested in smaller-labse are comparing it to the libraries listed below
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆16 · Updated 2 years ago
- exBERT on 🤗 Transformers ☆10 · Updated 4 years ago
- Megatron LM 11B on Huggingface Transformers ☆27 · Updated 3 years ago
- Source code for the GPT-2 story generation models in the EMNLP 2020 paper "STORIUM: A Dataset and Evaluation Platform for Human-in-the-Lo… ☆39 · Updated last year
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. ☆103 · Updated 3 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆74 · Updated 2 years ago
- TorchServe+Streamlit for easily serving your HuggingFace NER models ☆33 · Updated 2 years ago
- T5 Machine Translation from English to Korean ☆18 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. ☆82 · Updated 3 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Calculating the expected time for training an LLM. ☆38 · Updated 2 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ☆38 · Updated 4 years ago
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- Difference-based Contrastive Learning for Korean Sentence Embeddings ☆24 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- Implementation of a stop sequencer for Huggingface Transformers ☆16 · Updated 2 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 3 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated 2 years ago
- A flexible sentence segmentation library using a CRF model and regex rules ☆29 · Updated last year
- ☆21 · Updated 3 years ago
- Reference PyTorch code for intent classification ☆44 · Updated 8 months ago
- ☆20 · Updated 2 years ago
- NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-based Simulation (ACL-IJCNLP 2021) ☆36 · Updated 3 years ago
- baikal.ai's pre-trained BERT models: descriptions and sample code ☆12 · Updated 4 years ago
- A PyTorch implementation of Luna: Linear Unified Nested Attention ☆41 · Updated 3 years ago
- ☆37 · Updated 2 years ago
- Pre-training BART in Flax on The Pile dataset ☆21 · Updated 3 years ago