nreimers / se-pytorch-xla
☆21 · Updated 4 years ago
Alternatives and similar repositories for se-pytorch-xla
Users interested in se-pytorch-xla are comparing it to the repositories listed below.
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 4 years ago
- ☆54 · Updated 2 years ago
- Shared code for training sentence embeddings with Flax / JAX ☆28 · Updated 4 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 3 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ☆46 · Updated 3 years ago
- ☆101 · Updated 3 years ago
- Implementation of the paper 'Sentence Bottleneck Autoencoders from Transformer Language Models' ☆17 · Updated 3 years ago
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 5 months ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆76 · Updated 4 years ago
- ☆68 · Updated 7 months ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0 ☆105 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆17 · Updated 2 years ago
- Efficient-Sentence-Embedding-using-Discrete-Cosine-Transform ☆17 · Updated 5 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny and efficient models for AI at scale ☆157 · Updated 2 years ago
- CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training ☆32 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- ☆75 · Updated 4 years ago
- Official codebase accompanying our ACL 2022 paper "RELiC: Retrieving Evidence for Literary Claims" (https://relic.cs.umass.edu) ☆20 · Updated 3 years ago
- ☆59 · Updated 4 years ago
- Ranking of fine-tuned HF models as base models ☆36 · Updated 3 months ago
- Ensembling Hugging Face transformers made easy ☆61 · Updated 3 years ago
- SPRINT Toolkit helps you evaluate diverse neural sparse models easily, using a single click, on any IR dataset ☆47 · Updated 2 years ago
- 🦮 Code and pretrained models for the Findings of ACL 2022 paper "LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrie… ☆49 · Updated 3 years ago