nreimers / se-pytorch-xla
☆21 · Updated 4 years ago
Alternatives and similar repositories for se-pytorch-xla
Users interested in se-pytorch-xla are comparing it to the libraries listed below.
- ☆54 · Updated 2 years ago
- Shared code for training sentence embeddings with Flax / JAX ☆28 · Updated 4 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- ☆101 · Updated 2 years ago
- SPRINT Toolkit helps you evaluate diverse neural sparse models easily using a single click on any IR dataset. ☆47 · Updated 2 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆27 · Updated 4 years ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ☆17 · Updated 3 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 3 years ago
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. ☆24 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training ☆32 · Updated 3 years ago
- ☆46 · Updated 3 years ago
- Ensembling Hugging Face transformers made easy ☆63 · Updated 2 years ago
- We are creating a challenging new benchmark MultiReQA: A Cross-Domain Evaluation for Retrieval Question Answering Models. Retrieval quest… ☆31 · Updated 5 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆21 · Updated 3 months ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- One-stop shop for running and fine-tuning transformer-based language models for retrieval ☆59 · Updated this week
- Dense hybrid representations for text retrieval ☆63 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- Code and pre-trained models for "ReasonBert: Pre-trained to Reason with Distant Supervision", EMNLP 2021 ☆29 · Updated 2 years ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆17 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen: Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- ☆75 · Updated 4 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆76 · Updated 4 years ago
- This repository contains the code for the paper "PARM: Paragraph Aggregation Retrieval Model for Dense Document-to-Document Retrieval" pu… ☆41 · Updated 3 years ago
- RATransformers 🐭 - Make your transformer (like BERT, RoBERTa, GPT-2, and T5) Relation Aware! ☆41 · Updated 2 years ago
- ☆68 · Updated 5 months ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- ☆38 · Updated 2 years ago