Ankur3107 / dpr-tf
Dense Passage Retrieval using tensorflow-keras on TPU
☆16 · Updated 4 years ago
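The headline repository implements Dense Passage Retrieval's dual-encoder retriever in tensorflow-keras on TPU. As a rough illustration of the core idea only, here is a minimal NumPy sketch of DPR-style in-batch-negatives scoring; the function name and toy data are illustrative and not taken from the repo:

```python
import numpy as np

def dpr_in_batch_loss(q_emb, p_emb):
    """DPR-style in-batch-negatives loss: question i's positive passage
    is row i of p_emb; every other passage in the batch is a negative."""
    scores = q_emb @ p_emb.T                             # (B, B) dot-product similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # stabilize the softmax
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()                    # NLL of the diagonal positives

# Toy batch: 3 question/passage pairs with 4-dim embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(3, 4))
loss = dpr_in_batch_loss(q, q)  # identical embeddings, so positives score highest
```

In the repo itself this objective would sit on top of BERT-based question and passage encoders trained on TPU; the sketch only shows the shape of the similarity matrix and loss.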
Alternatives and similar repositories for dpr-tf
Users interested in dpr-tf are comparing it to the repositories listed below.
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago
- Shared code for training sentence embeddings with Flax / JAX. ☆27 · Updated 4 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP. ☆58 · Updated 3 years ago
- Helper scripts and notes that were used while porting various NLP models. ☆47 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention. ☆69 · Updated 4 years ago
- A Benchmark Dataset for Understanding Disfluencies in Question Answering. ☆63 · Updated 4 years ago
- Agents that build knowledge graphs and explore textual worlds by asking questions. ☆79 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ☆81 · Updated 3 years ago
- Viewer for the 🤗 datasets library. ☆85 · Updated 4 years ago
- A deep learning library based on PyTorch, focused on low-resource language research and robustness. ☆70 · Updated 3 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a…). ☆46 · Updated 3 years ago
- A diff tool for language models. ☆44 · Updated last year
- KitanaQA: Adversarial training and data augmentation for neural question-answering models. ☆56 · Updated 2 years ago
- RATransformers 🐭: Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware! ☆41 · Updated 2 years ago
- The repository for the paper "When Do You Need Billions of Words of Pretraining Data?" ☆21 · Updated 4 years ago
- Implementation of the paper 'Sentence Bottleneck Autoencoders from Transformer Language Models'. ☆17 · Updated 3 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny and efficient models for AI at scale. ☆156 · Updated last year
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch. ☆76 · Updated 2 years ago
- Ranking of fine-tuned HF models as base models. ☆36 · Updated last week
- Open-source library for few-shot NLP. ☆79 · Updated 2 years ago
- ☆75 · Updated 4 years ago
- ☆13 · Updated 6 years ago
- Embedding Recycling for Language Models. ☆39 · Updated 2 years ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0. ☆105 · Updated 3 years ago
- Factorization of the neural parameter space for zero-shot multi-lingual and multi-task transfer. ☆39 · Updated 5 years ago
- Implementation of the GBST block from the Charformer paper, in PyTorch. ☆118 · Updated 4 years ago
- ☆30 · Updated 3 years ago
- ☆19 · Updated 4 years ago