GiovanniTRA / MultimodalNeuralDatabases
This repository contains the code for the perspective paper "Multimodal Neural Databases" accepted at SIGIR 2023.
☆20 · Updated last year
Alternatives and similar repositories for MultimodalNeuralDatabases
Users interested in MultimodalNeuralDatabases are comparing it to the libraries listed below.
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 3 years ago
- Efficient Transformers with Dynamic Token Pooling ☆67 · Updated 2 years ago
- ☆77 · Updated last year
- ☆23 · Updated last month
- ☆13 · Updated 3 years ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages ☆49 · Updated 2 years ago
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE benchmark ☆116 · Updated last year
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- Evaluation pipeline for the BabyLM Challenge 2023 ☆77 · Updated 2 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings ☆75 · Updated last year
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- SILO Language Models code repository ☆83 · Updated last year
- ☆67 · Updated last year
- ☆45 · Updated 2 years ago
- This is the official PyTorch repo for "UNIREX: A Unified Learning Framework for Language Model Rationale Extraction" (ICML 2022) ☆26 · Updated 2 years ago
- A Toolkit for Distributional Control of Generative Models ☆74 · Updated 2 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 2 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆60 · Updated last year
- ☆53 · Updated last year
- Benchmark API for Multidomain Language Modeling ☆25 · Updated 3 years ago
- PyTorch implementation of “Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement” ☆62 · Updated 4 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx…) ☆138 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- ☆11 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset ☆96 · Updated 2 years ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Google Research ☆46 · Updated 3 years ago