JHU-CLSP / ettin-encoder-vs-decoder
State-of-the-art paired encoder and decoder models (17M-1B params)
☆45 · Updated last month
Alternatives and similar repositories for ettin-encoder-vs-decoder
Users interested in ettin-encoder-vs-decoder are comparing it to the repositories listed below.
- Official Repository for "Hypencoder: Hypernetworks for Information Retrieval" ☆29 · Updated 6 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆63 · Updated 2 weeks ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated last year
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆46 · Updated 2 months ago
- ☆29 · Updated last year
- Embedding Recycling for Language Models ☆39 · Updated 2 years ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆107 · Updated 6 months ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆64 · Updated last year
- ☆53 · Updated 2 months ago
- INCOME: An Easy Repository for Training and Evaluation of Index Compression Methods in Dense Retrieval. Includes BPR and JPQ. ☆24 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆21 · Updated 2 months ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆59 · Updated last year
- ☆13 · Updated 9 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆95 · Updated 2 years ago
- Bi-encoder entity linking architecture ☆50 · Updated last year
- ☆101 · Updated 2 years ago
- Retrieval-Augmented Generation battle! ☆58 · Updated last month
- Code for the SaGe subword tokenizer (EACL 2023) ☆26 · Updated 9 months ago
- ☆10 · Updated 11 months ago
- Code for Zero-Shot Tokenizer Transfer ☆137 · Updated 8 months ago
- ☆68 · Updated last month
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆186 · Updated 2 months ago
- ☆14 · Updated this week
- Few-shot Learning with Auxiliary Data ☆31 · Updated last year
- One-stop shop for running and fine-tuning transformer-based language models for retrieval ☆59 · Updated this week
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆33 · Updated 8 months ago
- The SPRINT toolkit helps you evaluate diverse neural sparse models easily with a single click on any IR dataset. ☆47 · Updated 2 years ago
- Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise Passage Re-Ranking with Cross-Encoders ☆18 · Updated 3 months ago