tigerchen52 / PEARL
Learning High-Quality and General-Purpose Phrase Representations. Findings of EACL 2024
☆15 · Updated last year
Alternatives and similar repositories for PEARL
Users interested in PEARL are comparing it to the libraries listed below.
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated last year
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- This is a new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆44 · Updated last year
- ☆43 · Updated last month
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆75 · Updated last year
- ☆20 · Updated 7 months ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 3 months ago
- Evaluation framework for document processing models and services. ☆46 · Updated this week
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated 2 years ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- ☆50 · Updated last year
- Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise Passage Re-Ranking with Cross-Encoders ☆18 · Updated 4 months ago
- Code for KaLM-Embedding models ☆93 · Updated 3 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆64 · Updated last year
- Efficient few-shot learning with cross-encoders. ☆59 · Updated last year
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆66 · Updated 3 weeks ago
- List of papers on Self-Correction of LLMs. ☆78 · Updated 9 months ago
- Showcases how mxbai-embed-large-v1 can be used to produce binary embeddings. Binary embeddings enable 32x storage savings and 40x faster r… ☆19 · Updated last year
- PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated this week
- ☆14 · Updated last year
- A Data Source for Reasoning Embodied Agents ☆19 · Updated 2 years ago
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆35 · Updated 2 years ago
- Zero-Shot Learning in Named Entity Recognition with Common Sense Knowledge ☆17 · Updated 3 years ago
- ☆39 · Updated last year
- Python library to use Pleias-RAG models ☆63 · Updated 5 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 8 months ago
- ☆49 · Updated 8 months ago
- ☆20 · Updated 6 months ago