RulinShao / massive-serve
Python package for serving a local search engine. One command to download and serve a datastore, and that's it.
⭐ 13 · Updated this week
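For orientation, here is a minimal client sketch of what querying a locally served datastore could look like. The port, endpoint path, and request fields below are illustrative assumptions, not massive-serve's documented API.

```python
# Minimal client sketch (assumptions: the served datastore listens on
# localhost:30888 and exposes a JSON /search endpoint; the port, path,
# and field names are hypothetical, not taken from massive-serve docs).
import requests


def search(query: str, n_docs: int = 3, url: str = "http://localhost:30888/search"):
    """Send a retrieval query to a locally served datastore and return the JSON response."""
    payload = {"query": query, "n_docs": n_docs}  # hypothetical request schema
    resp = requests.post(url, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    results = search("What is retrieval-augmented generation?")
    print(results)
```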
Alternatives and similar repositories for massive-serve
Users who are interested in massive-serve are comparing it to the libraries listed below.
- ⭐ 38 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages · ⭐ 48 · Updated last year
- ⭐ 16 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training · ⭐ 21 · Updated 8 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions · ⭐ 44 · Updated 10 months ago
- ⭐ 28 · Updated last year
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval · ⭐ 45 · Updated 6 months ago
- ⭐ 72 · Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" · ⭐ 76 · Updated 2 years ago
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" · ⭐ 35 · Updated 8 months ago
- Code for paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" · ⭐ 28 · Updated 3 years ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models… · ⭐ 32 · Updated 11 months ago
- ⭐ 54 · Updated 2 years ago
- Code and data for paper "Context-faithful Prompting for Large Language Models" · ⭐ 39 · Updated 2 years ago
- Official repository for our EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" (https… · ⭐ 43 · Updated 9 months ago
- ⭐ 49 · Updated last year
- Findings of ACL'2023: Optimizing Test-Time Query Representations for Dense Retrieval · ⭐ 30 · Updated last year
- Pile Deduplication Code · ⭐ 19 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings · ⭐ 19 · Updated 3 months ago
- ⭐ 39 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning · ⭐ 98 · Updated 2 years ago
- ⭐ 49 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax · ⭐ 72 · Updated 8 months ago
- ⭐ 85 · Updated 2 years ago
- ⭐ 34 · Updated last year
- Code for paper 'Data-Efficient FineTuning' · ⭐ 29 · Updated last year
- ⭐ 21 · Updated 2 years ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages · ⭐ 45 · Updated 5 months ago
- ⭐ 28 · Updated 10 months ago
- ⭐ 44 · Updated 8 months ago