embeddings-benchmark / mteb
MTEB: Massive Text Embedding Benchmark
☆2,753, updated this week
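For orientation, here is a minimal sketch of how an evaluation is typically run with the mteb package on a sentence-transformers model. The checkpoint, task name and output folder are illustrative only, and newer mteb releases select tasks via mteb.get_tasks rather than plain strings.

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Any model exposing an encode(list[str]) -> array interface can be evaluated;
# a small public sentence-transformers checkpoint is used here as a placeholder.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Pick one or more benchmark tasks by name and run the evaluation;
# per-task scores are written as JSON files under the output folder.
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
```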
Alternatives and similar repositories for mteb
Users who are interested in mteb are comparing it to the libraries listed below.
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… (☆2,154, updated last year)
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… (☆2,833, updated last week)
- A Heterogeneous Benchmark for Information Retrieval. Easy to use: evaluate your models across 15+ diverse IR datasets. (☆1,908, updated 2 months ago)
- Code for 'LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders' (☆1,565, updated 6 months ago)
- The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval (☆1,339, updated 11 months ago)
- Efficient Retrieval Augmentation and Generation Framework (☆1,625, updated 6 months ago)
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings (☆1,992, updated 6 months ago)
- ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23) (☆3,525, updated last week)
- Doing simple retrieval from LLMs at various context lengths to measure accuracy (☆1,956, updated 11 months ago)
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks (☆2,524, updated last week)
- Fast lexical search implementing BM25 in Python using NumPy, Numba and SciPy (☆1,267, updated 2 months ago; see the BM25 scoring sketch after this list)
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models… (☆2,389, updated this week)
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models (☆1,505, updated 2 months ago; see the cross-encoder reranking sketch after this list)
- A blazing-fast inference solution for text embedding models (☆3,857, updated 2 weeks ago; see the embedding-server client sketch after this list)
- Enforce the output format (JSON Schema, regex, etc.) of a language model (☆1,861, updated 5 months ago)
- General technology for enabling AI capabilities with LLMs and MLLMs (☆4,082, updated last month)
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. (☆1,825, updated 7 months ago)
- Bringing BERT into modernity via both architecture changes and scaling (☆1,473, updated last month)
- Automated Evaluation of RAG Systems (☆637, updated 4 months ago)
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends (☆1,793, updated this week)
- Measuring Massive Multitask Language Understanding (ICLR 2021) (☆1,464, updated 2 years ago)
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) (☆2,713, updated 6 months ago)
- Supercharge Your LLM Application Evaluations 🚀 (☆10,164, updated last week)
- Toolkit for creating, sharing and using natural language prompts (☆2,917, updated last year)
- YaRN: Efficient Context Window Extension of Large Language Models (☆1,553, updated last year)
- Stanford NLP Python library for Representation Finetuning (ReFT) (☆1,503, updated 6 months ago)
- Robust recipes to align language models with human and AI preferences (☆5,299, updated last week)
- Retrieval and Retrieval-augmented LLMs (☆10,302, updated 3 weeks ago; see the dense-retrieval sketch after this list)
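The fast lexical search entry above implements Okapi BM25; rather than guess that library's exact API, the following from-scratch sketch shows the scoring such a package computes. The whitespace tokenization and the k1/b defaults are assumptions for illustration.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with classic Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    idf = {t: math.log((N - n + 0.5) / (n + 0.5) + 1) for t, n in df.items()}
    scores = []
    for d in docs:
        tf = Counter(d)
        norm = k1 * (1 - b + b * len(d) / avgdl)  # length normalization factor
        scores.append(sum(
            idf.get(t, 0.0) * tf[t] * (k1 + 1) / (tf[t] + norm)
            for t in query_terms if t in tf
        ))
    return scores

corpus = ["a cat is a feline and likes to purr",
          "a dog is the human's best friend and loves to play"]
docs = [doc.lower().split() for doc in corpus]
print(bm25_scores("does the cat purr".lower().split(), docs))
```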
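The unified reranking API listed above wraps many cross-encoder backends behind one interface; as a hedged illustration of the underlying pattern (not that library's own call signatures), here is query/document rescoring with sentence-transformers' CrossEncoder. The checkpoint name is a common public model, chosen only as an example.

```python
from sentence_transformers import CrossEncoder

# A cross-encoder scores each (query, document) pair jointly, which is slower
# than bi-encoder embeddings but usually more accurate for final-stage reranking.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how do neural rerankers work?"
docs = [
    "Cross-encoders jointly encode the query and document and output a relevance score.",
    "BM25 is a purely lexical ranking function based on term statistics.",
]
scores = model.predict([(query, d) for d in docs])
ranked = sorted(zip(docs, scores), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(round(float(score), 3), doc)
```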
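The text embedding inference server above is queried over HTTP rather than imported as a library; this client sketch assumes a locally running instance on port 8080 exposing the documented /embed route with an "inputs" payload, so treat the host, port and response shape as assumptions.

```python
import requests

# Assumes a text-embeddings-inference server is already running locally and
# serving an embedding model; it returns one embedding vector per input string.
response = requests.post(
    "http://127.0.0.1:8080/embed",
    json={"inputs": ["What is deep learning?", "MTEB evaluates text embeddings."]},
    timeout=30,
)
response.raise_for_status()
embeddings = response.json()
print(len(embeddings), "vectors of dimension", len(embeddings[0]))
```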
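Several embedding projects in this list (including the retrieval-focused toolkit in the last entry) reduce at inference time to encode-then-search; the sketch below shows that pattern with sentence-transformers and a BGE-family checkpoint. The model name, toy corpus and query are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

# Encode a toy corpus and a query, then rank documents by cosine similarity
# (embeddings are L2-normalized so the dot product equals cosine similarity).
model = SentenceTransformer("BAAI/bge-small-en-v1.5")
corpus = [
    "MTEB benchmarks embedding models across classification, clustering and retrieval tasks.",
    "BM25 is a lexical ranking function that does not use neural embeddings.",
]
corpus_emb = model.encode(corpus, normalize_embeddings=True)
query_emb = model.encode(["Which benchmark evaluates text embedding models?"],
                         normalize_embeddings=True)

hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(round(float(hit["score"]), 3), corpus[hit["corpus_id"]])
```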