puppetm4st3r / baai_m3_simple_server
This code sets up a simple yet robust FastAPI server that handles asynchronous requests for embedding generation and reranking using the BAAI M3 multilingual model.
☆72 · Updated last year
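The repository's own code is not reproduced on this page. As a rough illustration of the kind of service described above, the sketch below wires the BGE-M3 model (via FlagEmbedding's `BGEM3FlagModel`) into a FastAPI app; the `/embed` and `/rerank` endpoint names, the request schemas, and the dense dot-product reranking are assumptions for illustration, not the repository's actual API.

```python
# Minimal sketch (not the repository's actual code) of a FastAPI service exposing
# embedding and reranking endpoints backed by BAAI/bge-m3 via FlagEmbedding.
import asyncio

import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from FlagEmbedding import BGEM3FlagModel

app = FastAPI()
# Load the multilingual BGE-M3 model once at startup.
model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)


class EmbedRequest(BaseModel):
    texts: list[str]


class RerankRequest(BaseModel):
    query: str
    documents: list[str]


def _dense_vectors(texts: list[str]) -> np.ndarray:
    # BGE-M3 can return dense, sparse, and ColBERT vectors; only dense is used here.
    out = model.encode(
        texts, return_dense=True, return_sparse=False, return_colbert_vecs=False
    )
    return np.asarray(out["dense_vecs"])


@app.post("/embed")
async def embed(req: EmbedRequest):
    # Offload the blocking model call to a thread so the event loop stays responsive.
    vecs = await asyncio.to_thread(_dense_vectors, req.texts)
    return {"embeddings": vecs.tolist()}


@app.post("/rerank")
async def rerank(req: RerankRequest):
    # Rank documents by dense similarity with the query (dot product, assuming
    # the dense vectors are normalized; swap in cosine similarity otherwise).
    vecs = await asyncio.to_thread(_dense_vectors, [req.query] + req.documents)
    query_vec, doc_vecs = vecs[0], vecs[1:]
    scores = doc_vecs @ query_vec
    order = np.argsort(-scores)
    return {"results": [{"index": int(i), "score": float(scores[i])} for i in order]}
```

Run with `uvicorn main:app` and POST JSON payloads to `/embed` or `/rerank`; offloading the model calls with `asyncio.to_thread` is one simple way to keep the endpoints asynchronous without blocking other requests.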
Alternatives and similar repositories for baai_m3_simple_server
Users who are interested in baai_m3_simple_server are comparing it to the libraries listed below
- Open Source Text Embedding Models with OpenAI Compatible API ☆167 · Updated last year
- Code for explaining and evaluating late chunking (chunked pooling) ☆485 · Updated last year
- ☆248 · Updated 7 months ago
- TextEmbed is a REST API crafted for high-throughput and low-latency embedding inference. It accommodates a wide variety of embedding mode… ☆27 · Updated last year
- A fast, lightweight and easy-to-use Python library for splitting text into semantically meaningful chunks. ☆544 · Updated 3 months ago
- Fine-Tuning Embedding for RAG with Synthetic Data ☆523 · Updated 2 years ago
- ☆66 · Updated last year
- An enterprise-grade AI retriever designed to streamline AI integration into your applications, ensuring cutting-edge accuracy. ☆292 · Updated 7 months ago
- ☆321 · Updated 2 years ago
- A tool for generating function arguments and choosing what function to call with local LLMs ☆436 · Updated last year
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc on task… ☆184 · Updated last year
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆572 · Updated last week
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆393 · Updated last week
- Ready-to-go containerized RAG service. Implemented with text-embedding-inference + Qdrant/LanceDB. ☆73 · Updated last year
- This repository presents the original implementation of LumberChunker: Long-Form Narrative Document Segmentation by André V. Duarte, João… ☆87 · Updated last year
- Lite & Super-fast re-ranking for your search & retrieval pipelines. Supports SoTA Listwise and Pairwise reranking based on LLMs and cro… ☆933 · Updated 3 weeks ago
- A lightweight version of Milvus ☆419 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- Compress your input to ChatGPT or other LLMs, to let them process 2x more content and save 40% memory and GPU time. ☆409 · Updated last year
- Meta-Chunking: Learning Efficient Text Segmentation via Logical Perception ☆272 · Updated 4 months ago
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs". ☆241 · Updated last year
- Vision Document Retrieval (ViDoRe): Benchmark. Evaluation code for the ColPali paper. ☆258 · Updated last week
- This repo is for handling Question Answering, especially for Multi-hop Question Answering ☆68 · Updated 2 years ago
- ☆415 · Updated last year
- A Structured Output Framework for LLM Outputs ☆375 · Updated 2 months ago
- A library integrating embedding and reranker models from OpenAI, SentenceTransformers etc for semantic search in vector database. ☆59 · Updated 9 months ago
- NexusRaven-13B, a new SOTA Open-Source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… ☆318 · Updated 2 years ago
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA ☆517 · Updated last year
- A Python library to chunk/group your texts based on semantic similarity. ☆103 · Updated last year
- ☆201 · Updated this week