puppetm4st3r / baai_m3_simple_server
This code sets up a simple yet robust FastAPI server that handles asynchronous requests for embedding generation and reranking with the BAAI M3 multilingual model.
☆72 · Updated last year
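The core idea described above can be sketched as a minimal async request handler. This is a hypothetical illustration, not the repository's actual code: `fake_embed` is a stand-in stub for a real BAAI M3 model call (e.g. via FlagEmbedding's `BGEM3FlagModel`), and the request/response shapes follow the OpenAI embeddings convention commonly used by such servers.

```python
import asyncio
import hashlib

# Stub embedder: stands in for a real BAAI M3 model call.
# A real server would load the model once at startup and batch inputs.
def fake_embed(text: str, dim: int = 8) -> list[float]:
    # Deterministic pseudo-embedding derived from a hash of the text.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dim]]

async def handle_embeddings(payload: dict) -> dict:
    """Handle an OpenAI-style /v1/embeddings request body."""
    inputs = payload["input"]
    if isinstance(inputs, str):
        inputs = [inputs]
    # Offload the (CPU/GPU-bound) model call to threads so the
    # event loop stays free to accept other requests.
    vectors = await asyncio.gather(
        *(asyncio.to_thread(fake_embed, text) for text in inputs)
    )
    return {
        "object": "list",
        "data": [
            {"object": "embedding", "index": i, "embedding": vec}
            for i, vec in enumerate(vectors)
        ],
        "model": payload.get("model", "BAAI/bge-m3"),
    }

response = asyncio.run(handle_embeddings({"input": ["hello", "world"]}))
print(len(response["data"]))  # 2
```

In a FastAPI app this coroutine would back a `POST /v1/embeddings` route; the thread offload is what keeps the server responsive while the model runs.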
Alternatives and similar repositories for baai_m3_simple_server
Users interested in baai_m3_simple_server are comparing it to the repositories listed below.
- Open Source Text Embedding Models with OpenAI Compatible API ☆167 · Updated last year
- Code for explaining and evaluating late chunking (chunked pooling) ☆487 · Updated last year
- ☆249 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- TextEmbed is a REST API crafted for high-throughput and low-latency embedding inference. It accommodates a wide variety of embedding mode… ☆28 · Updated last year
- A tool for generating function arguments and choosing what function to call with local LLMs ☆436 · Updated last year
- An enterprise-grade AI retriever designed to streamline AI integration into your applications, ensuring cutting-edge accuracy. ☆292 · Updated 7 months ago
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆401 · Updated this week
- Fine-Tuning Embedding for RAG with Synthetic Data ☆523 · Updated 2 years ago
- A fast, lightweight and easy-to-use Python library for splitting text into semantically meaningful chunks. ☆560 · Updated 3 months ago
- ☆321 · Updated 2 years ago
- Lite & Super-fast re-ranking for your search & retrieval pipelines. Supports SoTA Listwise and Pairwise reranking based on LLMs and cro… ☆938 · Updated last month
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆184 · Updated last year
- ☆201 · Updated this week
- RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking. ☆575 · Updated this week
- LongCite: Enabling LLMs to Generate Fine-grained Citations in Long-context QA ☆518 · Updated last year
- ☆66 · Updated last year
- This repository presents the original implementation of LumberChunker: Long-Form Narrative Document Segmentation by André V. Duarte, João… ☆90 · Updated last year
- Meta-Chunking: Learning Efficient Text Segmentation via Logical Perception ☆273 · Updated 4 months ago
- Code implementation repository of the paper HiQA ☆104 · Updated 11 months ago
- Finetune ALL LLMs with ALL Adapters on ALL Platforms! ☆332 · Updated 6 months ago
- This repo is for handling Question Answering, especially Multi-hop Question Answering ☆69 · Updated 2 years ago
- Official repo for "LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs". ☆242 · Updated last year
- Ready-to-go containerized RAG service. Implemented with text-embedding-inference + Qdrant/LanceDB. ☆74 · Updated last year
- Client Code Examples, Use Cases and Benchmarks for the Enterprise h2oGPTe RAG-Based GenAI Platform ☆91 · Updated 5 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 · Updated last year
- Deployment of a light and full OpenAI API for production with vLLM, supporting /v1/embeddings with all embedding models. ☆44 · Updated last year
- ☆415 · Updated last year
- [EMNLP 2024: Demo Oral] RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation ☆311 · Updated last year
- Simple package to extract text with coordinates from programmatic PDFs ☆238 · Updated this week