nixiesearch / onnx-convert
An ONNX converter script focused on embedding models
☆31 · Updated 3 months ago
Alternatives and similar repositories for onnx-convert:
Users interested in onnx-convert are comparing it to the libraries listed below.
- Ready-to-go containerized RAG service. Implemented with text-embedding-inference + Qdrant/LanceDB. ☆64 · Updated 3 months ago
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated 6 months ago
- Efficient few-shot learning with cross-encoders. ☆51 · Updated last year
- 🕹️ Performance comparison of MLOps engines, frameworks, and languages on mainstream AI models. ☆136 · Updated 8 months ago
- A Python library to chunk/group your texts based on semantic similarity. ☆95 · Updated 9 months ago
- High-level library for batched embeddings generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 5 months ago
- Baguetter is a flexible, efficient, and hackable search engine library implemented in Python. It's designed for quickly benchmarking, imp… ☆174 · Updated 7 months ago
- Vector database with support for late interaction and token-level embeddings. ☆54 · Updated 6 months ago
- Lightweight wrapper for the independent implementation of SPLADE++ models for search & retrieval pipelines. Models and library created by… ☆30 · Updated 7 months ago
- C++ inference wrappers for running blazing-fast embedding services on your favourite serverless platform, like AWS Lambda. By Prithivi Da, PRs welc… ☆22 · Updated last year
- ☆47 · Updated last year
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆130 · Updated 4 months ago
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆66 · Updated 5 months ago
- Chunk your text more accurately using gpt4o-mini ☆44 · Updated 8 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆61 · Updated last year
- ☆67 · Updated 4 months ago
- XTR/WARP is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR. ☆123 · Updated 6 months ago
- Evaluation of the BM42 sparse indexing algorithm ☆65 · Updated 9 months ago
- Inference engine for GLiNER models, in Rust ☆44 · Updated 3 weeks ago
- This is the repo for the container that holds the models for the text2vec-transformers module ☆51 · Updated 3 weeks ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆74 · Updated 6 months ago
- Python API for https://vespa.ai, the open big data serving engine ☆120 · Updated last week
- A framework for evaluating function calls made by LLMs ☆37 · Updated 8 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers.