justrach / satya
SATYA - High-Performance Data Validation for Python, written in Rust
☆21 · Updated 3 weeks ago
Alternatives and similar repositories for satya
Users who are interested in satya are comparing it to the libraries listed below.
- ☆210 · Updated 4 months ago
- Check for data drift between two OpenAI multi-turn chat JSONL files. ☆38 · Updated last year
- Cray-LM unified training and inference stack. ☆22 · Updated 9 months ago
- ⚡ Bhumi – The fastest AI inference client for Python, built with Rust for unmatched speed, efficiency, and scalability 🚀 ☆62 · Updated last month
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models. ☆88 · Updated this week
- Multi-backend recommender systems with Keras 3 ☆146 · Updated 2 weeks ago
- ☆80 · Updated last year
- Seamless interface for using PyTorch distributed with Jupyter notebooks ☆53 · Updated 2 months ago
- Iterate fast on your RAG pipelines ☆23 · Updated 4 months ago
- Simple UI for debugging correlations of text embeddings ☆299 · Updated 5 months ago
- High-Performance Engine for Multi-Vector Search ☆181 · Updated 2 weeks ago
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆151 · Updated 4 months ago
- A tool for benchmarking LLMs on Modal ☆43 · Updated 2 months ago
- Framework for building data agent workflows ☆84 · Updated last year
- ☆36 · Updated 6 months ago
- XTR/WARP (SIGIR'25) is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR. ☆169 · Updated 6 months ago
- Efficient vector database for hundreds of millions of embeddings. ☆208 · Updated last year
- A Lightweight Library for AI Observability ☆251 · Updated 8 months ago
- Pre-train Static Word Embeddings ☆90 · Updated 2 months ago
- Baguetter is a flexible, efficient, and hackable search engine library implemented in Python. It's designed for quickly benchmarking, imp… ☆190 · Updated last year
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da ☆117 · Updated 7 months ago
- ☆159 · Updated 11 months ago
- ☆86 · Updated 4 months ago
- ☆51 · Updated 9 months ago
- Fine-tune an LLM to perform batch inference and online serving. ☆113 · Updated 5 months ago
- A Python wrapper around Hugging Face's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated last month
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆58 · Updated last month
- Framework for building and maintaining self-updating prompts for LLMs ☆64 · Updated last year
- Generalist and Lightweight Model for Text Classification ☆164 · Updated 5 months ago
- ☆84 · Updated last year