benbrandt / text-splitter
Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from Rust and Python.
⭐ 492 · Updated this week
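As a quick orientation, here is a minimal sketch of the character-based splitting API. It assumes a recent text-splitter release in which `TextSplitter::new` accepts the maximum chunk size directly and `chunks` returns an iterator of string slices; constructor details have shifted between versions (some wrap the size in a `ChunkConfig`), so treat this as illustrative rather than definitive.

```rust
// Assumed dependency in Cargo.toml: text-splitter = "0.x"
use text_splitter::TextSplitter;

fn main() {
    // Maximum chunk size, measured in characters by default; the crate can
    // also measure length in tokens via optional tokenizer integrations.
    let max_characters = 1000;
    let splitter = TextSplitter::new(max_characters);

    let text = "Your long document text goes here...";

    // Each chunk respects semantic boundaries (sentences, words) while
    // staying at or under the configured size.
    for chunk in splitter.chunks(text) {
        println!("{} chars: {chunk}", chunk.chars().count());
    }
}
```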
Alternatives and similar repositories for text-splitter
Users interested in text-splitter are comparing it to the libraries listed below.
- Highly Performant, Modular, Memory Safe and Production-ready Inference, Ingestion and Indexing built in Rust 🦀 ⭐ 742 · Updated last week
- Rust library for generating vector embeddings and reranking. Re-write of qdrant/fastembed. ⭐ 627 · Updated last month
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server. ⭐ 496 · Updated this week
- Fast, streaming indexing, query, and agentic LLM applications in Rust ⭐ 591 · Updated this week
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ⭐ 236 · Updated 2 months ago
- Neural search for websites, docs, articles - online! ⭐ 143 · Updated 2 months ago
- LLM Orchestrator built in Rust ⭐ 283 · Updated last year
- Inference engine for GLiNER models, in Rust ⭐ 74 · Updated 3 months ago
- A fast, lightweight and easy-to-use Python library for splitting text into semantically meaningful chunks. ⭐ 390 · Updated 2 months ago
- Faster structured generation ⭐ 254 · Updated last week
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datasets… ⭐ 210 · Updated 2 weeks ago
- Super-fast Structured Outputs ⭐ 561 · Updated last week
- A realtime serving engine for Data-Intensive Generative AI Applications ⭐ 1,060 · Updated this week
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ⭐ 59 · Updated 5 months ago
- A cross-platform browser ML framework. ⭐ 718 · Updated 10 months ago
- Rust client for Qdrant vector search engine ⭐ 344 · Updated last month
- 🦀 A curated list of Rust tools, libraries, and frameworks for working with LLMs, GPT, AI ⭐ 485 · Updated last year
- Rust client for the Hugging Face Hub aiming for a minimal subset of the features of the `huggingface-hub` Python package ⭐ 234 · Updated 3 weeks ago
- High-level, optionally asynchronous Rust bindings to llama.cpp ⭐ 232 · Updated last year
- Built for demanding AI workflows, this gateway offers low-latency, provider-agnostic access, ensuring your AI applications run smoothly a… ⭐ 78 · Updated 4 months ago
- In-memory vector store with efficient read and write performance for semantic caching and retrieval system. Redis for Semantic Caching. ⭐ 372 · Updated 10 months ago
- OpenAI-compatible API for serving the LLAMA-2 model ⭐ 218 · Updated 2 years ago
- Extract core logic from qdrant and make it available as a library. ⭐ 61 · Updated last year
- ⭐ 237 · Updated 4 months ago
- Ready-made tokenizer library for working with GPT and tiktoken (see the token-counting sketch after this list) ⭐ 341 · Updated 2 weeks ago
- A Python library to define and validate data types in Docling. ⭐ 194 · Updated this week
- Fast Semantic Text Deduplication & Filtering ⭐ 816 · Updated 2 weeks ago
- Self-hosted web UI for Qdrant ⭐ 338 · Updated this week
- Rust bindings to https://github.com/k2-fsa/sherpa-onnx ⭐ 230 · Updated last week
- Modern, fast document parser written in 🦀 ⭐ 521 · Updated last week
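For the token-counting angle mentioned in the tiktoken entry above (and in text-splitter's own description), a hedged sketch follows. It reuses the character-based splitter from the earlier example and measures each chunk with `tiktoken-rs`, whose `cl100k_base()` constructor and `encode_with_special_tokens` method are the only tokenizer calls assumed here; text-splitter also offers direct tokenizer-based chunk sizing, but that API differs across versions, so it is not shown.

```rust
// Assumed dependencies in Cargo.toml: text-splitter = "0.x", tiktoken-rs = "0.x"
use text_splitter::TextSplitter;
use tiktoken_rs::cl100k_base;

fn main() {
    // Character-based splitting, as in the earlier sketch.
    let splitter = TextSplitter::new(800);

    // cl100k_base is the BPE vocabulary used by GPT-3.5/GPT-4-era models.
    let bpe = cl100k_base().expect("failed to load cl100k_base vocabulary");

    let text = "Your long document text goes here...";

    for chunk in splitter.chunks(text) {
        // The number of token ids is what an OpenAI-style model would count
        // against its context window for this chunk.
        let token_count = bpe.encode_with_special_tokens(chunk).len();
        println!("{token_count} tokens in a {}-character chunk", chunk.chars().count());
    }
}
```

A common rule of thumb for English text is roughly four characters per token, but measuring with the actual tokenizer, as above, avoids surprises when chunks feed a token-limited model.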