titanml / takeoff-community
TitanML Takeoff Server is an optimization, compression, and deployment platform that makes state-of-the-art machine learning models accessible to everyone.
☆114 · Updated last year
Alternatives and similar repositories for takeoff-community
Users interested in takeoff-community are comparing it to the libraries listed below.
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆137 · Updated 11 months ago
- ☆199 · Updated last year
- Multi-threaded matrix multiplication and cosine similarity calculations for dense and sparse matrices. Appropriate for calculating the K … ☆83 · Updated 6 months ago
- ☆48 · Updated last year
- 📚 Datasets and models for instruction-tuning ☆238 · Updated last year
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆105 · Updated last year
- Low latency, High Accuracy, Custom Query routers for Humans and Agents. Built by Prithivi Da ☆105 · Updated 3 months ago
- Fine-tune an LLM to perform batch inference and online serving. ☆112 · Updated last month
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆59 · Updated last week
- Large Language Model (LLM) Inference API and Chatbot ☆126 · Updated last year
- 🤝 Trade any tensors over the network ☆30 · Updated last year
- Data extraction with LLM on CPU ☆68 · Updated last year
- Find the optimal model serving solution for 🤗 Hugging Face models 🚀 ☆43 · Updated last year
- ☆87 · Updated last year
- Lightweight wrapper for the independent implementation of SPLADE++ models for search & retrieval pipelines. Models and Library created by… ☆31 · Updated 10 months ago
- ☆75 · Updated last year
- ☆77 · Updated last year
- 💙 Unstructured Data Connectors for Haystack 2.0 ☆17 · Updated last year
- A Lightweight Library for AI Observability ☆246 · Updated 4 months ago
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated last year
- ☆168 · Updated last year
- The backend behind the LLM-Perf Leaderboard ☆10 · Updated last year
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated 2 months ago
- LangChain chat model abstractions for dynamic failover, load balancing, chaos engineering, and more! ☆81 · Updated last year
- Check for data drift between two OpenAI multi-turn chat JSONL files. ☆37 · Updated last year
- Command Line Interface for Hugging Face Inference Endpoints ☆66 · Updated last year
- Leverage your LangChain trace data for fine-tuning ☆41 · Updated 11 months ago
- Framework for building and maintaining self-updating prompts for LLMs ☆64 · Updated last year
- Build Enterprise RAG (Retrieval-Augmented Generation) Pipelines to tackle various Generative AI use cases with LLMs by simply plugging co… ☆112 · Updated 11 months ago
- High-level library for batched embeddings generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 8 months ago