titanml / takeoff-community
TitanML Takeoff Server is an optimization, compression, and deployment platform that makes state-of-the-art machine learning models accessible to everyone.
☆114 · Updated last year
Alternatives and similar repositories for takeoff-community
Users interested in takeoff-community are comparing it to the libraries listed below.
- Multi-threaded matrix multiplication and cosine similarity calculations for dense and sparse matrices. Appropriate for calculating the K … ☆86 · Updated last year
- ☆198 · Updated last year
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆59 · Updated this week
- Large Language Model (LLM) Inference API and Chatbot ☆127 · Updated last year
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated 2 years ago
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆106 · Updated last year
- 💙 Unstructured Data Connectors for Haystack 2.0 ☆17 · Updated 2 years ago
- Large Language Model Hosting Container ☆90 · Updated 3 months ago
- Lightweight wrapper for the independent implementation of SPLADE++ models for search & retrieval pipelines. Models and Library created by… ☆34 · Updated last year
- 📚 Datasets and models for instruction-tuning ☆238 · Updated 2 years ago
- High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized indexes processing ⚡ ☆69 · Updated last month
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆139 · Updated last year
- 🤝 Trade any tensors over the network ☆30 · Updated 2 years ago
- Command Line Interface for Hugging Face Inference Endpoints ☆66 · Updated last year
- ☆48 · Updated 2 years ago
- ☆80 · Updated last year
- Python client library for improving your LLM app accuracy ☆97 · Updated 10 months ago
- Low latency, High Accuracy, Custom Query routers for Humans and Agents. Built by Prithivi Da ☆119 · Updated 9 months ago
- PanML is a high level generative AI/ML development and analysis library designed for ease of use and fast experimentation. ☆117 · Updated 2 years ago
- experiments with inference on llama ☆103 · Updated last year
- Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23) ☆82 · Updated 10 months ago
- Hassle-free ML Pipelines on Kubernetes ☆39 · Updated 2 years ago
- Framework for building and maintaining self-updating prompts for LLMs ☆65 · Updated last year
- A Lightweight Library for AI Observability ☆253 · Updated 10 months ago
- ☆74 · Updated last year
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆32 · Updated 3 months ago
- ☆89 · Updated 2 years ago
- ☆210 · Updated 6 months ago
- LangChain chat model abstractions for dynamic failover, load balancing, chaos engineering, and more! ☆84 · Updated last year
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆81 · Updated last year