titanml / takeoff-community
TitanML Takeoff Server is an optimization, compression, and deployment platform that makes state-of-the-art machine learning models accessible to everyone.
☆114 · Updated last year
Alternatives and similar repositories for takeoff-community
Users interested in takeoff-community are comparing it to the libraries listed below.
- Machine Learning Serving focused on GenAI with simplicity as the top priority. ☆59 · Updated last month
- Datasets and models for instruction-tuning ☆238 · Updated last year
- Multi-threaded matrix multiplication and cosine similarity calculations for dense and sparse matrices. Appropriate for calculating the K… ☆83 · Updated 7 months ago
- Trade any tensors over the network ☆30 · Updated last year
- ☆48 · Updated last year
- Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆139 · Updated last year
- ☆199 · Updated last year
- Command Line Interface for Hugging Face Inference Endpoints ☆66 · Updated last year
- Mistral + Haystack: build RAG pipelines that rock ☆105 · Updated last year
- Large Language Model (LLM) Inference API and Chatbot ☆126 · Updated last year
- Low latency, High Accuracy, Custom Query routers for Humans and Agents. Built by Prithivi Da ☆113 · Updated 4 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- Chunk your text using GPT-4o mini more accurately ☆44 · Updated last year
- A Lightweight Library for AI Observability ☆250 · Updated 6 months ago
- ☆87 · Updated last year
- Using LlamaIndex with Ray for productionizing LLM applications ☆71 · Updated 2 years ago
- Hassle-free ML Pipelines on Kubernetes ☆39 · Updated 2 years ago
- LangChain chat model abstractions for dynamic failover, load balancing, chaos engineering, and more! ☆82 · Updated last year
- ☆174 · Updated last year
- Unstructured Data Connectors for Haystack 2.0 ☆17 · Updated last year
- Recipes and resources for building, deploying, and fine-tuning generative AI with Fireworks. ☆123 · Updated 2 weeks ago
- Additional packages (components, document stores, and the like) to extend the capabilities of Haystack ☆161 · Updated last week
- ☆75 · Updated last year
- ☆80 · Updated last year
- High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized indexes processing ☆66 · Updated 9 months ago
- Python client library for improving your LLM app accuracy ☆98 · Updated 6 months ago
- Framework for building and maintaining self-updating prompts for LLMs ☆64 · Updated last year
- The backend behind the LLM-Perf Leaderboard ☆10 · Updated last year
- Lightweight wrapper for the independent implementation of SPLADE++ models for search & retrieval pipelines. Models and library created by… ☆32 · Updated last year
- Experiments with inference on llama ☆104 · Updated last year