mystic-ai / pipeline
Pipeline is an open-source Python SDK for building AI/ML workflows.
☆138 · Updated last year
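For context, pipeline composes ordinary Python functions into runnable compute graphs via a decorator-plus-builder pattern. The snippet below is a minimal sketch of that pattern as documented in older releases of the SDK; the exact names (`pipeline_function`, `Variable`, `Pipeline.get_pipeline`) are recalled from those docs and may differ in current versions.

```python
# Minimal sketch of pipeline's builder pattern (older-release API; names may differ).
from pipeline import Pipeline, Variable, pipeline_function


@pipeline_function
def square(a: float) -> float:
    # An ordinary Python function promoted to a pipeline step by the decorator.
    return a ** 2


with Pipeline("maths") as builder:
    flt = Variable(type_class=float, is_input=True)  # declare a typed pipeline input
    builder.add_variables(flt)
    builder.output(square(flt))  # wire the step's result to the pipeline output

runnable = Pipeline.get_pipeline("maths")
print(runnable.run(5.0))  # -> 25.0
```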
Alternatives and similar repositories for pipeline
Users interested in pipeline are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆52 · Updated last year
- ☆40 · Updated 5 months ago
- Python client library for improving your LLM app accuracy ☆97 · Updated 8 months ago
- LLM finetuning ☆41 · Updated 2 years ago
- ☆170 · Updated 8 months ago
- Examples of models deployable with Truss ☆206 · Updated this week
- Tutorial to get started with SkyPilot! ☆57 · Updated last year
- An open-source, cloud-native serving framework for large multi-modal models (LMMs). ☆164 · Updated 2 years ago
- DiffusionWithAutoscaler ☆29 · Updated last year
- ☆197 · Updated last year
- ⚡️ A fast and flexible PyTorch inference server that runs locally, on any cloud or AI HW. ☆145 · Updated last year
- ☆64 · Updated 7 months ago
- Streamlit Web UI for AGiXT ☆28 · Updated 4 months ago
- An endpoint server for efficiently serving quantized open-source LLMs for code. ☆57 · Updated 2 years ago
- Run GPU inference and training jobs on serverless infrastructure that scales with you. ☆102 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 · Updated 2 years ago
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ ☆63 · Updated last year
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… ☆114 · Updated last year
- Record and replay LLM interactions for langchain ☆82 · Updated last year
- Recipes and resources for building, deploying, and fine-tuning generative AI with Fireworks. ☆124 · Updated last week
- ☆50 · Updated 2 years ago
- Develop, evaluate and monitor LLM applications at scale ☆98 · Updated 10 months ago
- ☆53 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and gradio ☆37 · Updated 2 years ago
- Command-line script for inferencing from models such as MPT-7B-Chat ☆99 · Updated 2 years ago
- ☆67 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆69 · Updated last year
- Web page with political compass quiz results for open LLMs ☆35 · Updated last year
- A high-performance batching router that optimises for maximum throughput on text inference workloads ☆16 · Updated 2 years ago
- AI-based search done right ☆20 · Updated 3 weeks ago