mystic-ai / pipeline
Pipeline is an open-source Python SDK for building AI/ML workflows.
☆130 · Updated 2 months ago
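As an illustration of the workflow pattern that SDKs in this space provide, here is a generic sketch of composable, chained processing steps. This is illustrative only, written in plain Python; it is not pipeline's actual API, and the `Workflow` class and its methods are hypothetical names.

```python
# Generic sketch of the chained-step pattern a workflow SDK provides.
# Hypothetical illustration; not the mystic-ai/pipeline API.

from typing import Any, Callable, List

class Workflow:
    """Compose named processing steps and run them in order."""

    def __init__(self) -> None:
        self.steps: List[Callable[[Any], Any]] = []

    def step(self, fn: Callable[[Any], Any]) -> "Workflow":
        # Register a step and return self so calls can be chained.
        self.steps.append(fn)
        return self

    def run(self, value: Any) -> Any:
        # Feed each step's output into the next step.
        for fn in self.steps:
            value = fn(value)
        return value

wf = Workflow().step(str.lower).step(str.split)
print(wf.run("Hello Pipeline World"))  # ['hello', 'pipeline', 'world']
```

Real workflow SDKs add serialization, remote execution, and typed inputs on top of this basic composition idea.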
Related projects
Alternatives and complementary repositories for pipeline
- Open-source multi-modal chat interface ☆15 · Updated 6 months ago
- Run GPU inference and training jobs on serverless infrastructure that scales with you. ☆98 · Updated 5 months ago
- Python client library for improving your LLM app accuracy ☆96 · Updated this week
- LLM finetuning ☆42 · Updated last year
- TitanML Takeoff Server is an optimization, compression, and deployment platform that makes state-of-the-art machine learning models access… ☆114 · Updated 10 months ago
- 🐍 | Python library for the RunPod API and serverless worker SDK ☆183 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆188 · Updated last month
- An HTTP serving framework by Banana ☆98 · Updated 11 months ago
- Google TPU optimizations for transformers models ☆75 · Updated this week
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ ☆63 · Updated last year
- ⚡️ A fast and flexible PyTorch inference server that runs locally, on any cloud or AI HW. ☆129 · Updated 5 months ago
- A template to run LLaMA in Cog ☆63 · Updated last year
- Natural language interfaces powered by LLMs ☆91 · Updated 3 months ago
- Tutorial to get started with SkyPilot! ☆56 · Updated 6 months ago
- Gradio UI for a Cog API ☆64 · Updated 7 months ago
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated last year
- A desktop for AI agents ☆28 · Updated 2 weeks ago
- DiffusionWithAutoscaler ☆29 · Updated 7 months ago
- A feed of trending repos/models from GitHub, Replicate, HuggingFace, and Reddit ☆108 · Updated 2 months ago
- Command-line script for running inference with models such as falcon-7b-instruct ☆75 · Updated last year
- Code generation with LLMs 🔗 ☆51 · Updated last year
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving ☆54 · Updated 7 months ago
- Replace expensive LLM calls with finetunes automatically ☆62 · Updated 9 months ago
- [WIP] AI try-on plugin for Chrome ☆25 · Updated 8 months ago
- Public reports detailing responses to sets of prompts by Large Language Models ☆26 · Updated last year