instill-ai / instill-core
Instill Core is a full-stack AI infrastructure tool for data, model and pipeline orchestration, designed to streamline every aspect of building versatile AI-first applications
★2,209 · Updated this week
Alternatives and similar repositories for instill-core:
Users interested in instill-core are comparing it to the libraries listed below.
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… ★2,763 · Updated 5 months ago
- The production toolkit for LLMs. Observability, prompt management and evaluations. ★1,138 · Updated this week
- Adding guardrails to large language models. ★4,379 · Updated this week
- The open-source visual AI programming environment and TypeScript library ★3,289 · Updated this week
- Developer-friendly, serverless vector database for AI applications. Easily add long-term memory to your LLM apps! ★5,290 · Updated this week
- SkyPilot: Run AI and batch jobs on any infra (Kubernetes or 12+ clouds). Get unified execution, cost savings, and high GPU availability v… ★7,057 · Updated this week
- Run AI-agents with an API ★5,504 · Updated 3 months ago
- Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets ★4,195 · Updated this week
- Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory. ★2,136 · Updated this week
- H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/ ★4,111 · Updated this week
- The AI Datastore for Schemas, BLOBs, and Predictions. Use with your apps or integrate built-in Human Supervision, Data Workflow, and UI C… ★1,854 · Updated 2 months ago
- The simplest way to serve AI/ML models in production ★936 · Updated this week
- RayLLM - LLMs on Ray ★1,248 · Updated 7 months ago
- TensorZero creates a feedback loop for optimizing LLM applications, turning production data into smarter, faster, and cheaper models. ★1,989 · Updated this week
- Build Conversational AI in minutes ★7,911 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ★2,310 · Updated this week
- Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Llam… ★7,878 · Updated this week
- Postgres with GPUs for ML/AI apps. ★6,111 · Updated this week
- A library of data loaders for LLMs made by the community -- to be used with LlamaIndex and/or LangChain ★3,464 · Updated 10 months ago
- Interactively explore unstructured datasets from your dataframe. ★1,141 · Updated last week
- dstack is a lightweight, open-source alternative to Kubernetes & Slurm, simplifying AI container orchestration with multi-cloud & on-prem… ★1,652 · Updated this week
- Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks. ★5,928 · Updated this week
- A language for constraint-guided and efficient LLM programming. ★3,772 · Updated 7 months ago
- Run any open-source LLMs, such as Llama, Mistral, as OpenAI compatible API endpoint in the cloud. ★10,391 · Updated last week
- Retrieval Augmented Generation (RAG) chatbot powered by Weaviate ★6,646 · Updated this week
- The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM Observability all in one place. ★1,880 · Updated this week
- Simple, safe way to store and distribute tensors ★3,012 · Updated last week
- Semantic cache for LLMs. Fully integrated with LangChain and llama_index. ★7,346 · Updated 4 months ago
- Large Language Model Text Generation Inference ★9,601 · Updated this week