aishwaryaprabhat / BigBertha
BigBertha is an architecture design that demonstrates how automated LLMOps (Large Language Models Operations) can be achieved on any Kubernetes cluster using open source container-native technologies.
★28 · Updated 2 years ago
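As a rough illustration of the pattern the description refers to (automation steps running as containers on a Kubernetes cluster), here is a minimal, hypothetical Python sketch that submits a containerized fine-tuning step as a Kubernetes Job via the official `kubernetes` client. The image name, namespace, and entrypoint are placeholders and are not taken from the BigBertha repository.

```python
# Minimal sketch (not BigBertha's actual code): submit a one-off fine-tuning
# Job to a Kubernetes cluster using the official kubernetes Python client.
# The image, namespace, and entrypoint below are hypothetical placeholders.
from kubernetes import client, config


def submit_finetune_job(job_name: str = "llm-finetune",
                        image: str = "example.com/llm-finetune:latest",
                        namespace: str = "llmops") -> None:
    """Create a Kubernetes Job that runs a containerized fine-tuning step."""
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

    container = client.V1Container(
        name="finetune",
        image=image,
        args=["python", "finetune.py"],  # hypothetical entrypoint inside the image
    )
    pod_spec = client.V1PodSpec(restart_policy="Never", containers=[container])
    template = client.V1PodTemplateSpec(spec=pod_spec)
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=job_name),
        spec=client.V1JobSpec(template=template, backoff_limit=1),
    )

    # Submit the Job; the cluster schedules and runs the container to completion.
    client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)


if __name__ == "__main__":
    submit_finetune_job()
```

In a fuller LLMOps setup this submission would typically be triggered automatically (for example by a monitoring alert or a pipeline controller) rather than run by hand.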
Alternatives and similar repositories for BigBertha
Users interested in BigBertha are comparing it to the libraries listed below.
- Using LlamaIndex with Ray for productionizing LLM applications ★71 · Updated 2 years ago
- Code and notebooks associated with my blogposts ★68 · Updated last month
- RAG orchestration framework ★202 · Updated 6 months ago
- A Python package that provides a custom Streamlit connection to query data from Weaviate, the AI-native vector database ★57 · Updated last year
- Tools and utilities for operating Metaflow in production ★67 · Updated 2 months ago
- Product analytics for AI Assistants ★157 · Updated 7 months ago
- Helm charts to deploy Weaviate to k8s ★65 · Updated 2 months ago
- Leverage your LangChain trace data for fine tuning ★46 · Updated last year
- ★75 · Updated last year
- A curated list of awesome open source tools and commercial products for monitoring data quality, monitoring model performance, and profil… ★90 · Updated last year
- Pebblo enables developers to safely load data and promote their Gen AI app to deployment ★149 · Updated 6 months ago
- Research notes and extra resources for all the work at explodinggradients.com ★25 · Updated 10 months ago
- ★164 · Updated last week
- Finetune LLMs on K8s by using Runbooks ★170 · Updated last year
- Adding NeMo Guardrails to a LlamaIndex RAG pipeline ★41 · Updated last year
- LangChain chat model abstractions for dynamic failover, load balancing, chaos engineering, and more! ★84 · Updated last year
- Repository hosting Langchain helm charts. ★80 · Updated this week
- Chassis turns machine learning models into portable container images that can run just about anywhere. ★86 · Updated last year
- Additional packages (components, document stores and the like) to extend the capabilities of Haystack ★178 · Updated this week
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… ★114 · Updated last year
- A repository that showcases how you can use ZenML with Git ★73 · Updated last week
- Applying Evaluation Driven Development (EDD) to aid in the design decisions of RAG pipelines ★31 · Updated 2 years ago
- MLflow Deployment Plugin for Ray Serve ★46 · Updated 3 years ago
- A collection of examples and tutorials for the Qdrant vector search engine ★201 · Updated 3 months ago
- Automated knowledge graph creation SDK ★122 · Updated last year
- A lightweight library for AI observability ★253 · Updated 11 months ago
- Use NVIDIA NIMs with Haystack pipelines ★32 · Updated last year
- Dataset registry DVC project ★85 · Updated last year
- Self-host LLMs with vLLM and BentoML ★163 · Updated this week
- Flyte Documentation ★86 · Updated 9 months ago