aishwaryaprabhat / BigBertha
BigBertha is an architecture design that demonstrates how automated LLMOps (Large Language Model Operations) can be achieved on any Kubernetes cluster using open-source, container-native technologies.
⭐28 · Updated last year
Alternatives and similar repositories for BigBertha
Users interested in BigBertha are comparing it to the libraries listed below.
- Using LlamaIndex with Ray for productionizing LLM applications ⭐71 · Updated 2 years ago
- ⭐16 · Updated last year
- RAG orchestration framework ⛵️ ⭐201 · Updated last month
- A curated list of awesome open source tools and commercial products for monitoring data quality, monitoring model performance, and profil… ⭐84 · Updated last year
- ⭐75 · Updated last year
- ⭐11 · Updated 2 years ago
- Product analytics for AI Assistants ⭐154 · Updated 3 months ago
- Pebblo enables developers to safely load data and promote their Gen AI app to deployment ⭐148 · Updated 2 months ago
- Additional packages (components, document stores and the like) to extend the capabilities of Haystack ⭐164 · Updated this week
- A Python package that provides a custom Streamlit connection to query data from Weaviate, the AI-native vector database ⭐55 · Updated last year
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… ⭐114 · Updated last year
- Build Enterprise RAG (Retrieval Augmented Generation) Pipelines to tackle various Generative AI use cases with LLMs by simply plugging co… ⭐113 · Updated last year
- Repository for open inference protocol specification ⭐59 · Updated 3 months ago
- Automated knowledge graph creation SDK ⭐122 · Updated 9 months ago
- A collection of examples and tutorials for Qdrant vector search engine ⭐183 · Updated 2 months ago
- Code and notebooks associated with my blog posts ⭐65 · Updated 8 months ago
- Research notes and extra resources for all the work at explodinggradients.com ⭐24 · Updated 5 months ago
- Retrieval Augmented Generation applications ⭐26 · Updated last year
- Generate Tools and Toolkits from any Python SDK -- no extra code required ⭐53 · Updated 9 months ago
- Leverage your LangChain trace data for fine tuning ⭐44 · Updated last year
- MLFlow Deployment Plugin for Ray Serve ⭐46 · Updated 3 years ago
- A repository that showcases how you can use ZenML with Git ⭐69 · Updated 3 weeks ago
- Adding NeMo Guardrails to a LlamaIndex RAG pipeline ⭐38 · Updated last year
- Fine-tune an LLM to perform batch inference and online serving ⭐112 · Updated 3 months ago
- An example application built with LangChain CLI and LangServe ⭐77 · Updated last year
- Tools and utilities for operating Metaflow in production ⭐60 · Updated this week
- A specification for OpenInference, a semantic mapping of ML inferences ⭐47 · Updated last year
- ⭐80 · Updated last year
- A Lightweight Library for AI Observability ⭐250 · Updated 6 months ago
- ⭐81 · Updated 9 months ago