foundation-model-stack / fms-guardrails-orchestrator
Guardrails orchestration server for applying various detections to text generation input and output.
☆27 · Updated last week
Alternatives and similar repositories for fms-guardrails-orchestrator
Users interested in fms-guardrails-orchestrator are comparing it to the libraries listed below.
- Synthetic Data Generation Toolkit for LLMs ☆76 · Updated last week
- Taxonomy tree that will allow you to create models tuned with your data ☆287 · Updated 3 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 3 months ago
- Build Research and RAG agents with Granite on your laptop ☆149 · Updated 2 months ago
- Repository for open inference protocol specification ☆61 · Updated 7 months ago
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen… ☆216 · Updated 2 weeks ago
- InstructLab Training Library - Efficient Fine-Tuning with Message-Format Data ☆44 · Updated this week
- Python framework which enables you to transform how a user calls or infers an IBM Granite model and how the output from the model is retu… ☆51 · Updated this week
- Granite Snack Cookbook -- easily consumable recipes (Python notebooks) that showcase the capabilities of the Granite models ☆329 · Updated last week
- Fast serverless LLM inference, in Rust. ☆108 · Updated last month
- Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench ☆194 · Updated 7 months ago
- Python SDK for Llama Stack ☆191 · Updated this week
- Template to quickly start working with the BeeAI Framework in Python. ☆28 · Updated 3 weeks ago
- ☆177 · Updated this week
- AI Agents, LLM Fine-tuning, Developer Productivity, Governance, IBM watsonx ☆45 · Updated last month
- Self-host LLMs with vLLM and BentoML ☆161 · Updated 3 weeks ago
- Examples for building and running LLM services and applications locally with Podman ☆185 · Updated 4 months ago
- Auto-tuning for vLLM: getting the best performance out of your LLM deployment (vLLM + GuideLLM + Optuna) ☆25 · Updated this week
- Python library for Evaluation ☆16 · Updated last week
- Code samples from our Python agents tutorial ☆109 · Updated 9 months ago
- ☆51 · Updated 4 months ago
- An HTTP service intended as a backend for an LLM that can run arbitrary pieces of Python code. ☆69 · Updated 3 months ago
- ☆267 · Updated 5 months ago
- Prompt Declaration Language (PDL) is a declarative prompt programming language. ☆267 · Updated this week
- Open source project for data preparation for GenAI applications ☆867 · Updated last week
- Framework for deploying configurable AI agents with real-time streaming and tool execution. ☆38 · Updated 3 months ago
- Run the entire bee application stack using docker-compose ☆153 · Updated 9 months ago
- An implementation of a multi-agent task management system that enables hierarchical agent coordination and task execution. ☆50 · Updated 5 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆123 · Updated 2 months ago
- CUGA is an open-source generalist agent for the enterprise, supporting complex task execution on web and APIs, OpenAPI/MCP integrations, … ☆388 · Updated this week