mithril-security / blindbox
BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps
☆57 · Updated last year
Alternatives and similar repositories for blindbox:
Users interested in blindbox are comparing it to the libraries listed below.
- BlindAI Preview (no longer used; merged into the main blindai repo) ☆23 · Updated last year
- MirrorDataGenerator is a Python tool that generates synthetic data based on user-specified causal relations among features in the data. I… ☆21 · Updated 2 years ago
- This repository contains code to generate and preprocess Learning with Errors (LWE) data and implementations of four LWE attacks: uSVP, SA… ☆40 · Updated this week
- Zero-trust AI APIs for easy and private consumption of open-source LLMs ☆38 · Updated 7 months ago
- Confidential AI deployment with secure enclaves ☆505 · Updated 11 months ago
- GraphRag vs Embeddings ☆13 · Updated 7 months ago
- Supply chain security for ML ☆131 · Updated this week
- Generative AI Governance for Enterprises ☆15 · Updated 2 months ago
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆171 · Updated last year
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆48 · Updated this week
- Fiddler Auditor is a tool to evaluate language models. ☆176 · Updated last year
- Inferrd makes Machine Learning deployment easy and scalable. ☆19 · Updated 2 years ago
- ☆66 · Updated 3 months ago
- A daemon that makes a desktop OS accessible to AI agents ☆22 · Updated 2 weeks ago
- The Foundation Model Transparency Index ☆77 · Updated 9 months ago
- LLM-Powered Analyses of your GitHub Community using EvaDB ☆24 · Updated last year
- Your buddy in the (L)LM space. ☆63 · Updated 5 months ago
- A privacy layer around LLMs ☆31 · Updated 9 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆132 · Updated last year
- Security and compliance proxy for LLM APIs ☆46 · Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 9 months ago
- Thorn in a HaizeStack test for evaluating long-context adversarial robustness. ☆26 · Updated 7 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated last year
- Interactive LLM chatbot that constructs direct and transitive software dependencies as a knowledge graph and answers users' questions lev… ☆18 · Updated 2 months ago
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆19 · Updated this week
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆29 · Updated this week
- Writing Blog Posts with Generative Feedback Loops! ☆47 · Updated 11 months ago
- A fully in-browser privacy solution to make Conversational AI privacy-friendly ☆227 · Updated 4 months ago
- Python language bindings for smartnoise-core. ☆76 · Updated 2 years ago
- MER is a tool that identifies and highlights manipulative communication in text from human conversations and AI-generated responses. … ☆13 · Updated 7 months ago