mithril-security / blindbox
BlindBox is a tool for isolating and deploying applications inside Trusted Execution Environments, enabling privacy-by-design apps.
☆56 · Updated last year
Alternatives and similar repositories for blindbox:
Users interested in blindbox are comparing it to the libraries listed below.
- Zero-trust AI APIs for easy and private consumption of open-source LLMs ☆38 · Updated 6 months ago
- List of ML file formats ☆44 · Updated 10 months ago
- Confidential AI deployment with secure enclaves ☆504 · Updated 10 months ago
- Test Software for the Characterization of AI Technologies ☆236 · Updated this week
- Blindai Preview (no longer used, merged with the main repo blindai) ☆24 · Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 8 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 10 months ago
- Supply chain security for ML ☆126 · Updated this week
- This repository is for administrative documents for the CoSAI OASIS Open Project ☆44 · Updated last week
- An open-source compliance-centered evaluation framework for Generative AI models ☆123 · Updated last month
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆129 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆165 · Updated last year
- ☆70 · Updated 2 months ago
- A framework-less approach to robust agent development. ☆149 · Updated last week
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆43 · Updated last month
- Red-Teaming Language Models with DSPy ☆154 · Updated 9 months ago
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆47 · Updated last year
- CodeMind is a generic framework for evaluating inductive code reasoning of LLMs. It is equipped with a static analysis component that ena… ☆33 · Updated 5 months ago
- Machine Learning Attack Series ☆57 · Updated 8 months ago
- LLM for Email Spam Detection ☆86 · Updated last year
- This repository contains code to generate and preprocess Learning with Errors (LWE) data and implementations of four LWE attacks uSVP, SA… ☆40 · Updated 3 months ago
- Thorn in a HaizeStack test for evaluating long-context adversarial robustness. ☆26 · Updated 5 months ago
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper. ☆46 · Updated 7 months ago
- GoalChain for goal-oriented LLM conversation flows ☆67 · Updated last month
- Open source platform for the privacy-preserving machine learning lifecycle ☆17 · Updated last year
- Generative AI Governance for Enterprises ☆14 · Updated last month
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆27 · Updated this week
- Interactive LLM Chatbot that constructs direct and transitive software dependencies as a knowledge graph and answers user's questions lev… ☆16 · Updated 2 weeks ago
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆18 · Updated 2 weeks ago
- The Foundation Model Transparency Index ☆74 · Updated 8 months ago