mithril-security / blindbox
BlindBox is a tool for isolating and deploying applications inside Trusted Execution Environments (TEEs), enabling privacy-by-design apps.
☆57 · Updated last year
Related projects
Alternatives and complementary repositories for blindbox
- Confidential AI deployment with secure enclaves ☆502 · Updated 8 months ago
- Generative AI Governance for Enterprises ☆13 · Updated last month
- Red-Teaming Language Models with DSPy ☆142 · Updated 7 months ago
- Blindai Preview (no longer used, merged with the main repo blindai) ☆24 · Updated last year
- Creating the tools and data sets necessary to evaluate vulnerabilities in LLMs ☆21 · Updated 2 years ago
- A JupyterLab extension to evaluate the security of your Jupyter environment ☆39 · Updated last year
- ATLAS tactics, techniques, and case studies data ☆49 · Updated last month
- Your buddy in the (L)LM space ☆63 · Updated 2 months ago
- A simple framework for privacy-friendly data science collaboration ☆170 · Updated last year
- Explore AI Supply Chain Risk with the AI Risk Database ☆50 · Updated 6 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs) ☆107 · Updated 8 months ago
- Test Software for the Characterization of AI Technologies ☆226 · Updated this week
- Code to generate and preprocess Learning with Errors (LWE) data, with implementations of four LWE attacks: uSVP, SA… ☆24 · Updated 3 weeks ago
- The Foundation Model Transparency Index ☆71 · Updated 5 months ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆92 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks ☆47 · Updated 7 months ago
- Accompanying code and SEP dataset for the paper "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" ☆44 · Updated 5 months ago
- Administrative documents for the CoSAI OASIS Open Project ☆40 · Updated last week
- MirrorDataGenerator is a Python tool that generates synthetic data based on user-specified causal relations among features in the data. I… ☆19 · Updated 2 years ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs) ☆121 · Updated 10 months ago
- List of ML file formats ☆39 · Updated 8 months ago
- Security and compliance proxy for LLM APIs ☆45 · Updated last year
- LMQL implementation of tree of thoughts ☆33 · Updated 9 months ago
- Hassle-free ML Pipelines on Kubernetes ☆38 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆161 · Updated last year
- Writing Blog Posts with Generative Feedback Loops! ☆43 · Updated 8 months ago