mithril-security / blindbox
BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps.
☆57 · Updated 10 months ago
Related projects:
- Blindai Preview (no longer used; merged into the main blindai repo) ☆24 · Updated last year
- Supply chain security for ML ☆105 · Updated last week
- A simple framework for privacy-friendly data science collaboration ☆169 · Updated 11 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs) ☆116 · Updated 8 months ago
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆155 · Updated 11 months ago
- Confidential AI deployment with secure enclaves ☆500 · Updated 6 months ago
- The Foundation Model Transparency Index ☆65 · Updated 3 months ago
- Red-Teaming Language Models with DSPy ☆116 · Updated 5 months ago
- ATLAS tactics, techniques, and case studies data ☆46 · Updated 2 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs) ☆103 · Updated 6 months ago
- GraphRag vs Embeddings ☆12 · Updated 2 months ago
- Test Software for the Characterization of AI Technologies ☆212 · Updated this week
- 📖 A curated list of resources dedicated to synthetic data ☆115 · Updated 2 years ago
- ☆89 · Updated last month
- Fiddler Auditor is a tool to evaluate language models ☆163 · Updated 6 months ago
- A software package for privacy-preserving generation of a synthetic twin to a given sensitive data set ☆46 · Updated 2 weeks ago
- A toolkit of tools and techniques related to the privacy and compliance of AI models ☆96 · Updated 2 months ago
- A JupyterLab extension to evaluate the security of your Jupyter environment ☆36 · Updated last year
- List of ML file formats ☆34 · Updated 6 months ago
- Explore AI Supply Chain Risk with the AI Risk Database ☆44 · Updated 4 months ago
- Administrative documents for the CoSAI OASIS Open Project ☆32 · Updated this week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training" ☆81 · Updated 6 months ago
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ☆53 · Updated last year
- [Corca / ML] Automatically solving Gandalf AI with an LLM ☆46 · Updated last year
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆92 · Updated 11 months ago
- Accompanying code and SEP dataset for the paper "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" ☆44 · Updated 3 months ago
- Your buddy in the (L)LM space ☆62 · Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆299 · Updated 7 months ago
- An open source library for asynchronous querying of LLM endpoints ☆19 · Updated last week
- LMQL implementation of tree of thoughts ☆33 · Updated 7 months ago