mithril-security / blindbox
BlindBox is a tool for isolating and deploying applications inside Trusted Execution Environments, enabling privacy-by-design apps.
☆62 · Updated 2 years ago
Alternatives and similar repositories for blindbox
Users interested in blindbox are comparing it to the repositories listed below.
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆151 · Updated last year
- Your buddy in the (L)LM space. ☆64 · Updated last year
- Code for the paper "Defeating Prompt Injections by Design". ☆150 · Updated 5 months ago
- Red-Teaming Language Models with DSPy. ☆235 · Updated 9 months ago
- Supply chain security for ML. ☆204 · Updated last week
- Modular, open source LLMOps stack that separates concerns: LiteLLM unifies LLM APIs, manages routing and cost controls, and ensures high-… ☆125 · Updated 9 months ago
- Zero-trust AI APIs for easy and private consumption of open-source LLMs. ☆40 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety. ☆193 · Updated 7 months ago
- Test software for the characterization of AI technologies. ☆262 · Updated last week
- Lightweight LLM interaction framework. ☆392 · Updated this week
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆135 · Updated this week
- Code to generate and preprocess Learning with Errors (LWE) data, with implementations of four LWE attacks: uSVP, SA… ☆58 · Updated 6 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆98 · Updated 7 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- LLM plugin for models hosted by Anyscale Endpoints. ☆35 · Updated last year
- Constrain LLM output. ☆113 · Updated last year
- The fastest trust layer for AI agents. ☆144 · Updated 5 months ago
- An open-source, compliance-centered evaluation framework for generative AI models. ☆170 · Updated this week
- LLM-powered autonomous agent with hierarchical task management. ☆52 · Updated 2 years ago
- Thorn in a HaizeStack: a test for evaluating long-context adversarial robustness. ☆26 · Updated last year
- A fully in-browser privacy solution to make conversational AI privacy-friendly. ☆235 · Updated last year
- ☆20 · Updated last year
- 👩🤝🤖 A curated list of datasets for large language models (LLMs), RLHF, and related resources (continually updated). ☆24 · Updated 2 years ago
- Source for llmsec.net. ☆16 · Updated last year
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆120 · Updated last month
- LLM Optimize is a proof-of-concept library for LLM-guided blackbox optimization. ☆59 · Updated 2 years ago
- Masked Python SDK wrapper for the OpenAI API, for using public LLM APIs securely. ☆119 · Updated 2 years ago
- Public repository containing METR's DVC pipeline for eval data analysis. ☆129 · Updated 7 months ago
- GAMECHANGER machine learning repo. ☆24 · Updated last year
- Sphynx hallucination induction. ☆53 · Updated 9 months ago