lakeraai / canica
A text embedding viewer for the Jupyter environment
☆19 · Updated last year
Alternatives and similar repositories for canica:
Users interested in canica are comparing it to the libraries listed below.
- A benchmark for prompt injection detection systems. ☆99 · Updated last month
- [Corca / ML] Automatically solving Gandalf AI with an LLM ☆48 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated last year
- Turning Gandalf against itself. Use LLMs to automate playing the Lakera Gandalf challenge without needing to set up an account with a platfor… ☆29 · Updated last year
- Project LLM Verification Standard ☆41 · Updated 11 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 10 months ago
- Dropbox LLM Security research code and results ☆221 · Updated 10 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆116 · Updated last week
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆20 · Updated 3 weeks ago
- Supply chain security for ML ☆133 · Updated this week
- ☆13 · Updated 9 months ago
- ☆42 · Updated 8 months ago
- ☆44 · Updated 2 years ago
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆172 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆352 · Updated last year
- Red-Teaming Language Models with DSPy ☆175 · Updated last month
- OWASP Machine Learning Security Top 10 Project ☆83 · Updated 2 months ago
- Code to break Llama Guard ☆31 · Updated last year
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt ☆453 · Updated 5 months ago
- Test Software for the Characterization of AI Technologies ☆243 · Updated this week
- Implementation of BEAST adversarial attack for language models (ICML 2024) ☆81 · Updated 10 months ago
- Every practical and proposed defense against prompt injection. ☆412 · Updated last month
- A collection of prompt injection mitigation techniques. ☆20 · Updated last year
- OWASP Foundation Web Repository ☆246 · Updated this week
- Privacy Testing for Deep Learning ☆198 · Updated last year
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆109 · Updated 9 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆91 · Updated 3 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆50 · Updated 3 weeks ago
- A framework-less approach to robust agent development. ☆156 · Updated this week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆63 · Updated 11 months ago