aiverify-foundation / moonshot
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
☆242 · Updated this week
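For context, a minimal quick-start sketch for Moonshot itself. This assumes the PyPI package name `aiverify-moonshot` and the `python -m moonshot web` entry point described in the project README; verify both against the repo before relying on them:

```python
# Minimal sketch: launch Moonshot's web UI from Python.
# Assumes the package was installed first, e.g.:
#   pip install "aiverify-moonshot[all]"
# The `python -m moonshot web` entry point is taken from the project's
# README; treat it as an assumption and check the repo docs if it fails.
import subprocess

subprocess.run(["python", "-m", "moonshot", "web"], check=True)
```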
Alternatives and similar repositories for moonshot
Users interested in moonshot are comparing it to the libraries listed below.
- AI Verify ☆13 · Updated this week
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆33 · Updated this week
- ☆44 · Updated 10 months ago
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆250 · Updated last year
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆294 · Updated 8 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆175 · Updated this week
- Fiddler Auditor is a tool to evaluate language models. ☆181 · Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses ☆216 · Updated last week
- Red-Teaming Language Models with DSPy ☆195 · Updated 3 months ago
- ☆72 · Updated 7 months ago
- A benchmark for prompt injection detection systems. ☆115 · Updated 3 weeks ago
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation fr… ☆17 · Updated last year
- Guardrails for secure and robust agent development ☆292 · Updated this week
- ☆9 · Updated 3 months ago
- ☆257 · Updated 6 months ago
- A tool for evaluating LLMs ☆418 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs ☆171 · Updated 5 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆85 · Updated 2 months ago
- Collection of evals for Inspect AI ☆144 · Updated this week
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆118 · Updated 3 weeks ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆69 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆374 · Updated last year
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆439 · Updated 3 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆313 · Updated 4 months ago
- Papers about red teaming LLMs and Multimodal models. ☆121 · Updated last week
- South-East Asia Large Language Models ☆325 · Updated this week
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆110 · Updated 8 months ago
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… ☆913 · Updated 6 months ago
- Python SDK for running evaluations on LLM generated responses ☆281 · Updated 2 weeks ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆151 · Updated 5 months ago