aiverify-foundation / moonshot
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
☆278 · Updated last month
Alternatives and similar repositories for moonshot
Users interested in moonshot are comparing it to the libraries listed below.
- AI Verify ☆36 · Updated 3 weeks ago
- ☆49 · Updated last year
- Red-Teaming Language Models with DSPy ☆235 · Updated 8 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- An open-source compliance-centered evaluation framework for Generative AI models ☆169 · Updated last week
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆119 · Updated 3 weeks ago
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆37 · Updated last month
- Collection of evals for Inspect AI ☆264 · Updated this week
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆490 · Updated 8 months ago
- Guardrails for secure and robust agent development ☆355 · Updated 3 months ago
- ☆73 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆328 · Updated 2 weeks ago
- A benchmark for prompt injection detection systems. ☆144 · Updated 2 months ago
- The fastest Trust Layer for AI Agents ☆144 · Updated 5 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆427 · Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆421 · Updated last year
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆294 · Updated last year
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆144 · Updated 2 weeks ago
- Papers about red teaming LLMs and Multimodal models. ☆145 · Updated 5 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆357 · Updated 9 months ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆306 · Updated last year
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation fr… ☆18 · Updated last year
- LangFair is a Python library for conducting use-case level LLM bias and fairness assessments ☆239 · Updated this week
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆192 · Updated 6 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆310 · Updated 2 weeks ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆571 · Updated last year
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆92 · Updated 10 months ago
- Every practical and proposed defense against prompt injection. ☆570 · Updated 8 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆115 · Updated 3 months ago