aiverify-foundation / moonshot
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
☆268 · Updated last week
Alternatives and similar repositories for moonshot
Users interested in moonshot are comparing it to the libraries listed below.
- AI Verify ☆32 · Updated this week
- ☆45 · Updated last year
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆112 · Updated last week
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆481 · Updated 7 months ago
- Red-Teaming Language Models with DSPy ☆212 · Updated 7 months ago
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆36 · Updated last week
- Fiddler Auditor is a tool to evaluate language models. ☆187 · Updated last year
- An open-source compliance-centered evaluation framework for Generative AI models ☆163 · Updated this week
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆270 · Updated 2 weeks ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆416 · Updated last year
- Papers about red teaming LLMs and Multimodal models. ☆136 · Updated 3 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆343 · Updated 7 months ago
- The fastest Trust Layer for AI Agents ☆144 · Updated 3 months ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆302 · Updated last year
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆283 · Updated last year
- Guardrails for secure and robust agent development ☆344 · Updated last month
- Collection of evals for Inspect AI ☆230 · Updated this week
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… ☆945 · Updated 9 months ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆559 · Updated last year
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆139 · Updated 3 weeks ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆104 · Updated this week
- ☆73 · Updated 10 months ago
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆192 · Updated 5 months ago
- ☆262 · Updated 2 months ago
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆48 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆114 · Updated last year
- A tool for evaluating LLMs ☆424 · Updated last year
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆114 · Updated last month
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆717 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆79 · Updated last year