aiverify-foundation/moonshot
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
☆213 · Updated 2 weeks ago
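For context, Moonshot is distributed as a Python package with both a CLI and a web UI. The commands below are a minimal quick-start sketch based on my reading of the project's README at the time of writing; the package name `aiverify-moonshot` and the `python -m moonshot` entry points are assumptions that may have changed, so check the repository for the current instructions.

```
# Install the full toolkit (library, CLI, web API and UI) -- assumed PyPI package name
pip install "aiverify-moonshot[all]"

# Launch the interactive CLI (assumed entry point per the README)
python -m moonshot cli interactive

# Or launch the web UI instead
python -m moonshot web
```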
Alternatives and similar repositories for moonshot:
Users who are interested in moonshot are comparing it to the libraries listed below.
- AI Verify ☆137 · Updated this week
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆28 · Updated 2 weeks ago
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆215 · Updated 8 months ago
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation fr… ☆16 · Updated last year
- Red-Teaming Language Models with DSPy ☆169 · Updated last week
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆548 · Updated 6 months ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆274 · Updated 5 months ago
- ☆40 · Updated 6 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆175 · Updated 11 months ago
- ☆70 · Updated 4 months ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆493 · Updated 7 months ago
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆376 · Updated last week
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆264 · Updated last month
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆96 · Updated this week
- A framework-less approach to robust agent development. ☆154 · Updated this week
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆106 · Updated 5 months ago
- Automated Evaluation of RAG Systems ☆546 · Updated 3 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 11 months ago
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning ☆45 · Updated last year
- Collection of evals for Inspect AI ☆77 · Updated this week
- [NDSS'25 Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆112 · Updated this week
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆79 · Updated this week
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆169 · Updated last year
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆516 · Updated this week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆58 · Updated 10 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆92 · Updated 11 months ago
- List of papers on hallucination detection in LLMs. ☆775 · Updated 2 months ago
- Papers about red teaming LLMs and Multimodal models. ☆96 · Updated 3 months ago
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses ☆172 · Updated last month
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆333 · Updated 11 months ago