aiverify-foundation / moonshot
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
☆285 · Updated 2 months ago
Alternatives and similar repositories for moonshot
Users interested in moonshot are comparing it to the libraries listed below.
- AI Verify ☆37 · Updated 3 weeks ago
- ☆49 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆437 · Updated last year
- Red-Teaming Language Models with DSPy ☆238 · Updated 9 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆357 · Updated last month
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆296 · Updated last year
- Collection of evals for Inspect AI ☆289 · Updated last week
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆193 · Updated 7 months ago
- Guardrails for secure and robust agent development ☆366 · Updated 4 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆121 · Updated last month
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆499 · Updated 9 months ago
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆38 · Updated 2 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆578 · Updated last year
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆308 · Updated last year
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆334 · Updated last month
- An open-source compliance-centered evaluation framework for Generative AI models ☆172 · Updated 2 weeks ago
- Papers about red teaming LLMs and Multimodal models. ☆156 · Updated 6 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆346 · Updated last month
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆300 · Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆789 · Updated last year
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆111 · Updated last week
- The fastest Trust Layer for AI Agents ☆145 · Updated 6 months ago
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆613 · Updated 5 months ago
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation fr… ☆18 · Updated 2 years ago
- Automatically evaluate your LLMs in Google Colab ☆671 · Updated last year
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆298 · Updated 3 weeks ago
- A tool for evaluating LLMs ☆428 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆364 · Updated 10 months ago
- Build datasets using natural language ☆547 · Updated 2 months ago