aiverify-foundation / moonshot
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
☆308 · Updated last week
Alternatives and similar repositories for moonshot
Users interested in moonshot are comparing it to the libraries listed below.
- AI Verify ☆47 · Updated 3 weeks ago
- Fiddler Auditor is a tool to evaluate language models. ☆189 · Updated last year
- ☆50 · Updated last year
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ☆520 · Updated 11 months ago
- A benchmark for prompt injection detection systems. ☆158 · Updated last month
- Red-Teaming Language Models with DSPy ☆250 · Updated 11 months ago
- Collection of evals for Inspect AI ☆357 · Updated this week
- An open-source compliance-centered evaluation framework for Generative AI models ☆179 · Updated last week
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆452 · Updated last year
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆315 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆201 · Updated 9 months ago
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆601 · Updated last year
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… ☆975 · Updated last year
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆311 · Updated last year
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆176 · Updated 2 weeks ago
- Guardrails for secure and robust agent development ☆385 · Updated 3 weeks ago
- A curated list of awesome synthetic data tools (open source and commercial). ☆239 · Updated 2 years ago
- Automatically evaluate your LLMs in Google Colab ☆685 · Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆847 · Updated last year
- Papers about red teaming LLMs and Multimodal models. ☆160 · Updated 8 months ago
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆39 · Updated this week
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆425 · Updated last week
- LangFair is a Python library for conducting use-case level LLM bias and fairness assessments ☆252 · Updated last month
- Sample notebooks and prompts for LLM evaluation ☆159 · Updated 3 months ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆117 · Updated last week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆452 · Updated 2 years ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆130 · Updated 4 months ago
- The fastest Trust Layer for AI Agents ☆152 · Updated last week
- A tool for evaluating LLMs ☆428 · Updated last year
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆321 · Updated last year