aiverify-foundation / moonshot
Moonshot - A simple and modular tool to evaluate and red-team any LLM application.
☆261 · Updated last week
Alternatives and similar repositories for moonshot
Users interested in moonshot are comparing it to the libraries listed below.
- AI Verify ☆27 · Updated this week
- Red-Teaming Language Models with DSPy ☆203 · Updated 5 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆184 · Updated last year
- An open-source compliance-centered evaluation framework for Generative AI models ☆158 · Updated last week
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks ☆399 · Updated last year
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR team ☆459 · Updated 5 months ago
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆188 · Updated 3 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆91 · Updated last month
- A tool for evaluating LLMs ☆423 · Updated last year
- The fastest Trust Layer for AI Agents ☆140 · Updated 2 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆113 · Updated this week
- Guardrails for secure and robust agent development ☆324 · Updated this week
- Collection of evals for Inspect AI ☆198 · Updated this week
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆548 · Updated last year
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆35 · Updated last week
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation frameworks ☆17 · Updated last year
- A curated list of awesome synthetic data tools (open source and commercial). ☆195 · Updated last year
- Papers about red teaming LLMs and Multimodal models. ☆130 · Updated 2 months ago
- A benchmark for prompt injection detection systems. ☆124 · Updated 2 weeks ago
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use cases ☆129 · Updated 3 weeks ago
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security ☆930 · Updated 8 months ago
- LangFair is a Python library for conducting use-case level LLM bias and fairness assessments ☆222 · Updated 2 weeks ago
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆261 · Updated 9 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆323 · Updated 6 months ago
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆268 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆649 · Updated last year
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆288 · Updated this week