aiverify-foundation / aiverify
AI Verify
☆129 · Updated this week
Alternatives and similar repositories for aiverify:
Users interested in aiverify are comparing it to the libraries listed below.
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆198 · Updated this week
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆21 · Updated this week
- ☆39 · Updated 5 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆174 · Updated 10 months ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆92 · Updated last year
- Red-Teaming Language Models with DSPy ☆153 · Updated 9 months ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆121 · Updated last month
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆266 · Updated 4 months ago
- A curated list of awesome synthetic data tools (open source and commercial). ☆133 · Updated last year
- A toolkit for tools and techniques related to the privacy and compliance of AI models. ☆97 · Updated 6 months ago
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆166 · Updated last year
- Inspect: A framework for large language model evaluations; a minimal usage sketch follows this list. ☆724 · Updated this week
- Collection of evals for Inspect AI ☆47 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 10 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆92 · Updated 10 months ago
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆106 · Updated this week
- Lakera - ChatGPT Data Leak Protection ☆22 · Updated 6 months ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆73 · Updated this week
- The Rule-based Retrieval package is a Python package that enables you to create and manage Retrieval Augmented Generation (RAG) applications. ☆233 · Updated 3 months ago
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method. ☆92 · Updated 8 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024] ☆247 · Updated 3 months ago
- The repository contains the code for analysing the leakage of personally identifiable (PII) information from the output of next word prediction language models. ☆88 · Updated 5 months ago
- ☆26 · Updated 2 months ago
- ☆84 · Updated last week
- This is an open-source tool to assess and improve the trustworthiness of AI systems. ☆84 · Updated this week
- A text embedding viewer for the Jupyter environment ☆19 · Updated 11 months ago
- Python SDK for running evaluations on LLM generated responses ☆253 · Updated last week
- The Foundation Model Transparency Index ☆73 · Updated 7 months ago
- A tool for evaluating LLMs ☆397 · Updated 8 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆74 · Updated this week
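
For orientation, here is a minimal sketch of how an eval is defined with Inspect, the evaluation framework listed above. The toy samples, the choice of scorer, and the model name are illustrative assumptions, not taken from any listing here.

```python
# Minimal Inspect eval sketch (assumes `pip install inspect-ai`).
# Samples, scorer, and model name below are illustrative assumptions.
from inspect_ai import Task, eval, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate


@task
def arithmetic() -> Task:
    return Task(
        dataset=[
            Sample(input="What is 2 + 2? Reply with only the number.", target="4"),
            Sample(input="What is 3 * 5? Reply with only the number.", target="15"),
        ],
        solver=generate(),  # request a single completion from the model under test
        scorer=exact(),     # exact string match against each sample's target
    )


if __name__ == "__main__":
    # Any provider/model string Inspect supports can be substituted here.
    eval(arithmetic(), model="openai/gpt-4o-mini")
```

Running the script scores each sample and writes an eval log that Inspect's viewer can render as a detailed report.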