mlcommons / modelgauge
Make it easy to automatically and uniformly measure the behavior of many AI systems.
☆26 · Updated last month
Related projects
Alternatives and complementary repositories for modelgauge
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆62 · Updated this week
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆92 · Updated 5 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆25 · Updated 5 months ago
- ☆22 · Updated last year
- Code for the ACL 2023 paper: "Rethinking the Role of Scale for In-Context Learning: An Interpretability-based Case Study at 66 Billion Sc… ☆28 · Updated last year
- ☆28 · Updated last year
- Official implementation of FIND (NeurIPS '23) Function Interpretation Benchmark and Automated Interpretability Agents ☆45 · Updated last month
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆82 · Updated 6 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆64 · Updated 9 months ago
- ☆12 · Updated 8 months ago
- NanoGPT-like codebase for LLM training ☆75 · Updated this week
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆39 · Updated 3 weeks ago
- ☆32 · Updated last year
- ☆53 · Updated 3 weeks ago
- A mechanistic approach for understanding and detecting factual errors of large language models. ☆39 · Updated 4 months ago
- A system for automating selection and optimization of pre-trained models from the TAO Model Zoo ☆22 · Updated 4 months ago
- The repository contains code for Adaptive Data Optimization ☆18 · Updated last month
- ☆101 · Updated 3 months ago
- ☆43 · Updated 9 months ago
- ☆44 · Updated last month
- Experiments to assess SPADE on different LLM pipelines. ☆16 · Updated 7 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆53 · Updated last year
- Official repository for the paper "Zero-Shot AutoML with Pretrained Models" ☆41 · Updated 10 months ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆42 · Updated 6 months ago
- ☆31 · Updated last year
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆63 · Updated 10 months ago
- Efficient LLM inference on Slurm clusters using vLLM. ☆39 · Updated last week
- Evaluation of neuro-symbolic engines ☆33 · Updated 3 months ago
- ☆26 · Updated last year
- Understanding how features learned by neural networks evolve throughout training ☆31 · Updated 3 weeks ago