mlcommons / modelgauge
Make it easy to automatically and uniformly measure the behavior of many AI Systems.
☆26 · Updated last year
Alternatives and similar repositories for modelgauge
Users interested in modelgauge are comparing it to the libraries listed below.
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆117 · Updated this week
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- Code for the ACL 2023 paper: "Rethinking the Role of Scale for In-Context Learning: An Interpretability-based Case Study at 66 Billion Sc… ☆35 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- ☆77 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆243 · Updated last year
- ☆30 · Updated 2 years ago
- ☆152 · Updated 4 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆14 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆64 · Updated last year
- Finding semantically meaningful and accurate prompts. ☆48 · Updated 2 years ago
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Experiments for efforts to train a new and improved t5 ☆76 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- Official PyTorch Implementation for Meaning Representations from Trajectories in Autoregressive Models (ICLR 2024) ☆22 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆135 · Updated 3 years ago
- Code for the paper "Fishing for Magikarp" ☆179 · Updated 8 months ago
- ☆44 · Updated last year
- git extension for {collaborative, communal, continual} model development ☆217 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Updated 2 months ago
- The Foundation Model Transparency Index ☆85 · Updated last month
- ☆65 · Updated this week
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- ☆68 · Updated last year
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆93 · Updated 2 years ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆32 · Updated 4 months ago