mlcommons / ailuminate
The AILuminate v1.1 benchmark suite is an AI risk assessment benchmark developed with broad involvement from leading AI companies, academia, and civil society.
☆22 · Updated 2 months ago
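The benchmark is distributed as prompt datasets that a system under test responds to before its answers are graded for safety. A minimal sketch of iterating over such a prompt set, assuming a CSV layout; the file name and the `prompt_text`/`hazard` column names below are assumptions for illustration, not the repo's confirmed schema:

```python
# Minimal sketch: tallying an AILuminate-style demo prompt set by hazard.
# Assumes the prompts ship as a CSV with "prompt_text" and "hazard"
# columns; the file name is hypothetical -- check the repo for the
# actual release files and column names.
import csv
from collections import Counter

PROMPT_FILE = "ailuminate_demo_prompt_set.csv"  # hypothetical file name

hazards = Counter()
with open(PROMPT_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        hazards[row["hazard"]] += 1  # count prompts per hazard category

for hazard, count in hazards.most_common():
    print(f"{hazard}: {count} prompts")
```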
Alternatives and similar repositories for ailuminate
Users interested in ailuminate are comparing it to the repositories listed below.
- ☆108 · Updated this week
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆93 · Updated this week
- Public repository containing METR's DVC pipeline for eval data analysis ☆91 · Updated 4 months ago
- Red-Teaming Language Models with DSPy ☆203 · Updated 5 months ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆82 · Updated this week
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆59 · Updated 8 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆100 · Updated 3 months ago
- ☆136 · Updated 4 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆176 · Updated 5 months ago
- ☆27 · Updated last month
- Open Source Replication of Anthropic's Alignment Faking Paper ☆48 · Updated 4 months ago
- Code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆132 · Updated last year
- A preprint version of our recent research on the capability of frontier AI systems to do self-replication ☆59 · Updated 7 months ago
- Persona Vectors: Monitoring and Controlling Character Traits in Language Models ☆135 · Updated last week
- Alice in Wonderland code base for experiments and raw experiment data ☆131 · Updated last week
- ☆76 · Updated this week
- Code and data for the paper "Why think step by step? Reasoning emerges from the locality of experience" ☆61 · Updated 4 months ago
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆111 · Updated 9 months ago
- Source code of "How to Correctly do Semantic Backpropagation on Language-based Agentic Systems" 🤖 ☆73 · Updated 8 months ago
- ☆73 · Updated 5 months ago
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- A library for benchmarking the long-term memory and continual-learning capabilities of LLM-based agents. With all the tests and code you… ☆76 · Updated 7 months ago
- Sphynx Hallucination Induction ☆53 · Updated 6 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆92 · Updated 2 months ago
- Learning to Retrieve by Trying: source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆49 · Updated 9 months ago
- ☆182 · Updated 5 months ago
- Code for "Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs" ☆54 · Updated 5 months ago
- Open-source interpretability artefacts for R1. ☆157 · Updated 3 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆95 · Updated 3 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago