collect-intel / llm-judge-bias-suite
☆27 · Updated 7 months ago
Alternatives and similar repositories for llm-judge-bias-suite
Users interested in llm-judge-bias-suite are comparing it to the libraries listed below.
- Code for the paper "Defeating Prompt Injections by Design"☆187Updated 6 months ago
- Red-Teaming Language Models with DSPy☆248Updated 10 months ago
- Collection of evals for Inspect AI☆320Updated this week
- Automated Qualitative Analysis of LLMs (ICLR 2025)☆53Updated 5 months ago
- Code for the paper "Fishing for Magikarp"☆177Updated 7 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆78Updated 3 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆95 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- An attribution library for LLMs ☆46 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆195 · Updated 8 months ago
- Learning to route instances for Human vs AI Feedback (ACL Main '25) ☆26 · Updated 5 months ago
- Evaluating LLMs with fewer examples ☆170 · Updated last year
- autoredteam: code for training models that automatically red team other language models ☆15 · Updated 2 years ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆100 · Updated 8 months ago
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper. ☆57 · Updated 9 months ago
- The Foundation Model Transparency Index ☆84 · Updated 3 weeks ago
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆157 · Updated 7 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆125 · Updated last month
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆92 · Updated last year
- A toolkit for describing model features and intervening on those features to steer behavior.