haizelabs / dspy-redteam
Red-Teaming Language Models with DSPy
☆216 · Updated 7 months ago
Alternatives and similar repositories for dspy-redteam
Users interested in dspy-redteam are comparing it to the libraries listed below.
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆96 · Updated 5 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆300 · Updated last week
- ☆26 · Updated 11 months ago
- Sphynx Hallucination Induction ☆53 · Updated 8 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆130 · Updated 5 months ago
- ☆136 · Updated this week
- Collection of evals for Inspect AI ☆241 · Updated this week
- Guardrails for secure and robust agent development ☆348 · Updated 2 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆118 · Updated last year
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated last week
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆111 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆114 · Updated last year
- ☆153 · Updated 3 months ago
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments. ☆89 · Updated this week
- The fastest Trust Layer for AI Agents. ☆143 · Updated 4 months ago
- Code for the paper "Fishing for Magikarp". ☆170 · Updated 4 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- ☆73 · Updated 11 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle. ☆297 · Updated last week
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆289 · Updated last year
- Curation of prompts that are known to be adversarial to large language models. ☆186 · Updated 2 years ago
- ☆135 · Updated 6 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆115 · Updated 2 months ago
- Official Repo for CRMArena and CRMArena-Pro. ☆118 · Updated 3 months ago
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper. ☆56 · Updated 6 months ago
- ☆58 · Updated last week
- Functional Benchmarks and the Reasoning Gap. ☆89 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification. ☆110 · Updated 9 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆118 · Updated 3 weeks ago
- ☆35 · Updated 10 months ago