haizelabs / dspy-redteam
Red-Teaming Language Models with DSPy
☆235 · Updated 9 months ago
Alternatives and similar repositories for dspy-redteam
Users interested in dspy-redteam are comparing it to the repositories listed below.
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆98 · Updated 7 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆308 · Updated last week
- ☆26 · Updated last year
- Sphynx Hallucination Induction ☆53 · Updated 9 months ago
- Collection of evals for Inspect AI ☆284 · Updated this week
- ⚖️ Awesome LLM Judges ⚖️ ☆133 · Updated 6 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated 3 weeks ago
- ☆186 · Updated this week
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆92 · Updated last month
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆297 · Updated last year
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆299 · Updated 2 weeks ago
- Guardrails for secure and robust agent development ☆364 · Updated 3 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆120 · Updated last week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆121 · Updated last year
- ☆135 · Updated 7 months ago
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated last year
- A small library of LLM judges ☆301 · Updated 3 months ago
- Synthetic Data for LLM Fine-Tuning ☆119 · Updated last year
- ☆226 · Updated 2 weeks ago
- Code for the paper "Fishing for Magikarp" ☆172 · Updated 6 months ago
- Official Repo for CRMArena and CRMArena-Pro ☆125 · Updated last week
- ☆168 · Updated 5 months ago
- ☆43 · Updated last year
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆114 · Updated last year
- ☆74 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆120 · Updated last month
- Code for the paper "Defeating Prompt Injections by Design" ☆146 · Updated 4 months ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆115 · Updated 3 months ago
- Accompanying code and SEP dataset for the paper "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" ☆57 · Updated 8 months ago