chziakas / redeval
A library for red-teaming LLM applications with LLMs.
☆26 · Updated 8 months ago
Alternatives and similar repositories for redeval
Users interested in redeval are comparing it to the libraries listed below.
- Sphynx Hallucination Induction ☆54 · Updated 4 months ago
- Red-Teaming Language Models with DSPy ☆198 · Updated 4 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆91 · Updated 2 months ago
- ☆23 · Updated 8 months ago
- ☆116 · Updated 2 weeks ago
- A prompt injection game to collect data for robust ML research ☆62 · Updated 5 months ago
- ☆65 · Updated 5 months ago
- ☆21 · Updated last month
- ☆34 · Updated 7 months ago
- Code to break Llama Guard ☆31 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆30 · Updated last week
- LLM security and privacy ☆48 · Updated 8 months ago
- Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming" ☆42 · Updated 9 months ago
- Code for the paper "Fishing for Magikarp" ☆157 · Updated last month
- General research for Dreadnode ☆23 · Updated last year
- ☆16 · Updated last year
- Track the progress of LLM context utilisation ☆54 · Updated 2 months ago
- ☆45 · Updated 2 months ago
- Papers about red teaming LLMs and Multimodal models. ☆123 · Updated last month
- Dataset for the Tensor Trust project ☆43 · Updated last year
- The fastest Trust Layer for AI Agents ☆137 · Updated last month
- ⚖️ Awesome LLM Judges ⚖️ ☆105 · Updated 2 months ago
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆45 · Updated 2 months ago
- ☆20 · Updated 3 months ago
- ☆19 · Updated last year
- Measuring the situational awareness of language models ☆35 · Updated last year
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆64 · Updated last year
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆40 · Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆108 · Updated last year