haizelabs / get-haized
A subset of jailbreaks automatically discovered by the Haize Labs haizing suite.
☆100 · Updated 8 months ago
Alternatives and similar repositories for get-haized
Users interested in get-haized are comparing it to the libraries listed below.
- Red-Teaming Language Models with DSPy ☆248 · Updated 10 months ago
- Sphynx Hallucination Induction ☆53 · Updated 10 months ago
- ☆26 · Updated last year
- ☆38 · Updated 6 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated 2 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆125 · Updated last month
- An automated tool for discovering insights from research paper corpora ☆137 · Updated last year
- Inference-time scaling for LLMs-as-a-judge. ☆317 · Updated last month
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- they've simulated websites, worlds, and imaginary CLIs... but what if they simulated *you*? ☆127 · Updated 2 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆146 · Updated 8 months ago
- Modify Entropy Based Sampling to work with Mac Silicon via MLX ☆49 · Updated last year
- ☆47 · Updated last year
- explore token trajectory trees on instruct and base models ☆149 · Updated 7 months ago
- Thorn in a HaizeStack test for evaluating long-context adversarial robustness. ☆26 · Updated last year
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆95 · Updated 2 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public domain corpora, and Monte Carlo tree search. ☆32 · Updated 2 months ago
- look how they massacred my boy ☆63 · Updated last year
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆62 · Updated last year
- Verbosity control for AI agents ☆64 · Updated last year
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper. ☆57 · Updated 9 months ago
- ☆136 · Updated 9 months ago
- ☆233 · Updated 3 weeks ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 5 months ago
- ☆68 · Updated 7 months ago
- Track the progress of LLM context utilisation ☆55 · Updated 8 months ago
- Approximation of the Claude 3 tokenizer by inspecting generation stream ☆149 · Updated last year
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆61 · Updated 7 months ago
- ☆86 · Updated last year
- Small, simple agent task environments for training and evaluation ☆19 · Updated last year