haizelabs / llama3-jailbreak
A trivial programmatic Llama 3 jailbreak. Sorry Zuck!
☆518 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for llama3-jailbreak
- Simple Python library/structure to ablate features in LLMs which are supported by TransformerLens ☆333 · Updated 5 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆259 · Updated last month
- A library for making RepE control vectors ☆481 · Updated last month
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆86 · Updated 5 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆282 · Updated last month
- Parseltongue is a powerful prompt hacking tool/browser extension for real-time tokenization visualization and seamless text conversion, s… ☆393 · Updated 2 months ago
- A benchmark for emotional intelligence in large language models ☆197 · Updated 3 months ago
- Agentless🐱: an agentless approach to automatically solve software development problems ☆723 · Updated last week
- multi1: create o1-like reasoning chains with multiple AI providers (and locally). Supports LiteLLM as backend too for 100+ providers at o… ☆316 · Updated last month
- Automatically evaluate your LLMs in Google Colab ☆559 · Updated 6 months ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆221 · Updated 6 months ago
- Red-Teaming Language Models with DSPy ☆142 · Updated 7 months ago
- Fine-tune LLM agents with online reinforcement learning ☆995 · Updated 8 months ago
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆159 · Updated 10 months ago
- NexusRaven-13B, a new SOTA Open-Source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… ☆308 · Updated last year
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆499 · Updated 3 weeks ago
- From anywhere you can type, query and stream the output of an LLM or any other script ☆474 · Updated 7 months ago
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆203 · Updated 6 months ago
- Fast parallel LLM inference for MLX ☆149 · Updated 4 months ago
- Visualize the intermediate output of Mistral 7B ☆316 · Updated 9 months ago
- WebAssembly binding for llama.cpp - Enabling in-browser LLM inference ☆441 · Updated 3 weeks ago
- Arena-Hard-Auto: An automatic LLM benchmark. ☆653 · Updated last week
- Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer… ☆418 · Updated this week
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆98 · Updated this week
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more ☆534 · Updated 2 months ago