uiuc-focal-lab / llm-priming-attacks
☆13 · Updated last year
Alternatives and similar repositories for llm-priming-attacks:
Users interested in llm-priming-attacks are comparing it to the repositories listed below.
- FANC is a tool for the proof transfer of incomplete verification ☆11 · Updated 3 years ago
- Efficient and general syntactical decoding for Large Language Models ☆251 · Updated this week
- ☆73 · Updated last year
- EvoEval: Evolving Coding Benchmarks via LLM ☆68 · Updated 11 months ago
- r2e: turn any GitHub repository into a programming agent environment ☆107 · Updated last month
- LLM Program Watermarking ☆17 · Updated 11 months ago
- Sphynx Hallucination Induction ☆53 · Updated 2 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆98 · Updated last year
- A lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Updated 9 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆106 · Updated 5 months ago
- Knowledge transfer from high-resource to low-resource programming languages for Code LLMs ☆12 · Updated 7 months ago
- Benchmark evaluating LLMs on their ability to create and resist disinformation. Includes comprehensive testing across major models (Claud… ☆24 · Updated 2 weeks ago
- A framework-less approach to robust agent development. ☆156 · Updated this week
- Improving Alignment and Robustness with Circuit Breakers ☆192 · Updated 6 months ago
- Large-Language-Model to Machine Interface project. ☆18 · Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆49 · Updated 7 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆198 · Updated 6 months ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆41 · Updated 7 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆166 · Updated 3 weeks ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆67 · Updated last year
- Thorn in a HaizeStack test for evaluating long-context adversarial robustness. ☆26 · Updated 8 months ago
- A simple experiment on letting two local LLMs have a conversation about anything! ☆107 · Updated 8 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆89 · Updated 9 months ago
- RES-Q: Evaluating the Code-Editing Capability of Large Language Model Systems at the Repository Scale ☆26 · Updated 9 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆49 · Updated last month
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ☆210 · Updated 10 months ago
- ☆44 · Updated last year
- CaSIL is an advanced natural language processing system that implements a sophisticated four-layer semantic analysis architecture. It pro… ☆64 · Updated 4 months ago
- OpenPipe Reinforcement Learning Experiments ☆21 · Updated 2 weeks ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆284 · Updated 2 months ago