machiavelli (☆147, updated Jul 23, 2025)

Alternatives and similar repositories for machiavelli

Users interested in machiavelli are comparing it to the libraries listed below.
- A library for mechanistic anomaly detection (☆22, updated Jan 9, 2025)
- Pin files for contextual, codebase-level AI assistance (☆16, updated Jul 11, 2024)
- Experiments with representation engineering (☆14, updated Feb 28, 2024)
- Erasing concepts from neural representations with provable guarantees (☆243, updated Jan 27, 2025)
- 📚 Reading everything (☆15, updated Sep 12, 2025)
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… (☆160, updated May 29, 2025)
- ☆120, updated Jan 19, 2026
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability (☆18, updated Jan 14, 2025)
- The Happy Faces Benchmark (☆15, updated Jul 20, 2023)
- ☆19, updated Mar 5, 2024
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces (☆102, updated Sep 21, 2023)
- ☆284, updated Mar 2, 2024
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" (☆66, updated Apr 24, 2024)
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training" (☆134, updated Mar 9, 2024)
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns (☆13, updated Mar 1, 2025)
- ☆24, updated Feb 18, 2026
- Representation Engineering: A Top-Down Approach to AI Transparency (☆957, updated Aug 14, 2024)
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) (☆62, updated Mar 30, 2024)
- A collection of different ways to implement accessing and modifying internal model activations for LLMs (☆20, updated Oct 18, 2024)
- ☆25, updated Aug 23, 2024
- Formalizing stochastic doubly-efficient debate (☆118, updated Oct 8, 2024)
- ☆11, updated Mar 13, 2023
- Instant Neural Graphics Primitives from scratch, zero dependencies. Learning by doing. (☆10, updated Aug 18, 2023)
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model (☆572, updated Jan 28, 2025)
- [ICLR 2025] General-purpose activation steering library (☆145, updated Sep 18, 2025)
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… (☆28, updated May 23, 2024)
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Langu… (☆86, updated Dec 12, 2025)
- Keeping language models honest by directly eliciting knowledge encoded in their activations (☆217, updated Mar 2, 2026)
- Code for the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" (☆14, updated Nov 17, 2023)
- A natively Chinese, tiered benchmark for testing code capability (☆15, updated Apr 11, 2024)
- Accompanying code for "Analyzing Vision Transformers in Class Embedding Space" (NeurIPS '23) (☆15, updated Jun 10, 2024)
- Codebase for Mechanistic Mode Connectivity (☆13, updated Jul 14, 2023)
- ☆12, updated Jul 8, 2023
- ☆11, updated Oct 16, 2023
- ☆34, updated Jan 25, 2026
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs (☆63, updated Mar 26, 2024)
- ☆273, updated Oct 1, 2024
- METR Task Standard (☆177, updated Feb 3, 2025)
- 🔥 A repository collecting thoughts, books, and documents on AI cyberdefense (☆13, updated Jul 2, 2023)