safety-research / persona_vectors
Persona Vectors: Monitoring and Controlling Character Traits in Language Models
☆247 · Updated 2 months ago
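For context, persona vectors, like the steering-vector libraries listed below, work by adding a trait-aligned direction to a model's hidden activations at inference time. The snippet below is a minimal sketch of that idea, not this repository's code: the GPT-2 stand-in model, the layer index, the steering strength, and the random placeholder direction are assumptions for illustration. In practice the direction is typically derived by contrasting activations on prompts that do and do not exhibit the target trait, rather than drawn at random.

```python
# Minimal, generic sketch of activation steering: add a fixed direction to one
# layer's residual stream during generation. NOT the persona_vectors repo's API;
# model, layer, strength, and the random direction are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

layer_idx = 6   # which transformer block to steer (assumption)
alpha = 4.0     # steering strength (assumption)
direction = torch.randn(model.config.hidden_size)
direction = direction / direction.norm()  # placeholder "persona" direction

def steer_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    hidden = output[0] + alpha * direction.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steer_hook)
try:
    ids = tok("The assistant replied:", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=30, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so later calls run unsteered
```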
Alternatives and similar repositories for persona_vectors
Users interested in persona_vectors are comparing it to the libraries listed below.
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆74 · Updated 4 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆278 · Updated 3 months ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆101 · Updated last week
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆136 · Updated 3 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆158 · Updated 7 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆245 · Updated 5 months ago
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen,… ☆51 · Updated 10 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆236 · Updated last year
- Code for the paper: "Learning to Reason without External Rewards" ☆360 · Updated 3 months ago
- ☆218 · Updated 7 months ago
- ☆143 · Updated 6 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆282 · Updated 2 weeks ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆117 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ☆125 · Updated 7 months ago
- ☆216 · Updated 7 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆132 · Updated last year
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆190 · Updated 7 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆154 · Updated last week
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆230 · Updated 2 months ago
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆200 · Updated last year
- AWM: Agent Workflow Memory ☆328 · Updated 8 months ago
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example ☆361 · Updated this week
- Code for the paper 🌳 Tree Search for Language Model Agents ☆217 · Updated last year
- Open source interpretability artefacts for R1. ☆161 · Updated 5 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆265 · Updated 5 months ago
- Using reinforcement learning to train teacher models that teach LLMs to reason for test-time scaling. ☆343 · Updated 3 months ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- Codes and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆68 · Updated 7 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆165 · Updated last year
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆283 · Updated this week