technion-cs-nlp / LLMsKnow
☆74 · Updated 5 months ago
Alternatives and similar repositories for LLMsKnow
Users who are interested in LLMsKnow are comparing it to the libraries listed below.
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen, … ☆50 · Updated 7 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆172 · Updated 3 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆112 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆78 · Updated last year
- ☆95 · Updated last year
- ☆46 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆101 · Updated 4 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆79 · Updated 8 months ago
- General-purpose activation steering library ☆84 · Updated 2 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆95 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆114 · Updated last year
- Codebase for Inference-Time Policy Adapters ☆24 · Updated last year
- ☆99 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆164 · Updated last year
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆38 · Updated 8 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆74 · Updated 4 months ago
- Critique-out-Loud Reward Models ☆67 · Updated 8 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆102 · Updated 3 weeks ago
- Sparse Autoencoder (SAE) research from the OpenMOSS Mechanistic Interpretability Team ☆135 · Updated this week
- Improving Alignment and Robustness with Circuit Breakers ☆220 · Updated 9 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- [ICLR 2025] Official repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆59 · Updated last month
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆149 · Updated 4 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆121 · Updated 7 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆95 · Updated 3 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆76 · Updated 2 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆61 · Updated 7 months ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆24 · Updated 6 months ago