Ybakman / TruthTorchLM
☆50 · Updated 2 weeks ago
Alternatives and similar repositories for TruthTorchLM
Users interested in TruthTorchLM are comparing it to the libraries listed below.
- This repository collects all relevant resources about interpretability in LLMs ☆373 · Updated 11 months ago
- ☆199 · Updated 10 months ago
- ☆125 · Updated 2 weeks ago
- Conformal Language Modeling ☆32 · Updated last year
- The one-stop repository for large language model (LLM) unlearning. Supports TOFU, MUSE, WMDP, and many unlearning methods. All features: … ☆379 · Updated 2 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆77 · Updated last year
- A resource repository for representation engineering in large language models ☆136 · Updated 10 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆128 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆218 · Updated 10 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆272 · Updated last year
- [ICLR 2025] General-purpose activation steering library ☆108 · Updated 2 weeks ago
- ☆174 · Updated 10 months ago
- ☆99 · Updated last year
- An open-source implementation of Anthropic's paper "Towards Monosemanticity: Decomposing Language Models with Dictionary Learning" ☆49 · Updated last year
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature ☆165 · Updated 3 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆134 · Updated 3 months ago
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆190 · Updated 2 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆187 · Updated last year
- ☆187 · Updated 2 months ago
- ☆177 · Updated last year
- ☆32 · Updated 10 months ago
- ☆347 · Updated last month
- ☆97 · Updated last year
- ☆242 · Updated last year
- Python package for measuring memorization in LLMs. ☆166 · Updated 2 months ago
- ☆26 · Updated 7 months ago
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ☆271 · Updated 6 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆80 · Updated 7 months ago
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments). ☆371 · Updated last year
- ☆240 · Updated last year