msakarvadia / memorization
Localizing Memorized Sequences in Language Models
☆18 · Updated 5 months ago
Alternatives and similar repositories for memorization
Users interested in memorization are comparing it to the libraries listed below.
- ☆22 · Updated 3 months ago
- Conformal Language Modeling ☆32 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆54 · Updated 11 months ago
- ☆36 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆75 · Updated 11 months ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆78 · Updated 2 years ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… ☆12 · Updated 7 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆27 · Updated last year
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ☆84 · Updated 3 months ago
- Code for ACL 2023 paper "BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases". ☆20 · Updated 2 years ago
- ☆55 · Updated 2 years ago
- ☆99 · Updated last year
- "Understanding Dataset Difficulty with V-Usable Information" (ICML 2022, outstanding paper) ☆87 · Updated last year
- Influence Analysis and Estimation - Survey, Papers, and Taxonomy ☆82 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆96 · Updated 4 years ago
- ☆52 · Updated 5 months ago
- AI Logging for Interpretability and Explainability🔬 ☆126 · Updated last year
- ☆29 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆78 · Updated 6 months ago
- [ICLR 2025] General-purpose activation steering library ☆102 · Updated 2 weeks ago
- ☆22 · Updated 4 months ago
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature ☆162 · Updated 2 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆128 · Updated 2 months ago
- ☆106 · Updated 7 months ago
- ☆73 · Updated 3 years ago
- Interpreting the latent space representations of attention head outputs for LLMs ☆34 · Updated last year
- ☆97 · Updated last year
- Official code of the paper Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le… ☆75 · Updated last year
- ☆11 · Updated 2 weeks ago
- Sparse probing paper full code. ☆60 · Updated last year