msakarvadia / memorization
Localizing Memorized Sequences in Language Models
☆16 · Updated 3 months ago
Alternatives and similar repositories for memorization
Users interested in memorization are comparing it to the repositories listed below
- ☆35 · Updated 6 months ago
- Conformal Language Modeling ☆30 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆47 · Updated 8 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆70 · Updated 8 months ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". The official repository of the implementation of the p… ☆11 · Updated 5 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆26 · Updated last year
- ☆9 · Updated last year
- ☆69 · Updated 3 years ago
- ☆44 · Updated 3 months ago
- ☆40 · Updated last year
- ☆51 · Updated last year
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ☆77 · Updated 2 weeks ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆77 · Updated 6 months ago
- ☆49 · Updated last year
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆29 · Updated 5 months ago
- ☆48 · Updated last year
- ☆30 · Updated last year
- ☆37 · Updated last month
- Learning adapter weights from task descriptions ☆19 · Updated last year
- Code for "Automatic Circuit Finding and Faithfulness" ☆11 · Updated 11 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆27 · Updated 9 months ago
- [NeurIPS 2023 Spotlight] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training ☆35 · Updated 2 months ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆75 · Updated last year
- Bayesian low-rank adaptation for large language models ☆23 · Updated last year
- Efficient empirical NTKs in PyTorch ☆18 · Updated 3 years ago
- ☆60 · Updated 3 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆93 · Updated 3 years ago
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆33 · Updated 11 months ago
- Code for the EMNLP 2024 paper: How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for M… ☆12 · Updated 7 months ago
- Implementation of Gradient Information Optimization (GIO) for effective and scalable training data selection ☆14 · Updated 2 years ago