pomonam / kronfluence
Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature
★120 · Updated 5 months ago
Alternatives and similar repositories for kronfluence:
Users interested in kronfluence are comparing it to the libraries listed below.
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ★55 · Updated 2 weeks ago
- AI Logging for Interpretability and Explainability ★97 · Updated 7 months ago
- A fast, effective data attribution method for neural networks in PyTorch ★187 · Updated last month
- A library for efficient patching and automatic circuit discovery. ★46 · Updated last month
- A simple PyTorch implementation of influence functions. ★84 · Updated 7 months ago
- Influence Analysis and Estimation - Survey, Papers, and Taxonomy ★66 · Updated 10 months ago
- Using sparse coding to find distributed representations used by neural networks. ★207 · Updated last year
- Efficient empirical NTKs in PyTorch ★18 · Updated 2 years ago
- Full code for the sparse probing paper. ★53 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ★57 · Updated 3 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ★92 · Updated last year
- nanoGPT-like codebase for LLM training ★83 · Updated this week
- Sparse Autoencoder Training Library ★38 · Updated 2 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ★61 · Updated 2 months ago
- Steering Llama 2 with Contrastive Activation Addition ★113 · Updated 7 months ago
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" ★20 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ★89 · Updated 3 years ago