nrimsky / InfluenceFunctions
Implementation of Influence Function approximations for differently sized ML models, using PyTorch
☆15 · Updated last year
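For context on the repository's topic: influence functions (Koh & Liang, 2017) estimate how much up-weighting a single training example would change the loss on a test example, via an inverse-Hessian-vector product. The sketch below is a minimal, generic PyTorch illustration of that approximation using the LiSSA recursion; the function names and hyperparameters (`damping`, `scale`, `steps`) are illustrative assumptions, not this repository's actual API.

```python
# Minimal sketch of the influence-function approximation of Koh & Liang (2017),
# for illustration only -- not this repository's code.
# influence(z, z_test) ≈ -∇L(z_test)ᵀ H⁻¹ ∇L(z), with H⁻¹ v estimated by LiSSA.
import torch


def flat_grad(loss, params, create_graph=False):
    """Gradient of `loss` w.r.t. `params`, flattened into a single vector."""
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])


def hvp(loss_fn, params, v):
    """Hessian-vector product H v via double backprop on one batch loss."""
    grad = flat_grad(loss_fn(), params, create_graph=True)
    return flat_grad(grad @ v, params)


def lissa_inverse_hvp(loss_fn, params, v, damping=0.01, scale=25.0, steps=100):
    """Approximate H^{-1} v with the LiSSA recursion h <- v + (1 - damping) h - Hh / scale."""
    h = v.clone()
    for _ in range(steps):
        h = v + (1.0 - damping) * h - hvp(loss_fn, params, h) / scale
    return h / scale


def influence(params, train_loss_fn, test_loss_fn, hessian_loss_fn):
    """Estimated effect of up-weighting one training example on the test loss."""
    g_test = flat_grad(test_loss_fn(), params)                  # ∇L(z_test)
    ihvp = lissa_inverse_hvp(hessian_loss_fn, params, g_test)   # H⁻¹ ∇L(z_test)
    g_train = flat_grad(train_loss_fn(), params)                # ∇L(z)
    return -(ihvp @ g_train).item()
```

Here each `*_loss_fn` is assumed to be a zero-argument closure that recomputes the relevant loss with the current parameters; in practice the Hessian loss would typically be evaluated on a fresh mini-batch at each LiSSA step.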
Alternatives and similar repositories for InfluenceFunctions
Users interested in InfluenceFunctions are comparing it to the libraries listed below.
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- ☆54 · Updated 2 years ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- PyTorch and NNsight implementation of AtP* (Kramar et al 2024, DeepMind) ☆18 · Updated 6 months ago
- Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models" ☆12 · Updated last year
- ☆27 · Updated 5 months ago
- ☆45 · Updated last year
- ☆35 · Updated 2 years ago
- Sparse Autoencoder Training Library ☆54 · Updated 3 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- Minimum Description Length probing for neural network representations ☆18 · Updated 6 months ago
- The repository contains code for Adaptive Data Optimization ☆25 · Updated 7 months ago
- ☆60 · Updated 3 years ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- ☆29 · Updated 2 years ago
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated 3 months ago
- ☆44 · Updated 8 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆27 · Updated last year
- ☆75 · Updated last year
- ☆50 · Updated last year
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆78 · Updated last year
- Simple and scalable tools for data-driven pretraining data selection. ☆24 · Updated last month
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆38 · Updated 2 years ago
- Understanding how features learned by neural networks evolve throughout training ☆36 · Updated 9 months ago
- ☆89 · Updated last year
- ☆26 · Updated 2 years ago