dtsip / in-context-learning
☆240 · Updated last year
Alternatives and similar repositories for in-context-learning
Users interested in in-context-learning are comparing it to the libraries listed below.
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature — ☆166 · Updated 4 months ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici…" — ☆108 · Updated last year
- ☆107 · Updated 8 months ago
- ☆185 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch — ☆220 · Updated 11 months ago
- ☆98 · Updated last year
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… — ☆18 · Updated 11 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" — ☆105 · Updated 2 years ago
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms — ☆90 · Updated last week
- Using sparse coding to find distributed representations used by neural networks — ☆280 · Updated last year
- ☆126 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) — ☆181 · Updated 6 months ago
- AI Logging for Interpretability and Explainability 🔬 — ☆129 · Updated last year
- ☆83 · Updated 2 years ago
- ☆181 · Updated 11 months ago
- ☆34 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- ☆102 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity — ☆82 · Updated 7 months ago
- ☆77 · Updated 3 years ago
- ☆69 · Updated 3 years ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) — ☆76 · Updated last year
- Bayesian low-rank adaptation for large language models — ☆25 · Updated last year
- An Open Source Implementation of Anthropic's Paper: "Towards Monosemanticity: Decomposing Language Models with Dictionary Learning" — ☆49 · Updated last year
- ☆71 · Updated 10 months ago
- ☆45 · Updated last year
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective — ☆64 · Updated 7 months ago
- Efficient empirical NTKs in PyTorch — ☆22 · Updated 3 years ago
- ☆234 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… — ☆97 · Updated 4 years ago