dtsip / in-context-learning
☆233 · Updated last year
Alternatives and similar repositories for in-context-learning
Users interested in in-context-learning are comparing it to the libraries listed below.
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆108 · Updated last year
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature ☆157 · Updated 3 weeks ago
- ☆183 · Updated last year
- ☆99 · Updated 5 months ago
- A fast, effective data attribution method for neural networks in PyTorch ☆212 · Updated 7 months ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆17 · Updated 7 months ago
- ☆95 · Updated last year
- AI Logging for Interpretability and Explainability🔬 ☆124 · Updated last year
- ☆83 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆102 · Updated 2 years ago
- ☆121 · Updated 11 months ago
- Bayesian low-rank adaptation for large language models ☆23 · Updated last year
- Using sparse coding to find distributed representations used by neural networks. ☆259 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆74 · Updated 4 months ago
- ☆70 · Updated 3 years ago
- ☆216 · Updated last year
- ☆140 · Updated 7 months ago
- Efficient empirical NTKs in PyTorch ☆18 · Updated 3 years ago
- ☆93 · Updated last year
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ☆78 · Updated last month
- ☆43 · Updated last year
- Official code for the paper Probing the Decision Boundaries of In-context Learning in Large Language Models. https://arxiv.org/abs/2406.11233… ☆18 · Updated 10 months ago
- ☆231 · Updated 9 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆94 · Updated 3 years ago
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective ☆59 · Updated 4 months ago
- A curated list of papers with interesting empirical studies and insights on deep learning. Continually updated. ☆330 · Updated last week
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆71 · Updated 9 months ago
- ☆105 · Updated last month
- Function Vectors in Large Language Models (ICLR 2024) ☆170 · Updated 2 months ago
- LLM-Merging: Building LLMs Efficiently through Merging ☆201 · Updated 9 months ago