mansheej / icl-task-diversity
Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression"
☆23 · Updated 2 years ago
Alternatives and similar repositories for icl-task-diversity
Users interested in icl-task-diversity are comparing it to the libraries listed below.
- ☆34 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature ☆166 · Updated 4 months ago
- ☆240 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… ☆46 · Updated 2 years ago
- Efficient empirical NTKs in PyTorch ☆22 · Updated 3 years ago
- ☆31 · Updated last year
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆108 · Updated last year
- ☆71 · Updated 10 months ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆19 · Updated 11 months ago
- ☆107 · Updated 8 months ago
- Official repository for ICML 2023 paper "Can Neural Network Memorization Be Localized?" ☆20 · Updated 2 years ago
- Influence Analysis and Estimation - Survey, Papers, and Taxonomy ☆83 · Updated last year
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ☆90 · Updated last week
- Bayesian low-rank adaptation for large language models ☆25 · Updated last year
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations ☆30 · Updated 5 years ago
- ☆77 · Updated 3 years ago
- A simple PyTorch implementation of influence functions. ☆91 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆220 · Updated 11 months ago
- A library for efficient patching and automatic circuit discovery. ☆78 · Updated 3 months ago
- ☆69 · Updated 3 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆97 · Updated 4 years ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆27 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated 2 years ago
- ☆62 · Updated 4 years ago
- ☆37 · Updated 10 months ago
- ☆83 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago