goldblum / free-lunch
Implementation of experiments from "The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning"
☆17 · Updated 2 years ago
Alternatives and similar repositories for free-lunch
Users interested in free-lunch are comparing it to the libraries listed below.
- ☆53 · Updated last year
- ☆26 · Updated 4 months ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆17 · Updated 7 months ago
- A centralized place for deep thinking code and experiments ☆84 · Updated last year
- ☆68 · Updated 6 months ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- Sparse Autoencoder Training Library ☆52 · Updated last month
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated 2 years ago
- ☆29 · Updated 3 months ago
- Pytorch code for experiments on Linear Transformers ☆21 · Updated last year
- nanoGPT-like codebase for LLM training ☆98 · Updated last month
- ☆12 · Updated 3 months ago
- Source code of "What can linearized neural networks actually say about generalization?" ☆20 · Updated 3 years ago
- ☆53 · Updated 8 months ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆37 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆66 · Updated 9 months ago
- ☆20 · Updated 11 months ago
- Proof-of-concept of global switching between numpy/jax/pytorch in a library. ☆18 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated last year
- Code associated to papers on superposition (in ML interpretability) ☆28 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆89 · Updated 7 months ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · Updated 4 years ago
- ☆48 · Updated last year
- Deep Learning & Information Bottleneck ☆60 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 7 months ago
- Implementation of Influence Function approximations for differently sized ML models, using PyTorch ☆15 · Updated last year
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task. ☆44 · Updated 4 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆27 · Updated last year
- Efficient empirical NTKs in PyTorch ☆18 · Updated 3 years ago