RylanSchaeffer / Stanford-AI-Alignment-Double-Descent-Tutorial
Code for the arXiv paper "Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle"
☆61 · Updated 2 years ago
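The double descent phenomenon this tutorial studies can be reproduced in a few lines. The sketch below is illustrative and not taken from the repository: it sweeps minimum-norm least squares (via the pseudoinverse) across the interpolation threshold, where the number of features `d` equals the number of training samples, and averages test error over random draws. All names and parameter values are assumptions, and only NumPy is required.

```python
import numpy as np

def double_descent_curve(dims, n_train=40, n_test=500, d_total=120,
                         noise=0.5, trials=20, seed=0):
    """Average test MSE of minimum-norm least squares as the number of
    features d sweeps across the interpolation threshold d == n_train."""
    rng = np.random.default_rng(seed)
    mse = {d: 0.0 for d in dims}
    for _ in range(trials):
        # Ground-truth linear signal in a d_total-dimensional ambient space
        w_true = rng.normal(size=d_total) / np.sqrt(d_total)
        X_tr = rng.normal(size=(n_train, d_total))
        X_te = rng.normal(size=(n_test, d_total))
        y_tr = X_tr @ w_true + noise * rng.normal(size=n_train)
        y_te = X_te @ w_true + noise * rng.normal(size=n_test)
        for d in dims:
            # pinv yields the minimum-norm solution in both the under-
            # and over-parameterized regimes
            w_hat = np.linalg.pinv(X_tr[:, :d]) @ y_tr
            mse[d] += np.mean((X_te[:, :d] @ w_hat - y_te) ** 2) / trials
    return mse

dims = [5, 20, 40, 80, 120]
curve = double_descent_curve(dims)
for d in dims:
    print(f"d={d:3d}  avg test MSE={curve[d]:.2f}")
```

Test error peaks sharply near `d == n_train` (the interpolation threshold) and falls again as the model becomes heavily over-parameterized, the signature shape that the tutorial ablates source by source.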
Alternatives and similar repositories for Stanford-AI-Alignment-Double-Descent-Tutorial
Users interested in Stanford-AI-Alignment-Double-Descent-Tutorial are comparing it to the repositories listed below
- Understanding how features learned by neural networks evolve throughout training ☆40 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆40 · Updated 2 years ago
- Omnigrok: Grokking Beyond Algorithmic Data ☆62 · Updated 2 years ago
- ☆73 · Updated last year
- Automatic identification of regions in the latent space of a model that correspond to unique concepts, namely to concepts with a semantic… ☆14 · Updated 2 years ago
- Portfolio REgret for Confidence SEquences ☆20 · Updated last year
- This repository contains a Jax implementation of conformal training corresponding to the ICLR'22 paper "learning optimal conformal classi… ☆130 · Updated 3 years ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆174 · Updated 2 years ago
- Unofficial implementation of Conformal Language Modeling by Quach et al ☆29 · Updated 2 years ago
- Quantification of Uncertainty with Adversarial Models ☆29 · Updated 2 years ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆149 · Updated 2 months ago
- ☆27 · Updated 2 years ago
- ☆16 · Updated 3 months ago
- Automatic Integration for Neural Spatio-Temporal Point Process models (AI-STPP) is a new paradigm for exact, efficient, non-parametric inf… ☆25 · Updated last year
- ☆68 · Updated 8 months ago
- Sparse and discrete interpretability tool for neural networks ☆65 · Updated last year
- ☆32 · Updated last year
- ☆147 · Updated 3 months ago
- PyTorch library for Active Fine-Tuning ☆95 · Updated 2 months ago
- Course Materials for Interpretability of Large Language Models (0368.4264) at Tel Aviv University ☆268 · Updated this week
- ☆15 · Updated 2 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆198 · Updated last year
- ModelDiff: A Framework for Comparing Learning Algorithms ☆58 · Updated 2 years ago
- A reading list of relevant papers and projects on foundation model annotation ☆28 · Updated 9 months ago
- Deep Networks Grok All the Time and Here is Why ☆38 · Updated last year
- Parameter-Free Optimizers for PyTorch ☆130 · Updated last year
- 👋 Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆71 · Updated 2 years ago
- ☆169 · Updated last week
- Deep Learning, an Energy Approach ☆226 · Updated 6 months ago
- A collection of AWESOME things about information geometry ☆172 · Updated last year