google-research / fooling-feature-visualizations
Code for "Don't trust your eyes: on the (un)reliability of feature visualizations" (ICML 2024)
☆32 · Updated last year
Alternatives and similar repositories for fooling-feature-visualizations
Users interested in fooling-feature-visualizations are comparing it to the repositories listed below.
- A PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- An official PyTorch implementation for CLIPPR ☆29 · Updated last year
- ☆13 · Updated 2 years ago
- The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns … ☆16 · Updated last month
- Code for T-MARS data filtering ☆35 · Updated last year
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆21 · Updated last year
- Official PyTorch implementation of "Rosetta Neurons: Mining the Common Units in a Model Zoo" ☆30 · Updated last year
- Experimental scripts for researching data-adaptive learning rate scheduling. ☆23 · Updated last year
- ☆22 · Updated 6 months ago
- GitHub code for the paper "Maximum Class Separation as Inductive Bias in One Matrix". arXiv link: https://arxiv.org/abs/2206.08704 ☆29 · Updated 2 years ago
- ☆51 · Updated last year
- ☆26 · Updated 3 years ago
- Official code for the paper "Image generation with shortest path diffusion", accepted at ICML 2023. ☆23 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Code for "CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping" ☆17 · Updated 2 years ago
- Repository for the paper "Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning" ☆36 · Updated 2 years ago
- PyTorch implementation of CLIP-Lite | Accepted at AISTATS 2023 ☆13 · Updated 2 years ago
- Code for the paper "Self-Supervised Learning of Split Invariant Equivariant Representations" ☆28 · Updated last year
- Official code and data for the NeurIPS 2023 paper "ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial … ☆39 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS '24] ☆57 · Updated 7 months ago
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 10 months ago
- ☆57 · Updated 2 weeks ago
- ☆33 · Updated last year
- ☆18 · Updated 3 years ago
- Directed masked autoencoders ☆14 · Updated 2 years ago
- ☆38 · Updated last year
- ☆29 · Updated 2 years ago
- Official code for "Visual Attention Emerges from Recurrent Sparse Reconstruction" (ICML 2022) ☆36 · Updated 3 years ago
- ☆12 · Updated 2 years ago
- A benchmark dataset and simple code examples for measuring the perception and reasoning of multi-sensor vision-language models. ☆18 · Updated 6 months ago