naver / tldr
TLDR is an unsupervised dimensionality reduction method that combines neighborhood embedding learning with the simplicity and effectiveness of recent self-supervised learning losses.
☆126 · Updated 3 years ago
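For context when comparing the libraries below, here is a minimal sketch of the idea behind TLDR: use each point's k nearest neighbors as positive pairs and train a small encoder with a Barlow Twins-style redundancy-reduction loss. This assumes PyTorch; the helpers `knn_pairs` and `barlow_twins_loss` are illustrative names, not the official naver/tldr API.

```python
# Sketch of the TLDR recipe: k-NN pairs + Barlow Twins-style loss.
import torch
import torch.nn as nn

def knn_pairs(x, k=5):
    """Return (anchor, neighbor) index pairs via exact k-NN in input space."""
    d = torch.cdist(x, x)                      # pairwise distances, shape (n, n)
    d.fill_diagonal_(float("inf"))             # exclude self-matches
    nn_idx = d.topk(k, largest=False).indices  # (n, k) nearest-neighbor indices
    anchors = torch.arange(x.size(0)).repeat_interleave(k)
    return anchors, nn_idx.reshape(-1)

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Align the two views while decorrelating embedding dimensions."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / z1.size(0)               # cross-correlation, shape (d, d)
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

# Toy usage: reduce 128-d vectors to 32 dimensions.
x = torch.randn(512, 128)
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 32))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

a, b = knn_pairs(x, k=5)                       # pairs are computed once, offline
for _ in range(100):
    loss = barlow_twins_loss(encoder(x[a]), encoder(x[b]))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The actual repository scales this up (e.g., with approximate nearest-neighbor search for the pair mining and a projector head during training); see naver/tldr for the real implementation.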
Alternatives and similar repositories for tldr
Users interested in tldr are comparing it to the libraries listed below.
- ☆96 · Updated 3 years ago
- A Domain-Agnostic Benchmark for Self-Supervised Learning ☆106 · Updated 2 years ago
- Codebase for the SIMAT dataset and evaluation ☆38 · Updated 3 years ago
- Code for the paper "Can contrastive learning avoid shortcut solutions?" (NeurIPS 2021) ☆47 · Updated 3 years ago
- Understanding model mistakes with human annotations ☆106 · Updated 2 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 3 years ago
- The official repository for the paper "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning" ☆40 · Updated 3 years ago
- Code to reproduce the results for Compositional Attention ☆59 · Updated 3 years ago
- ☆81 · Updated last year
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Command-line tool for downloading and extending the RedCaps dataset ☆50 · Updated last year
- Fine-grained ImageNet annotations ☆30 · Updated 5 years ago
- [NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows … ☆133 · Updated 3 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago
- ☆42 · Updated 2 years ago
- ☆57 · Updated 3 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated 2 years ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆41 · Updated 4 years ago
- Implementation of Discrete Key / Value Bottleneck, in PyTorch ☆88 · Updated 2 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Official code for the paper "Metadata Archaeology" ☆19 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- Official implementation of the paper "Topographic VAEs learn Equivariant Capsules" ☆81 · Updated 3 years ago
- [CogSci'21] Study of human inductive biases in CNNs and Transformers ☆43 · Updated 4 years ago
- ☆209 · Updated 3 years ago
- Compressing Representations for Self-Supervised Learning ☆78 · Updated 4 years ago
- Stochastic Optimization for Global Contrastive Learning without Large Mini-batches ☆20 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- Another attempt at a long-context / efficient transformer, by me ☆38 · Updated 3 years ago