rmccorm4 / PyTorch-LMDB
Scripts to work with LMDB + PyTorch for Imagenet training
☆59 · Updated 5 years ago
Alternatives and similar repositories for PyTorch-LMDB
Users interested in PyTorch-LMDB are comparing it to the repositories listed below.
- [ECCV 2020] DADA: Differentiable Automatic Data Augmentation ☆190 · Updated 2 years ago
- Official PyTorch implementation of "Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity" (ICLR'21 Oral) ☆103 · Updated 3 years ago
- Code for our paper "Informative Dropout for Robust Representation Learning: A Shape-bias Perspective" (ICML 2020) ☆125 · Updated 2 years ago
- Use lmdb to speed up imagenet dataset ☆100 · Updated 4 years ago
- MoEx (Moment Exchange) ☆141 · Updated 4 years ago
- Convert image folder to lmdb, adapted from Efficient-PyTorch ☆69 · Updated 2 years ago
- Unofficial implementation with pytorch DistributedDataParallel for "MoCo: Momentum Contrast for Unsupervised Visual Representation Learni… ☆150 · Updated 5 years ago
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning. ☆151 · Updated 2 years ago
- [CVPR 2020] Code for paper "AdversarialNAS: Adversarial Neural Architecture Search for GANs". ☆71 · Updated last year
- The implementation of our paper: Towards Robust Vision Transformer (CVPR 2022) ☆142 · Updated 2 years ago
- ☆93 · Updated 4 years ago
- Pretrained SimCLRv2 models in Pytorch ☆105 · Updated 4 years ago
- Code for the paper "A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses" (ECCV 2020 Spotlight) ☆168 · Updated 2 years ago
- PyTorch implementation of "Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning" with DDP and Apex AMP ☆81 · Updated 4 years ago
- Useful PyTorch functions and modules that are not implemented in PyTorch by default ☆188 · Updated last year
- Differentiable Data Augmentation Library ☆123 · Updated 2 years ago
- Self-supervised Label Augmentation via Input Transformations (ICML 2020) ☆105 · Updated 4 years ago
- Learning recognition/segmentation models without end-to-end training. 40%-60% less GPU memory footprint. Same training time. Better perfo…