AdeelH / pytorch-multi-class-focal-loss
An (unofficial) implementation of Focal Loss, as described in the RetinaNet paper, generalized to the multi-class case.
☆235 · Updated last year
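For context on what such repositories implement: focal loss, introduced in the RetinaNet paper, scales the standard log-loss of the true class by a factor (1 − p_t)^γ so that well-classified examples contribute less. Below is a minimal NumPy sketch of the multi-class form of that formula — an illustration under the usual definitions, not code from this repository (the function names `softmax` and `focal_loss` are chosen here for clarity):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits:  (N, C) raw class scores
    targets: (N,) integer class indices
    gamma:   focusing parameter; gamma = 0 recovers cross-entropy
    alpha:   optional (C,) per-class weights
    """
    probs = softmax(logits)
    # Probability assigned to each sample's true class.
    p_t = probs[np.arange(len(targets)), targets]
    loss = -((1.0 - p_t) ** gamma) * np.log(p_t)
    if alpha is not None:
        loss = loss * np.asarray(alpha)[targets]
    return loss.mean()
```

With `gamma=0` and no `alpha` this reduces to ordinary cross-entropy; raising `gamma` shrinks the loss on confidently-correct samples, which is the mechanism the paper uses to handle class imbalance.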
Alternatives and similar repositories for pytorch-multi-class-focal-loss:
Users interested in pytorch-multi-class-focal-loss are comparing it to the libraries listed below.
- Unofficial PyTorch implementation of "Meta Pseudo Labels" ☆386 · Updated last year
- PyTorch implementation of Focal Loss and Lovász-Softmax Loss ☆333 · Updated 3 years ago
- ☆457 · Updated 2 years ago
- Learning Rate Warmup in PyTorch ☆409 · Updated last month
- Tiny PyTorch library for maintaining a moving average of a collection of parameters ☆426 · Updated 6 months ago
- PyTorch implementation of the paper "Class-Balanced Loss Based on Effective Number of Samples" ☆796 · Updated last year
- Gradually-Warmup Learning Rate Scheduler for PyTorch ☆988 · Updated 6 months ago
- Self-supervised vIsion Transformer (SiT) ☆327 · Updated 2 years ago
- A PyTorch implementation of Focal Loss ☆983 · Updated 5 years ago
- Implementation of Transformer in Transformer, pixel-level attention paired with patch-level attention for image classification, in PyTorch ☆304 · Updated 3 years ago
- Unofficial PyTorch reimplementation of RandAugment ☆636 · Updated 2 years ago
- Official PyTorch implementation of the "Asymmetric Loss for Multi-Label Classification" (ICCV 2021) paper ☆755 · Updated last year
- Reproduces results for the ICCV 2019 paper "Symmetric Cross Entropy for Robust Learning with Noisy Labels" (https://arxiv.org/abs/1908.06112) ☆186 · Updated 4 years ago
- Official implementation of "Early-Learning Regularization Prevents Memorization of Noisy Labels" ☆295 · Updated last year
- Code for the Convolutional Vision Transformer (ConViT) ☆466 · Updated 3 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (train your Vision Transformers in 30 minutes on CIFAR-10 with a single GPU!) ☆522 · Updated 5 months ago
- 🛠 Toolbox to extend PyTorch functionalities ☆418 · Updated 11 months ago
- Experiments with supervised contrastive learning methods using different loss functions ☆220 · Updated 2 years ago
- EsViT: Efficient self-supervised Vision Transformers ☆410 · Updated last year
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆457 · Updated 2 years ago
- Implementation of pixel-level contrastive learning, proposed in the paper "Propagate Yourself", in PyTorch ☆258 · Updated 4 years ago
- ☆51 · Updated 4 years ago
- A LARS implementation in PyTorch ☆344 · Updated 5 years ago
- PyTorch implementation of "Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning" ☆491 · Updated 2 years ago
- (ICCV 2021 Oral) CoaT: Co-Scale Conv-Attentional Image Transformers ☆231 · Updated 3 years ago
- Implementation of axial attention, attending to multi-dimensional data efficiently ☆377 · Updated 3 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723)