Ascend-Research/AIO-P
Code repo for the paper "AIO-P: Expanding Neural Performance Predictors Beyond Image Classification", accepted to AAAI-23.
Related projects:
- [CVPR 2021] Code for Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search
- Self-Distillation with weighted ground-truth targets; ResNet and Kernel Ridge Regression
- Implementation of PGONAS (CVPR 2022 Workshops) and RD-NAS (ICASSP 2023)
- Bag of MLP
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC…
- Official implementation of "Continual Learning by Modeling Intra-Class Variation" (MOCA) [TMLR 2023]
- Directed masked autoencoders
- Code for the ICML 2021 paper "Sharing Less is More: Lifelong Learning in Deep Networks with Selective Layer Transfer"
- [ICML 2022] "Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness" by Tianlong Chen*, Huan Zhang*, Zhenyu Zhang, Shiyu…
- We investigated corruption robustness across different architectures, including Convolutional Neural Networks, Vision Transformers, and th…
- Code for RepNAS
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023, notable top 25%)
- Benchmarking Attention Mechanisms in Vision Transformers
- Official implementation for "Pruning Randomly Initialized Neural Networks with Iterative Randomization"
- [ICLR 2023] "Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Better Representations", Ziyu Jian…
- The implementation of the paper "Efficient Attention Network: Accelerate Attention by Searching Where to Plug"
- BESA is a differentiable weight pruning technique for large language models.
- Code for DATA: Differentiable ArchiTecture Approximation
- PyTorch implementation for our paper "EvidentialMix: Learning with Combined Open-set and Closed-set Noisy Labels"