ChunbiaoZhu / PDF-eXpress-check
Fixing the "font not embedded" issue to pass the IEEE PDF eXpress check
☆35, updated 6 years ago
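The usual cause of the PDF eXpress "font not embedded" failure is that the PDF references system fonts instead of embedding them. A common fix (a sketch of the general approach, not necessarily the exact script in this repo; `paper.pdf` and `paper-embedded.pdf` are placeholder filenames) is to re-distill the file with Ghostscript, forcing all fonts to be embedded:

```shell
# Re-write paper.pdf with every font embedded, so it passes the
# IEEE PDF eXpress "font not embedded" check.
gs -dNOPAUSE -dBATCH -dQUIET \
   -sDEVICE=pdfwrite \
   -dPDFSETTINGS=/prepress \
   -dEmbedAllFonts=true \
   -dSubsetFonts=true \
   -sOutputFile=paper-embedded.pdf \
   paper.pdf

# Verify the result (pdffonts is from poppler-utils):
# every listed font should show "yes" in the "emb" column.
pdffonts paper-embedded.pdf
```

`-dPDFSETTINGS=/prepress` selects Ghostscript's print-oriented preset, which embeds and subsets fonts by default; the explicit `-dEmbedAllFonts=true` makes that intent unambiguous.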
Alternatives and similar repositories for PDF-eXpress-check:
Users interested in PDF-eXpress-check are comparing it to the repositories listed below.
- A demo and a series of documents for learning diffusion models. (☆39, updated last year)
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training". (☆18, updated last year)
- [ICLR 2023] Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning (https://arxiv.org/abs/2210.0022…). (☆40, updated 2 years ago)
- [CIKM 2024] Official code for "ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance". (☆18, updated 7 months ago)
- ☆16, updated 4 months ago
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023, notable top 25%). (☆24, updated last year)
- [NeurIPS 2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data. (☆44, updated 2 years ago)
- [NeurIPS 2022] Official implementation of "Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach". (☆44, updated last year)
- [CVPR 2024] "Unsegment Anything by Simulating Deformation". (☆27, updated 10 months ago)
- [NeurIPS 2022] What Makes a "Good" Data Augmentation in Knowledge Distillation: A Statistical Perspective. (☆36, updated 2 years ago)
- Official repo for the WACV 2023 paper "Federated Domain Generalization for Image Recognition via Cross-Client Style Transfer". (☆27, updated last year)
- A comprehensive and versatile open-source federated learning framework. (☆32, updated last year)
- Code for the ECCV 2022 paper "Learning with Recoverable Forgetting". (☆21, updated 2 years ago)
- [IJCAI 2021] Contrastive Model Inversion for Data-Free Knowledge Distillation. (☆71, updated 2 years ago)
- ChatGPT Review & Rebuttal: a browser extension for generating paper reviews and rebuttals, powered by ChatGPT. (☆249, updated 2 years ago)
- Official PyTorch code for "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" (https://arxiv.org/abs/2305.12954). (☆46, updated last year)
- The official GitHub repo for "Test-Time Training with Masked Autoencoders". (☆81, updated last year)
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization". (☆77, updated 11 months ago)
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching. (☆33, updated 9 months ago)
- With respect to the input tensor instead of the parameters of the NN. (☆18, updated 2 years ago)
- Moved to https://github.com/NUS-HPC-AI-Lab/InfoBatch. (☆6, updated last year)
- [TKDE 2024, CIKM 2022] SLA²P: Self-supervised Anomaly Detection with Adversarial Perturbation. (☆37, updated 3 months ago)
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023). (☆40, updated 2 years ago)
- Towards Meta-Pruning via Optimal Transport, ICLR 2024 (Spotlight). (☆16, updated 3 months ago)
- ☆20, updated last year
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht…). (☆91, updated 2 years ago)
- [CVPR-DD 2024 (Oral)] ATOM: Attention Mixer for Efficient Dataset Distillation. (☆8, updated 10 months ago)
- Scala (NeurIPS 2024). (☆10, updated 3 months ago)
- [NeurIPS 2022] Annual Conference on Neural Information Processing Systems. (☆18, updated last year)
- Data-Free Knowledge Distillation. (☆20, updated 2 years ago)