shaoshitong / diffusion-model-learning
A demo and a series of documents for learning diffusion models.
☆39 · Updated last year
Alternatives and similar repositories for diffusion-model-learning:
Users interested in diffusion-model-learning are comparing it to the libraries listed below:
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated 2 years ago
- [ECCV 2022] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers ☆28 · Updated 2 years ago
- Paper List for In-context Learning 🌷 ☆20 · Updated 2 years ago
- Official PyTorch Code for "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" (https://arxiv.org/abs/2305.12954) ☆46 · Updated last year
- Official implementation of LaVin-DiT ☆24 · Updated last month
- ☆43 · Updated last year
- Benchmarking Attention Mechanism in Vision Transformers. ☆17 · Updated 2 years ago
- ☆106 · Updated last year
- This repo is the official MegEngine implementation of the ECCV 2022 paper: Efficient One Pass Self-distillation with Zipf's Label Smoothing ☆26 · Updated 2 years ago
- [ICCV 2021] Official PyTorch Code for "Online Knowledge Distillation for Efficient Pose Estimation" ☆43 · Updated last year
- Anytime Dense Prediction with Confidence Adaptivity (ICLR 2022) ☆50 · Updated 7 months ago
- ☆52 · Updated 2 years ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques ☆93 · Updated last year
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization"☆76Updated 11 months ago
- Large-batch Optimization for Dense Visual Predictions (NeurIPS 2022)☆56Updated 2 years ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones".☆27Updated last year
- ☆45Updated last year
- https://hyperbox-doc.readthedocs.io/en/latest/☆25Updated last year
- ☆29Updated 4 years ago
- ☆43Updated last year
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129)☆90Updated 2 years ago
- Pytorch implementation of our paper accepted by ECCV2022 -- Knowledge Condensation Distillation https://arxiv.org/abs/2207.05409☆30Updated 2 years ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆71 · Updated 2 years ago
- Official implementation of the paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆81 · Updated last year
- An mmcv logger hook that can push model results to WeChat ☆20 · Updated 2 years ago
- [CVPR 2025] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers ☆40 · Updated 6 months ago
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆17 · Updated 11 months ago
- Official Codes and Pretrained Models for RecursiveMix ☆22 · Updated last year
- Distilling the powerful segment anything models into lightweight ones for efficient segmentation. ☆29 · Updated last year
- (CVPR 2024) "Unsegment Anything by Simulating Deformation" ☆27 · Updated 9 months ago