shinkyo0513 / Towards-Visually-Explaining-Video-Understanding-Networks-With-Perturbation
Attribution (or visual explanation) methods for understanding video classification networks. Demo code for the WACV 2021 paper: Towards Visually Explaining Video Understanding Networks with Perturbation.
☆21 · Updated 2 years ago
Alternatives and similar repositories for Towards-Visually-Explaining-Video-Understanding-Networks-With-Perturbation
Users interested in Towards-Visually-Explaining-Video-Understanding-Networks-With-Perturbation are comparing it to the repositories listed below.
- Implementation of the paper Video Action Transformer Network — ☆138 · Updated 4 years ago
- Code of the paper PAN: Towards Fast Action Recognition via Learning Persistence of Appearance — ☆104 · Updated 5 years ago
- Code for [Pattern Recognit. Lett. 2021] "Learn to cycle: Time-consistent feature discovery for action recognition" and [IJCNN 2021] "Mu… — ☆69 · Updated 3 years ago
- Official PyTorch implementation of MotionSqueeze, ECCV 2020 — ☆139 · Updated 4 years ago
- PyTorch implementation of some representative action recognition approaches, including I3D, S3D, TSN, and TAM — ☆257 · Updated 4 years ago
- PyTorch implementation of X3D models with Multigrid training — ☆101 · Updated 4 years ago
- STEP: Spatio-Temporal Progressive Learning for Video Action Detection, CVPR'19 (Oral) — ☆252 · Updated 6 years ago
- Code of the STAGE module for video action detection — ☆48 · Updated 4 years ago
- I3D and 3D-ResNets in PyTorch — ☆194 · Updated 7 years ago
- MARS: Motion-Augmented RGB Stream for Action Recognition — ☆163 · Updated 2 years ago
- PyTorch code of the TEA module (Temporal Excitation and Aggregation for Action Recognition)