wl-zhao / VPD

[ICCV 2023] VPD is a framework that leverages the high-level and low-level knowledge of a pre-trained text-to-image diffusion model for downstream visual perception tasks.
526 stars · Updated last year

Alternatives and similar repositories for VPD

Users interested in VPD are comparing it to the libraries listed below.
