hila-chefer / Transformer-MM-Explainability
[ICCV 2021 - Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a method for visualizing any Transformer-based network. Includes examples for DETR and VQA.
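The paper's core idea is a relevance-propagation rule: starting from an identity relevance matrix, each attention layer's contribution is its gradient-weighted attention map (negative terms clipped, averaged over heads), accumulated layer by layer. The following is a minimal NumPy sketch of that self-attention update rule, not the repository's actual code; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def relevance_update(attn_maps, attn_grads):
    """Sketch of the generic attention relevance rule (Chefer et al., 2021).

    attn_maps, attn_grads: lists of per-layer arrays, each of shape
    (heads, tokens, tokens), holding attention probabilities and their
    gradients w.r.t. the target class score.
    """
    num_tokens = attn_maps[0].shape[-1]
    # Relevance starts as identity: each token is fully relevant to itself.
    R = np.eye(num_tokens)
    for A, grad in zip(attn_maps, attn_grads):
        # Gradient-weighted attention, negative contributions clipped,
        # then averaged over attention heads.
        A_bar = np.clip(grad * A, 0.0, None).mean(axis=0)
        # Propagate accumulated relevance through this layer.
        R = R + A_bar @ R
    return R
```

In practice the attention maps and their gradients would be captured with forward/backward hooks on the attention modules; the row of the resulting matrix corresponding to the [CLS] token then serves as the per-token relevance map.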
903 stars · Updated Aug 24, 2023

Alternatives and similar repositories for Transformer-MM-Explainability

Users interested in Transformer-MM-Explainability are comparing it to the libraries listed below.

