Linfeng-Tang / M2VD
M2VD: Multi-modal Multi-scene Video Dataset
☆23 · Updated 3 weeks ago
Alternatives and similar repositories for M2VD:
Users that are interested in M2VD are comparing it to the libraries listed below
- Official code of ReDFeat: Recoupling Detection and Description for Multimodal Feature Learning (TIP 2023) ☆32 · Updated 2 years ago
- Information Fusion 2023 | Semantics lead all: Towards unified image registration and fusion from a semantic perspective ☆44 · Updated last year
- Infrared and visible remote sensing image registration algorithms and a dataset containing sparse-texture areas ☆10 · Updated last year
- [IEEE TCSVT 2024] Improving Misaligned Multi-modality Image Fusion with One-stage Progressive Dense Registration ☆58 · Updated 3 months ago
- The code of STDFusionNet ☆30 · Updated 2 years ago
- Datasets: road-scene-infrared-visible-images, for feature matching, image registration, and image fusion ☆31 · Updated 5 years ago
- Demo code of the paper "Deep Image Registration With Depth-Aware Homography Estimation" ☆9 · Updated 2 years ago
- Code of SDNet: A Versatile Squeeze-and-Decomposition Network for Real-Time Image Fusion ☆40 · Updated 3 years ago
- Datasets: road-scene-infrared-visible-images, for feature matching, image registration, and image fusion ☆91 · Updated 4 years ago
- This is the official TensorFlow implementation of "PIAFusion: A Progressive Infrared and Visible Image Fusion Network Based on Illumination A…" ☆92 · Updated 9 months ago
- XoFTR is a cross-modal, cross-view method for local feature matching between thermal infrared (TIR) and visible images ☆101 · Updated 6 months ago
- This is the official PyTorch implementation of "DRMF: Degradation-Robust Multi-Modal Image Fusion via Composable Diffusion Prior" (ACM MM 24)… ☆40 · Updated 4 months ago
- ☆16 · Updated last year
- ☆12 · Updated 2 months ago
- This is the open-source implementation of the CVPR 2022 paper "Iterative Deep Homography Estimation" ☆90 · Updated 2 years ago
- Source Code for paper "Infrared and Visible Image Fusion via Parallel Scene and Texture Learning".