xinke-wang / Awesome-Text-VQA
☆188 · May 8, 2024 · Updated last year
Alternatives and similar repositories for Awesome-Text-VQA
Users interested in Awesome-Text-VQA are comparing it to the repositories listed below.
- Official code for the paper "Spatially Aware Multimodal Transformers for TextVQA", published at ECCV 2020 (☆65 · Sep 15, 2021 · Updated 4 years ago)
- TAP: Text-Aware Pre-training for Text-VQA and Text-Caption, CVPR 2021 (Oral) (☆72 · May 22, 2023 · Updated 2 years ago)
- RUArt: A Novel Text-Centered Solution for Text-Based Visual Question Answering (☆10 · Nov 27, 2022 · Updated 3 years ago)
- Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answer… (☆55 · Oct 30, 2024 · Updated last year)
- The imdb files with SBD-Trans OCR for the TextVQA dataset (☆11 · Nov 30, 2021 · Updated 4 years ago)
- Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps [AAAI 2021] (☆57 · Apr 5, 2022 · Updated 3 years ago)
- A modular framework for Visual Question Answering research by the FAIR A-STAR team (☆45 · Aug 26, 2021 · Updated 4 years ago)
- Baselines for the DocVQA dataset (☆21 · Apr 11, 2021 · Updated 4 years ago)
- A curated list of Visual Question Answering (VQA) (Image/Video Question Answering), Visual Question Generation, Visual Dialog, Visual Common… (☆671 · Jul 6, 2023 · Updated 2 years ago)
- Used in the M4C feature extraction script: https://github.com/facebookresearch/mmf/blob/project/m4c/projects/M4C/scripts/extract_ocr_frcn_fea… (☆13 · Jan 30, 2020 · Updated 6 years ago)
- An unofficial PyTorch implementation of "Lin et al. ViBERTgrid: A Jointly Trained Multi-Modal 2D Document Representation for Key Informat… (☆53 · Jan 9, 2024 · Updated 2 years ago)
- STVQA and TextVQA OCR results from the Amazon Text in Image pipeline (☆11 · Jul 18, 2022 · Updated 3 years ago)
- Code release for Hu et al., "Language-Conditioned Graph Networks for Relational Reasoning", ICCV 2019 (☆92 · Aug 9, 2019 · Updated 6 years ago)
- ☆31 · Dec 18, 2025 · Updated last month
- [ACM MM 2020] Exploring Font-independent Features for Scene Text Recognition (☆44 · Nov 30, 2020 · Updated 5 years ago)
- ☆22 · Dec 8, 2022 · Updated 3 years ago
- The dataset used in the CVPR 2022 paper (SimAN: Exploring Self-Supervised Representation Learning of Scene Text via Similarity-Aware Norm… (☆34 · Jun 21, 2022 · Updated 3 years ago)
- VisualMRC: Machine Reading Comprehension on Document Images (AAAI 2021) (☆57 · Mar 31, 2025 · Updated 10 months ago)
- OCR Annotations from Amazon Textract for the Industry Documents Library (☆103 · Aug 20, 2022 · Updated 3 years ago)
- Counterfactual Samples Synthesizing for Robust VQA (☆79 · Nov 24, 2022 · Updated 3 years ago)
- [AAAI 2021] Confidence-aware Non-repetitive Multimodal Transformers for TextCaps (☆24 · Mar 29, 2023 · Updated 2 years ago)
- Towards Video Text Visual Question Answering: Benchmark and Baseline (☆40 · Feb 26, 2024 · Updated last year)
- ☆38 · Feb 4, 2023 · Updated 3 years ago
- (ICCV 2023) ESTextSpotter: Towards Better Scene Text Spotting with Explicit Synergy in Transformer (☆78 · Apr 9, 2024 · Updated last year)
- [MM'2024] Official release of RFUND, introduced in the MM'2024 paper "PEneo: Unifying Line Extraction, Line Grouping, and Entity Linking f… (☆20 · Dec 4, 2024 · Updated last year)
- Grid features pre-training code for visual question answering (☆269 · Sep 17, 2021 · Updated 4 years ago)
- Document Visual Question Answering (☆131 · Jul 30, 2020 · Updated 5 years ago)
- ☆38 · Jan 20, 2023 · Updated 3 years ago
- A modular framework for vision & language multimodal research from Facebook AI Research (FAIR) (☆5,616 · Jan 12, 2026 · Updated last month)
- ☆16 · Jan 30, 2022 · Updated 4 years ago
- Omnidirectional Scene Text Detection with Sequential-free Box Discretization (IJCAI 2019), including competition model, online demo, etc. (☆271 · Apr 23, 2020 · Updated 5 years ago)
- Official implementation of SPTS: Single-Point Text Spotting (ACM MM 2022 Oral) (☆144 · Jul 26, 2023 · Updated 2 years ago)
- (CVPR 2022) Text Spotting Transformers (☆190 · Jan 30, 2023 · Updated 3 years ago)
- ☆14 · May 26, 2023 · Updated 2 years ago
- Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering" (☆187 · Apr 15, 2021 · Updated 4 years ago)
- Turning a CLIP Model into a Scene Text Detector (CVPR 2023) | Turning a CLIP Model into a Scene Text Spotter (TPAMI) (☆200 · Jun 17, 2024 · Updated last year)
- Data Release for the VALUE Benchmark (☆30 · Feb 16, 2022 · Updated 4 years ago)
- Code for CVPR'18 "Grounding Referring Expressions in Images by Variational Context" (☆30 · Jul 4, 2018 · Updated 7 years ago)
- A PyTorch reimplementation of "Bilinear Attention Network", "Intra- and Inter-modality Attention", "Learning Conditioned Graph Structures… (☆297 · Jan 6, 2026 · Updated last month)