usydnlp / vdoc
☆15 · Updated 3 years ago
Alternatives and similar repositories for vdoc
Users interested in vdoc are comparing it to the libraries listed below.
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models ☆27 · Updated 2 years ago
- PyTorch code for "Improving Commonsense in Vision-Language Models via Knowledge Graph Riddles" (DANCE) ☆23 · Updated 3 years ago
- ☆30 · Updated 2 years ago
- Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answer… ☆55 · Updated last year
- [ICPRAI 2024] DocumentCLIP: Linking Figures and Main Body Text in Reflowed Documents ☆16 · Updated last year
- MLPs for Vision and Language Modeling (Coming Soon) ☆27 · Updated 4 years ago
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) ☆33 · Updated 2 years ago
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆89 · Updated 2 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆33 · Updated 3 years ago
- ☆11 · Updated 5 years ago
- ☆26 · Updated 4 years ago
- Code for the AAAI 2023 paper "Alignment-Enriched Tuning for Patch-Level Pre-trained Document Image Models" ☆18 · Updated 3 years ago
- Repository for the paper "Data Efficient Masked Language Modeling for Vision and Language" ☆18 · Updated 4 years ago
- [EMNLP 2021] Code and data for our paper "Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers… ☆20 · Updated 3 years ago
- Data and code for the NeurIPS 2021 paper "IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning" ☆54 · Updated last year
- Source code and pre-trained/fine-tuned checkpoints for the NAACL 2021 paper LightningDOT ☆72 · Updated 3 years ago
- MaXM is a suite of test-only benchmarks for multilingual visual question answering in 7 languages: English (en), French (fr), Hindi (hi),… ☆13 · Updated last year
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022) ☆43 · Updated 3 years ago
- Textual Visual Semantic Dataset for Text Spotting (CVPRW 2020) ☆11 · Updated 3 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆44 · Updated 2 years ago
- VaLM: Visually-augmented Language Modeling (ICLR 2023) ☆56 · Updated 2 years ago
- [BMVC22] Official implementation of ViCHA: "Efficient Vision-Language Pretraining with Visual Concepts and Hierarchical Alignment" ☆55 · Updated 3 years ago
- Code for the paper "Point and Ask: Incorporating Pointing into Visual Question Answering" ☆19 · Updated 3 years ago
- [AAAI 2021] Confidence-aware Non-repetitive Multimodal Transformers for TextCaps ☆24 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps [AAAI 2021] ☆57 · Updated 3 years ago