google-research / pix2struct
☆657 · Updated 3 months ago
Alternatives and similar repositories for pix2struct
Users interested in pix2struct are comparing it to the repositories listed below.
- ☆249 · Updated 2 years ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- Official repo for MM-REACT ☆955 · Updated last year
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆573 · Updated last year
- On the Hidden Mystery of OCR in Large Multimodal Models (OCRBench) ☆707 · Updated 2 months ago
- Implementation of DocFormer: End-to-End Transformer for Document Understanding, a multi-modal transformer-based architecture for the task… ☆284 · Updated 2 years ago
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆939 · Updated 6 months ago
- Data and code for the NeurIPS 2022 paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". ☆690 · Updated last year
- ☆218 · Updated 5 months ago
- ☆123 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆482 · Updated last year
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. ☆354 · Updated last month
- [Open-Source Project] Combining MMOCR with Segment Anything & Stable Diffusion. Automatically detect, recognize and segment text instance… ☆573 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆743 · Updated 4 months ago
- ☆712 · Updated last year
- Official PyTorch implementation of LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understan… ☆355 · Updated 2 years ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆628 · Updated last year
- OpenAI CLIP text encoders for multiple languages! ☆811 · Updated 2 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,261 · Updated 2 years ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆758 · Updated last year
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi-modal … ☆363 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ☆723 · Updated 7 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆524 · Updated last year
- ☆67 · Updated last year
- My implementation of Kosmos2.5 from the paper: "KOSMOS-2.5: A Multimodal Literate Model" ☆73 · Updated last week
- The HierText dataset contains ~12k images from the Open Images dataset v6 with a large number of text entities. We provide word, line and p… ☆294 · Updated 9 months ago
- The ScreenQA dataset was introduced in the paper "ScreenQA: Large-Scale Question-Answer Pairs over Mobile App Screenshots". It contains ~86K … ☆128 · Updated 7 months ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆463 · Updated last year
- The Screen Annotation dataset consists of pairs of mobile screenshots and their annotations. The annotations are in text format, and desc… ☆76 · Updated last year
- [Image 2 Text Para] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆820 · Updated 2 years ago