huizhang0110 / catvision
A large multimodal model that performs close to the closed-source Qwen-VL-PLUS on many datasets and significantly surpasses the open-source Qwen-VL-7B-Chat.
☆14 Updated last year
Alternatives and similar repositories for catvision
Users interested in catvision are comparing it to the repositories listed below
- Chinese CLIP models with SOTA performance. ☆55 Updated last year
- Our 2nd-gen LMM ☆33 Updated last year
- Paddle code conversion toolkit ☆22 Updated 2 years ago
- Exploration of Adept's multimodal fuyu-8b model. 🤓 🔍 ☆28 Updated last year
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model ☆22 Updated last year
- ☆14 Updated 2 years ago
- ☆57 Updated last year
- ☆22 Updated 3 years ago
- PyTorch implementation of the BMVC 2022 paper Masked Vision-Language Transformers for Scene Text Recognition ☆29 Updated 2 years ago
- ☆69 Updated 2 years ago
- Multimodal chatbot with integrated computer vision capabilities, our 1st-gen LMM ☆101 Updated last year
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆25 Updated 2 weeks ago
- Vision-Language Pre-Training for Boosting Scene Text Detectors (CVPR 2022) ☆12 Updated 3 years ago
- ☆28 Updated 3 years ago
- Various test models in WNNX format; they can be viewed with `pip install wnetron && wnetron` ☆12 Updated 3 years ago
- A Dead Simple and Modularized Multi-Modal Training and Finetune Framework. Compatible with any LLaVA/Flamingo/QwenVL/MiniGemini etc. series … ☆19 Updated last year
- VimTS: A Unified Video and Image Text Spotter ☆77 Updated 8 months ago
- A tiny, didactic implementation of LLAMA 3 ☆41 Updated 7 months ago
- ☆15 Updated 6 months ago
- WikiTableSet: the largest publicly available image-based table recognition dataset in three languages, built from Wikipedia ☆30 Updated last month
- TIoU metric in Python 3. Forked from https://github.com/Yuliang-Liu/TIoU-metric. ☆26 Updated 5 years ago
- Facebook Image Similarity Challenge 2021 ☆19 Updated 3 years ago
- ☆41 Updated 5 years ago
- LLaVA combined with the Magvit image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation. ☆37 Updated last year
- Large Multimodal Model ☆15 Updated last year
- ☆29 Updated 11 months ago
- Official code for the paper "Perception and Semantic Aware Regularization for Sequential Confidence Calibration (CVPR 2023)" ☆10 Updated last year
- BTS: A Bi-lingual Benchmark for Text Segmentation in the Wild ☆31 Updated last year
- Official implementation of Generative Colorization of Structured Mobile Web Pages, WACV 2023 ☆22 Updated last year
- Code for the AAAI 2025 paper "VIoTGPT: Learning to Schedule Vision Tools in LLMs towards Intelligent Video Internet of Things" ☆14 Updated 6 months ago