taogoddd / GPT-4V-API
Self-hosted GPT-4V API
☆27 · Updated 2 years ago
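The repository wraps GPT-4V behind a self-hosted HTTP endpoint. As a rough illustration, the sketch below shows what a client call to such a service could look like. The base URL, route, model name, and JSON payload shape are all assumptions (an OpenAI-compatible chat API served on localhost), not taken from this project's documentation; check the repo README for the actual interface.

```python
# Hypothetical client sketch for a self-hosted GPT-4V-style API.
# Assumptions: an OpenAI-compatible /v1/chat/completions route on
# localhost:8000 and the standard image_url message format. These are
# NOT confirmed details of the GPT-4V-API repository.
import base64
import requests

def ask_about_image(image_path: str, question: str) -> str:
    # Encode the local image as base64 so it can travel in a JSON body.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "model": "gpt-4-vision-preview",  # assumed model identifier
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }
    resp = requests.post("http://localhost:8000/v1/chat/completions",
                         json=payload, timeout=120)
    resp.raise_for_status()
    # Pull the assistant's reply out of the OpenAI-style response shape.
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_about_image("screenshot.png", "What does this page show?"))
```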
Alternatives and similar repositories for GPT-4V-API
Users interested in GPT-4V-API are comparing it to the libraries listed below.
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆62 · Updated last year
- ☆17 · Updated last year
- Multimodal-Procedural-Planning ☆93 · Updated 2 years ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆106 · Updated last year
- ☆50 · Updated 2 years ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆31 · Updated last year
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆124 · Updated 7 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆28 · Updated 6 months ago
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated 2 years ago
- Vision Large Language Models trained on the M3IT instruction-tuning dataset ☆17 · Updated 2 years ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated last year
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 · Updated 6 months ago
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation ☆134 · Updated 2 years ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆49 · Updated 2 years ago
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated 2 years ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated 2 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- Code for our ACL 2025 paper "Language Repository for Long Video Understanding" ☆34 · Updated last year
- VaLM: Visually-Augmented Language Modeling. ICLR 2023. ☆56 · Updated 2 years ago
- ☆11 · Updated last year
- This repo contains code and data for the ICLR 2025 paper "MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs" ☆36 · Updated 10 months ago
- Official code of IdealGPT ☆35 · Updated 2 years ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆36 · Updated last year
- Official implementation of "Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning" ☆28 · Updated last year
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆85 · Updated 11 months ago
- [TACL 2023] VSR: a probing benchmark for spatial understanding of vision-language models ☆139 · Updated 2 years ago
- A curated list of papers, repositories, tutorials, and anything related to large language models for tools ☆68 · Updated 2 years ago
- ☆83 · Updated 2 years ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆72 · Updated last year
- Multimodal language model benchmark, featuring challenging examples ☆182 · Updated last year