taogoddd / GPT-4V-API
Self-hosted GPT-4V API
☆30 · Updated last year
Alternatives and similar repositories for GPT-4V-API
Users interested in GPT-4V-API are comparing it to the libraries listed below.
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆58 · Updated 9 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆25 · Updated last month
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated 10 months ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆29 · Updated last year
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆123 · Updated 2 months ago
- ☆50 · Updated last year
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics" ☆20 · Updated last year
- Vision Large Language Models trained on the M3IT instruction tuning dataset ☆17 · Updated last year
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- ☆16 · Updated 9 months ago
- A Comprehensive Benchmark for Robust Multi-image Understanding ☆12 · Updated 11 months ago
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 5 months ago
- Multimodal-Procedural-Planning ☆92 · Updated 2 years ago
- Public code repo for the EMNLP 2024 Findings paper "MACAROON: Training Vision-Language Models To Be Your Engaged Partners" ☆14 · Updated 10 months ago
- ☆51 · Updated 2 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" ☆26 · Updated 3 months ago
- This repo contains code and data for the ICLR 2025 paper "MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs" ☆31 · Updated 5 months ago
- Official code of IdealGPT ☆35 · Updated last year
- A curated list of papers, repositories, tutorials, and anything related to large language models for tools ☆68 · Updated last year
- ☆14 · Updated last year
- ☆66 · Updated 2 years ago
- Official repo for StableLLAVA ☆95 · Updated last year
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆52 · Updated 7 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- The released data for the paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models" ☆33 · Updated last year
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆73 · Updated last month
- ☆16 · Updated 9 months ago
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 · Updated 9 months ago