taogoddd / GPT-4V-API
Self-hosted GPT-4V API
☆29 · Updated last year
Alternatives and similar repositories for GPT-4V-API:
Users interested in GPT-4V-API are comparing it to the libraries listed below.
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆48 · Updated 3 months ago
- Repo for the paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆45 · Updated 3 months ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆41 · Updated this week
- Official code of IdealGPT ☆34 · Updated last year
- ☆15 · Updated 9 months ago
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆26 · Updated 6 months ago
- Paper collection of methods that use language to interact with environments, including the real world, simulated worlds, or the WWW… ☆124 · Updated last year
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- Official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" ☆91 · Updated last month
- ☆58 · Updated 4 months ago
- Multimodal-Procedural-Planning ☆91 · Updated last year
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… ☆117 · Updated last month
- [EMNLP 2023, Findings] GRACE: Discriminator-Guided Chain-of-Thought Reasoning ☆46 · Updated 3 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆62 · Updated 8 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 7 months ago
- Preference Learning for LLaVA ☆35 · Updated 2 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- [ACL 2024] The project of Symbol-LLM ☆47 · Updated 6 months ago
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆58 · Updated 6 months ago
- Sotopia-π: Interactive Learning of Socially Intelligent Language Agents (ACL 2024) ☆57 · Updated 8 months ago
- Official GitHub repo of G-LLaVA ☆122 · Updated 8 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆76 · Updated 7 months ago
- m&ms: A Benchmark to Evaluate Tool-Use for Multi-step Multi-modal Tasks ☆35 · Updated 4 months ago
- VaLM: Visually-augmented Language Modeling (ICLR 2023) ☆56 · Updated last year
- A curated list of resources about long context in large language models and video understanding ☆29 · Updated last year
- Vision Large Language Models trained on the M3IT instruction-tuning dataset ☆17 · Updated last year
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆101 · Updated 10 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆128 · Updated 2 months ago
- A curated list of papers, repositories, tutorials, and anything else related to large language models for tool use ☆67 · Updated last year