HyperGAI / HPT
HPT - Open Multimodal LLMs from HyperGAI
☆314, updated 10 months ago
Alternatives and similar repositories for HPT:
Users interested in HPT are comparing it to the repositories listed below.
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer (☆371, updated 2 weeks ago)
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills (☆737, updated last year)
- Long Context Transfer from Language to Vision (☆371, updated 3 weeks ago)
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E…" (☆412, updated last month)
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks (☆227, updated last year)
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant (☆235, updated 8 months ago)
- E5-V: Universal Embeddings with Multimodal Large Language Models (☆240, updated 3 months ago)
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions (☆337, updated 3 months ago)
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) (☆294, updated 2 months ago)
- A family of highly capable yet efficient large multimodal models (☆178, updated 7 months ago)
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness (☆348, updated last month)
- Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER" (☆145, updated last week)
- Rethinking Step-by-step Visual Reasoning in LLMs (☆287, updated 2 months ago)
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" (☆266, updated 10 months ago)
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text (☆338, updated 3 weeks ago)
- Explore the Multimodal “Aha Moment” on a 2B Model (☆561, updated 3 weeks ago)
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models (☆276, updated last year)
- [ACL 2024] Progressive LLaMA with Block Expansion (☆499, updated 10 months ago)
- ControlLLM: Augment Language Models with Tools by Searching on Graphs (☆192, updated 9 months ago)
- Aligning LMMs with Factually Augmented RLHF (☆362, updated last year)
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR2024] (☆214, updated 3 weeks ago)
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" (☆520, updated last year)
- LLaVA-Interactive-Demo (☆368, updated 8 months ago)
- [AAAI-25] Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference (☆272, updated 3 months ago)
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model (☆260, updated 9 months ago)
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models" (☆449, updated last year)
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… (☆284, updated last month)