llava-rlhf / LLaVA-RLHF
Aligning LMMs with Factually Augmented RLHF
☆385 · Updated 2 years ago
Alternatives and similar repositories for LLaVA-RLHF
Users that are interested in LLaVA-RLHF are comparing it to the libraries listed below
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆356 · Updated 10 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆298 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆317 · Updated 10 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆357 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆291 · Updated last year
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆313 · Updated last month
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆269 · Updated 6 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆424 · Updated 6 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆318 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆277 · Updated last year
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆523 · Updated 6 months ago
- ☆355 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆231 · Updated 3 months ago
- An RLHF Infrastructure for Vision-Language Models ☆187 · Updated last year
- The official repository of "Video assistant towards large language model makes everything easy" ☆232 · Updated 11 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆356 · Updated last week
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆384 · Updated last year
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆397 · Updated this week
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆231 · Updated 8 months ago
- Official implementation of SEED-LLaMA (ICLR 2024). ☆635 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆406 · Updated 6 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆460 · Updated last year
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆470 · Updated last year
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆524 · Updated last year
- [ECCV 2024 Oral] Code for the paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆519 · Updated 10 months ago
- ✨✨ Woodpecker: Hallucination Correction for Multimodal Large Language Models ☆641 · Updated 11 months ago
- ☆215 · Updated last year
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆598 · Updated last year
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆149 · Updated last month
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆549 · Updated last year