lupantech / ScienceQA
Data and code for NeurIPS 2022 Paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering".
☆686 · Updated 11 months ago
Alternatives and similar repositories for ScienceQA
Users interested in ScienceQA are comparing it to the repositories listed below.
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆524 · Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆535 · Updated last year
- MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts ☆330 · Updated 9 months ago
- Research Trends in LLM-guided Multimodal Learning. ☆356 · Updated last year
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆485 · Updated 3 months ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆570 · Updated last year
- ☆761 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆649 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, PKU ☆353 · Updated last year
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆861 · Updated 3 months ago
- ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models ☆639 · Updated 8 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆307 · Updated 7 months ago
- Official repo for MM-REACT ☆955 · Updated last year
- ☆793 · Updated last year
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆936 · Updated 5 months ago
- Aligning LMMs with Factually Augmented RLHF ☆373 · Updated last year
- ☆218 · Updated 4 months ago
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆817 · Updated 2 years ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆509 · Updated 7 months ago
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- Implementation of the CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering". ☆277 · Updated 2 months ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆774 · Updated last year
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆499 · Updated last year
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ☆377 · Updated last year
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆461 · Updated last year
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆346 · Updated 7 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆760 · Updated last year
- Prod Env ☆428 · Updated last year
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆509 · Updated last year
- Codes for VPGTrans: Transfer Visual Prompt Generator across LLMs. VL-LLaMA, VL-Vicuna. ☆271 · Updated last year