lupantech / ScienceQA
Data and code for NeurIPS 2022 Paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering".
☆630 · Updated 5 months ago
Alternatives and similar repositories for ScienceQA:
Users interested in ScienceQA are comparing it to the repositories listed below.
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆515 · Updated last year
- LOMO: LOw-Memory Optimization ☆980 · Updated 7 months ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,127 · Updated 11 months ago
- Reading list for instruction tuning. The trend started with Natural Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆760 · Updated last year
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆478 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆614 · Updated 6 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆860 · Updated 2 months ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆546 · Updated last year
- [NeurIPS 2023] RRHF & Wombat ☆799 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆344 · Updated last year
- MultimodalC4 is a multimodal extension of C4 that interleaves millions of images with text. ☆917 · Updated 8 months ago
- ☆903 · Updated 8 months ago
- ☆765 · Updated 7 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆494 · Updated 10 months ago
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆781 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆285 · Updated last month
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆481 · Updated 10 months ago
- A paper list on multimodal and large language models, used only to record papers I read from the daily arXiv for personal needs. ☆592 · Updated this week
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ☆307 · Updated last year
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆781 · Updated 10 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆498 · Updated 3 weeks ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆447 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆794 · Updated 7 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆328 · Updated last month
- Code for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,109 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,083 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆463 · Updated last month
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆262 · Updated 8 months ago
- ☆728 · Updated 8 months ago
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" (ICLR 2024) ☆360 · Updated 5 months ago