microsoft / MM-REACT
Official repo for MM-REACT
☆935 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for MM-REACT
- Codes for "Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models". ☆1,088 · Updated 10 months ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆761 · Updated 11 months ago
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆765 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆705 · Updated 9 months ago
- Data and code for NeurIPS 2022 Paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". ☆606 · Updated 2 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆509 · Updated 9 months ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆765 · Updated 7 months ago
- Multimodal-GPT ☆1,477 · Updated last year
- Transform Video as a Document with ChatGPT, CLIP, BLIP2, GRIT, Whisper, LangChain. ☆538 · Updated last year
- This is the official repository for the LENS (Large Language Models Enhanced to See) system. ☆351 · Updated 11 months ago
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆906 · Updated 5 months ago
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆886 · Updated 3 weeks ago
- [NeurIPS'23 Spotlight] "Mind2Web: Towards a Generalist Agent for the Web" ☆716 · Updated 3 months ago
- Official Repository of ChatCaptioner ☆452 · Updated last year
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,080 · Updated 10 months ago
- Research Trends in LLM-guided Multimodal Learning. ☆355 · Updated last year
- An Open-source Toolkit for LLM Development ☆2,721 · Updated 5 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆466 · Updated 7 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆812 · Updated last year
- The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts". ☆1,298 · Updated 10 months ago
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆478 · Updated last year
- Set-of-Mark Prompting for GPT-4V and LMMs ☆1,185 · Updated 3 months ago
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆851 · Updated 8 months ago
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ☆503 · Updated last year
- OpenAGI: When LLM Meets Domain Experts ☆1,966 · Updated 2 months ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆507 · Updated 5 months ago