showlab / assistgpt
☆66 · Updated 2 years ago
Alternatives and similar repositories for assistgpt
Users that are interested in assistgpt are comparing it to the libraries listed below
- Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters · ☆90 · Updated 2 years ago
- Multimodal-Procedural-Planning · ☆92 · Updated 2 years ago
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics,… · ☆123 · Updated 4 months ago
- ControlLLM: Augment Language Models with Tools by Searching on Graphs · ☆193 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models · ☆83 · Updated last year
- A curated list of papers, repositories, tutorials, and anything related to large language models for tools · ☆68 · Updated 2 years ago
- Official Code of IdealGPT · ☆35 · Updated last year
- Official repo for StableLLAVA · ☆94 · Updated last year
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks · ☆228 · Updated last year
- Codes for VPGTrans: Transfer Visual Prompt Generator across LLMs. VL-LLaMA, VL-Vicuna. · ☆270 · Updated last year
- ☆74 · Updated last year
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain · ☆103 · Updated last year
- VideoLLM: Modeling Video Sequence with Large Language Models · ☆158 · Updated 2 years ago
- GPT-4V in Wonderland: LMMs as Smartphone Agents · ☆135 · Updated last year
- Reproduction of LLaVA-v1.5 based on Llama-3-8b LLM backbone · ☆65 · Updated 11 months ago
- [COLM-2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs · ☆144 · Updated last year
- HPT - Open Multimodal LLMs from HyperGAI · ☆315 · Updated last year
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" · ☆52 · Updated 9 months ago
- ☆50 · Updated last year
- Langchain implementation of HuggingGPT · ☆133 · Updated 2 years ago
- A curated list of resources about long context in large language models and video understanding · ☆31 · Updated 2 years ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU · ☆355 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) · ☆63 · Updated last year
- Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction… · ☆78 · Updated 2 years ago
- ☆84 · Updated 2 years ago
- Official Code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation · ☆142 · Updated 11 months ago
- Research Trends in LLM-guided Multimodal Learning · ☆355 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" · ☆103 · Updated last year
- ☆65 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) · ☆45 · Updated last year