EvolvingLMMs-Lab / Otter
🦦 Otter, a multi-modal model based on OpenFlamingo (an open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning abilities.
★3,263 · Updated last year
Alternatives and similar repositories for Otter
Users interested in Otter are comparing it to the libraries listed below.
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration · ★1,578 · Updated 7 months ago
- [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning · ★5,182 · Updated 2 months ago
- An Open-source Toolkit for LLM Development · ★2,786 · Updated 6 months ago
- Tool Learning for Big Models, Open-Source Solutions of ChatGPT-Plugins · ★2,793 · Updated last year
- Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models (CVPR 2024 Highlight) · ★1,901 · Updated 2 weeks ago
- One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks · ★2,804 · Updated this week
- Large-scale, Informative, and Diverse Multi-round Chat Data (and Models) · ★2,648 · Updated last year
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" · ★2,330 · Updated 5 months ago
- Code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts" · ★745 · Updated 2 months ago
- [CVPR'23] Universal Instance Perception as Object Discovery and Retrieval · ★1,275 · Updated 2 years ago
- [ICLR'24 spotlight] Chinese and English Multimodal Large Model Series (Chat and Paint) | a bilingual Chinese-English multimodal large model series based on the CPM foundation model · ★1,064 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) · ★2,674 · Updated 11 months ago
- Multimodal-GPT · ★1,506 · Updated 2 years ago
- [IJCV] Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation · ★1,131 · Updated 8 months ago
- An open-source framework for training large multimodal models · ★3,988 · Updated 11 months ago
- Emu Series: Generative Multimodal Models from BAAI · ★1,737 · Updated 10 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters · ★5,889 · Updated last year
- [CVPR2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS · ★3,280 · Updated 6 months ago
- OMG-LLaVA and OMG-Seg codebase [CVPR-24 and NeurIPS-24] · ★1,314 · Updated 2 months ago
- A bilingual (Chinese-English) foundation model with tens of billions of parameters · ★2,434 · Updated 2 years ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family · ★2,502 · Updated 3 months ago
- Your Automatic Prompt Engineering Assistant for GenAI Applications · ★2,111 · Updated last year
- Collection of AWESOME vision-language models for vision tasks · ★2,857 · Updated 2 months ago
- [CVPR2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale · ★1,141 · Updated 9 months ago
- Align Anything: Training All-modality Model with Feedback · ★4,402 · Updated 2 months ago
- Code and models for the ICML 2024 paper "NExT-GPT: Any-to-Any Multimodal Large Language Model" · ★3,538 · Updated 2 months ago
- A family of lightweight multimodal models · ★1,023 · Updated 8 months ago
- Official repo for MM-REACT · ★953 · Updated last year
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All · ★809 · Updated 2 years ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding · ★3,045 · Updated last year