This is the official repository for the LENS (Large Language Models Enhanced to See) system.
☆356 · Updated Jul 22, 2025 (7 months ago)
Alternatives and similar repositories for LENS
Users interested in LENS are comparing it to the repositories listed below.
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training". ☆319 · Updated Jun 3, 2024 (last year)
- An open-source framework for training large multimodal models. ☆4,068 · Updated Aug 31, 2024 (last year)
- 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… ☆3,338 · Updated Mar 5, 2024 (last year)
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆505 · Updated Aug 9, 2024 (last year)
- 🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs". ☆486 · Updated Oct 30, 2023 (2 years ago)
- Salesforce open-source LLMs with 8k sequence length. ☆725 · Updated Jan 31, 2025 (last year)
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters. ☆5,936 · Updated Mar 14, 2024 (last year)
- MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text. ☆952 · Updated Mar 19, 2025 (11 months ago)
- (ECCVW 2025) GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest. ☆551 · Updated Jun 3, 2025 (8 months ago)
- Multimodal-GPT. ☆1,518 · Updated Jun 4, 2023 (2 years ago)
- LAVIS - A One-stop Library for Language-Vision Intelligence. ☆11,167 · Updated Nov 18, 2024 (last year)
- TART: A plug-and-play Transformer module for task-agnostic reasoning. ☆202 · Updated Jun 22, 2023 (2 years ago)
- Official repository of ChatCaptioner. ☆468 · Updated Apr 13, 2023 (2 years ago)
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models. ☆28 · Updated Mar 22, 2024 (last year)
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites". ☆289 · Updated Jan 14, 2024 (2 years ago)
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding. ☆3,124 · Updated Jun 4, 2024 (last year)
- Code/data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding". ☆269 · Updated Jun 12, 2024 (last year)
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆471 · Updated Jan 19, 2024 (2 years ago)
- Repository for the paper "Data Efficient Masked Language Modeling for Vision and Language". ☆18 · Updated Sep 17, 2021 (4 years ago)
- ☆1,709 · Updated Sep 27, 2024 (last year)
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU. ☆360 · Updated Dec 18, 2023 (2 years ago)
- Inverse DALL-E for Optical Character Recognition. ☆38 · Updated Oct 14, 2022 (3 years ago)
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics". ☆20 · Updated Sep 15, 2023 (2 years ago)
- ☆19 · Updated Dec 6, 2023 (2 years ago)
- ☆101 · Updated May 16, 2024 (last year)
- Implementation of "Prismer: A Vision-Language Model with Multi-Task Experts". ☆1,311 · Updated Jan 17, 2024 (2 years ago)
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024. ☆1,811 · Updated Nov 27, 2025 (3 months ago)
- Official code for VisProg (CVPR 2023 Best Paper!). ☆760 · Updated Aug 26, 2024 (last year)
- ☆44 · Updated Jun 2, 2024 (last year)
- Official implementation and data release of the paper "Visual Prompting via Image Inpainting". ☆318 · Updated Aug 7, 2023 (2 years ago)
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family. ☆2,539 · Updated Apr 2, 2025 (11 months ago)
- ImageBind: One Embedding Space to Bind Them All. ☆8,980 · Updated Nov 21, 2025 (3 months ago)
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter". ☆16 · Updated Jun 20, 2023 (2 years ago)
- ☆805 · Updated Jul 8, 2024 (last year)
- ☆352 · Updated May 25, 2024 (last year)
- EVA Series: Visual Representation Fantasies from BAAI. ☆2,647 · Updated Aug 1, 2024 (last year)
- Official implementation of SEED-LLaMA (ICLR 2024). ☆640 · Updated Sep 21, 2024 (last year)
- Code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,402 · Updated Aug 4, 2025 (6 months ago)
- Official JAX implementation of MAGVIT: Masked Generative Video Transformer. ☆995 · Updated Jan 17, 2024 (2 years ago)