OpenGVLab / DiffAgent
[CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model
☆17 · Updated last year
Alternatives and similar repositories for DiffAgent:
Users interested in DiffAgent are comparing it to the repositories listed below.
- Benchmarking Attention Mechanism in Vision Transformers. ☆17 · Updated 2 years ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones". ☆27 · Updated last year
- Paper List for In-context Learning 🌷 ☆20 · Updated 2 years ago
- ☆38 · Updated last year
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆35 · Updated 10 months ago
- BESA is a differentiable weight pruning technique for large language models. ☆16 · Updated last year
- ☆19 · Updated last year
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆32 · Updated last year
- VidKV: Plug-and-Play 1.x-Bit KV Cache Quantization for Video Large Language Models ☆19 · Updated last month
- [NeurIPS 2023] Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector" ☆36 · Updated last year
- GIFT: Generative Interpretable Fine-Tuning ☆20 · Updated 7 months ago
- Official PyTorch implementation for "Distilling Image Classifiers in Object Detection" (NeurIPS 2021) ☆31 · Updated 3 years ago
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆12 · Updated 4 months ago
- ☆41 · Updated 6 months ago
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 2 months ago
- i-mae PyTorch Repo ☆20 · Updated last year
- Code for the ECCV 2022 paper “Learning with Recoverable Forgetting” ☆21 · Updated 2 years ago
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆17 · Updated 3 weeks ago
- Repository for the paper "Data Efficient Masked Language Modeling for Vision and Language". ☆18 · Updated 3 years ago
- ☆17 · Updated last year
- [WIP@Oct 13] 质衡 (Q-Bench in Chinese): includes the Chinese-language low-level visual question answering and low-level visual description datasets, as well as image quality assessment with Chinese prompts. We will release Q-Bench in more languages in the futu… ☆20 · Updated last year
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆90 · Updated 2 years ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆24 · Updated last month
- OpenMMLab Detection Toolbox and Benchmark for V3Det ☆15 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆42 · Updated 10 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆15 · Updated 2 months ago
- The proposed simulated dataset consisting of 9,536 charts and associated data annotations in CSV format. ☆24 · Updated last year
- ☆52 · Updated 2 years ago