Optimization-AI / fast_clip
☆22 · Updated 5 months ago
Alternatives and similar repositories for fast_clip:
Users who are interested in fast_clip are comparing it to the libraries listed below.
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models. ☆27 · Updated last year
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆14 · Updated this week
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆15 · Updated 4 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆24 · Updated 4 months ago
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆12 · Updated 2 months ago
- A big_vision-inspired repo that implements a generic Auto-Encoder class capable of representation learning and generative modeling. ☆34 · Updated 8 months ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023.☆32Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆29 · Updated 5 months ago
- Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆14 · Updated 3 months ago
- This repository is the implementation of the paper "Training Free Pretrained Model Merging" (CVPR 2024). ☆27 · Updated 11 months ago
- Project for the SNARE benchmark ☆10 · Updated 8 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆38 · Updated 2 months ago
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆35 · Updated 8 months ago
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆21 · Updated 6 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆51 · Updated 2 months ago
- Preference Learning for LLaVA ☆38 · Updated 3 months ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions ☆27 · Updated 2 months ago
- Code for "Are “Hierarchical” Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆20 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated last year