ghchen18 / acl23_mclip
The official code and model for ACL 2023 paper 'mCLIP: Multilingual CLIP via Cross-lingual Transfer'
☆10Updated last year
Alternatives and similar repositories for acl23_mclip
Users interested in acl23_mclip are comparing it to the libraries listed below
- (CVPR2024) MeaCap: Memory-Augmented Zero-shot Image Captioning☆48Updated 9 months ago
- Official PyTorch implementation of Clover: Towards A Unified Video-Language Alignment and Fusion Model (CVPR2023)☆40Updated 2 years ago
- (ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning☆35Updated 10 months ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023)☆101Updated 4 months ago
- Official Pytorch implementation of LinCIR: Language-only Training of Zero-shot Composed Image Retrieval (CVPR 2024)☆134Updated 10 months ago
- [CVPR 2023] VoP: Text-Video Co-operative Prompt Tuning for Cross-Modal Retrieval☆38Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence☆48Updated 10 months ago
- ☆91Updated last year
- A Unified Framework for Video-Language Understanding☆57Updated last year
- A PyTorch implementation of EmpiricalMVM☆41Updated last year
- [ICLR2024] The official implementation of paper "UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling", by …☆74Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?".☆58Updated last year
- Turning to Video for Transcript Sorting☆48Updated last year
- Source code of our CVPR2024 paper TeachCLIP for Text-to-Video Retrieval☆35Updated last month
- [ECCVW'24] Long-form Video Understanding by Bridging Episodic Memory and Semantic Knowledge☆27Updated 8 months ago
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval☆57Updated 11 months ago
- NegCLIP.☆32Updated 2 years ago
- ☆108Updated 2 years ago
- Official Pytorch implementation of "Improved Probabilistic Image-Text Representations" (ICLR 2024)☆58Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes!☆24Updated 6 months ago
- MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models (CVPR 2023)☆33Updated last year
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23]☆101Updated last year
- ☆83Updated 3 years ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences☆39Updated 2 months ago
- ☆17Updated last month
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral)☆63Updated last year
- [ICLR2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model☆43Updated 5 months ago
- (ICML 2024) Improve Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning☆27Updated 8 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension☆51Updated last year
- [ICLR'24] Official implementation of Composed Image Retrieval with Text Feedback via Multi-grained Uncertainty Regularization☆72Updated last year