An Enhanced CLIP Framework for Learning with Synthetic Captions
☆40, updated Apr 18, 2025
Alternatives and similar repositories for CLIPS
Users interested in CLIPS are comparing it to the repositories listed below.
- [CVPR 2024] Official implementation of the paper "Revisiting Adversarial Training at Scale" (☆20, updated Apr 21, 2024)
- [ICML 2025] Official repository of the paper "What If We Recaption Billions of Web Images with LLaMA-3?" (☆149, updated Jun 13, 2024)
- [TMLR 2025] "SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models" (☆150, updated Oct 10, 2025)
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" (☆33, updated Oct 12, 2024)
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" (☆87, updated Nov 28, 2023)
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" (☆46, updated Dec 1, 2024)
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! (☆11, updated May 24, 2023)
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics" (☆20, updated Sep 15, 2023)
- ☆59, updated Jun 18, 2024
- Official PyTorch implementation of CLIPPR (☆30, updated Jul 22, 2023)
- Code release for "Understanding Bias in Large-Scale Visual Datasets" (☆23, updated Dec 4, 2024)
- Official project of L2B, from the paper "Learning to Bootstrap for Combating Label Noise" (☆33, updated Mar 16, 2025)
- [CVPR 2024] Improving language-visual pre-training efficiency by performing cluster-based masking on images (☆31, updated May 16, 2024)
- Using LLMs and pre-trained caption models for super-human performance on image captioning (☆42, updated Oct 13, 2023)
- [NeurIPS 2024] "What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights" (☆27, updated Oct 28, 2024)
- ☆25, updated Jun 17, 2025
- Official repo of "Knowledge or Reasoning? A Close Look at How LLMs Think Across Domains" (☆44, updated Jun 6, 2025)
- [NLPCC'23] PyTorch implementation of "ZeroGen: Zero-shot Multimodal Controllable Text Generation with Multiple Oracles" (☆14, updated Oct 7, 2023)
- [TMLR'24] Official implementation of the paper "Unleashing the Power of Visual Prompting At the Pixel Level" (☆42, updated Apr 30, 2024)
- [ICLR 2021] Official implementation of "NeMo: Neural Mesh Models of Contrastive Features for Robust 3D Pose Estimation". https://arxiv… (☆89, updated May 6, 2022)
- Counterfactual Reasoning VQA Dataset (☆28, updated Nov 23, 2023)
- [TACL] "Do Vision and Language Models Share Concepts? A Vector Space Alignment Study" (☆16, updated Nov 22, 2024)
- [ICLR 2025] "HQ-Edit: A High-Quality and High-Coverage Dataset for General Image Editing" (☆113, updated Apr 18, 2024)
- [ICLR 2025] "AlignCLIP: Improving Cross-Modal Alignment in CLIP" (☆60, updated Mar 1, 2025)
- "Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment" (☆64, updated Jul 22, 2025)
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" (☆19, updated Nov 30, 2022)
- [ECCV 2024] Official PyTorch implementation of "DreamLIP: Language-Image Pre-training with Long Captions" (☆138, updated May 8, 2025)
- [ICCV 2023] "Going Beyond Nouns With Vision & Language Models Using Synthetic Data" (☆13, updated Sep 30, 2023)
- [ICLR 2025] Official implementation of "Autoregressive Pretraining with Mamba in Vision" (☆90, updated May 30, 2025)
- Official implementation of the paper "Scaling White-Box Transformers for Vision" (☆47, updated Jun 3, 2024)
- [ICML 2024] "Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models" (☆157, updated Feb 19, 2026)
- ☆15, updated Nov 30, 2023
- [ICLR 2025] "Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion" (☆62, updated Nov 30, 2025)
- [AAAI 2025] Official PyTorch implementation of "ParGo: Bridging Vision-Language with Partial and Global Views" (☆16, updated Jan 7, 2025)
- ☆17, updated Jul 23, 2025
- [AAAI 2024] Source code of the paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" (☆55, updated Mar 28, 2024)
- ☆18, updated Sep 23, 2024
- [NeurIPS 2023] "Bootstrapping Vision-Language Learning with Decoupled Language Pre-training" (☆27, updated Dec 5, 2023)
- "CLAIR: A (surprisingly) simple semantic text metric with large language models" (☆22, updated Jan 28, 2024)