apple / ml-vfm-kt
☆14 · Updated last year
Alternatives and similar repositories for ml-vfm-kt
Users interested in ml-vfm-kt are comparing it to the libraries listed below.
- ☆59 · Updated last year
- Repository for the paper: "TiC-CLIP: Continual Training of CLIP Models". ☆102 · Updated last year
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆246 · Updated 6 months ago
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated last year
- ☆13 · Updated last year
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆36 · Updated last year
- Fine-tuning OpenAI CLIP Model for Image Search on medical images ☆76 · Updated 3 years ago
- A light-weight implementation of ICCV2023 paper "Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Rei… ☆79 · Updated last year
- [CVPR'24 Highlight] PyTorch Implementation of Object Recognition as Next Token Prediction ☆180 · Updated 3 months ago
- ☆86 · Updated last year
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 10 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 5 months ago
- Visualize multi-model embedding spaces. The first goal is to quickly get a lay of the land of any embedding space. Then be able to scroll… ☆27 · Updated last year
- Timm model explorer ☆41 · Updated last year
- ☆27 · Updated last month
- ☆69 · Updated last year
- ☆65 · Updated last year
- EdgeSAM model for use with Autodistill. ☆27 · Updated last year
- Command-line tool for extracting DINO, CLIP, and SigLIP2 features for images and videos (a minimal feature-extraction sketch follows this list) ☆28 · Updated last month
- ☆86 · Updated last year
- ☆76 · Updated last month
- Video-LlaVA fine-tune for CinePile evaluation ☆51 · Updated last year
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆158 · Updated last year
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆87 · Updated last year
- Video descriptions of research papers relating to foundation models and scaling ☆31 · Updated 2 years ago
- Projects based on SigLIP (Zhai et al., 2023) and Hugging Face transformers integration 🤗 ☆264 · Updated 5 months ago
- Official code repository for ICML 2025 paper: "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Doma… ☆38 · Updated 3 weeks ago
- [NeurIPS 2023] HASSOD: Hierarchical Adaptive Self-Supervised Object Detection ☆57 · Updated last year
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 5 months ago
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" ☆60 · Updated 8 months ago
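
Several of the repositories above center on extracting, fine-tuning, or comparing CLIP-style image features (e.g. the DINO/CLIP/SigLIP2 feature-extraction CLI and the medical image-search project). For orientation only, here is a minimal sketch of CLIP image-feature extraction using the Hugging Face transformers API; the checkpoint name and image path are placeholders, and this is not code taken from any repository listed above.

```python
# Minimal sketch: extract L2-normalized CLIP image features with Hugging Face transformers.
# The checkpoint and image path below are assumptions, not tied to any repo in this list.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_name = "openai/clip-vit-base-patch32"  # any CLIP checkpoint on the Hub should work
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(model_name)

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    features = model.get_image_features(**inputs)               # (1, 512) for this checkpoint
    features = features / features.norm(dim=-1, keepdim=True)   # normalize for cosine search

print(features.shape)
```

Features produced this way can be indexed for nearest-neighbor lookup, which is the typical downstream use in the image-search and embedding-visualization projects above.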