kdr / videoRAG-mrr2024
Supporting code for: Video Enriched Retrieval Augmented Generation Using Aligned Video Captions
☆26 · Updated 11 months ago
Alternatives and similar repositories for videoRAG-mrr2024
Users interested in videoRAG-mrr2024 are comparing it to the repositories listed below.
- ☆20 · Updated last year
- PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated this week
- Visual RAG using less than 300 lines of code. ☆28 · Updated last year
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated 10 months ago
- Clipora is a powerful toolkit for fine-tuning OpenCLIP models using Low Rank Adapters (LoRA). ☆22 · Updated 10 months ago
- ☆43 · Updated last month
- "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs" 2023 ☆14 · Updated 7 months ago
- Video-LlaVA fine-tune for CinePile evaluation ☆51 · Updated 10 months ago
- ☆26 · Updated last year
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 4 months ago
- [WACV 2025] Official implementation of "Online-LoRA: Task-free Online Continual Learning via Low Rank Adaptation" by Xiwen Wei, Guihong L… ☆41 · Updated 7 months ago
- EfficientSAM + YOLO World base model for use with Autodistill. ☆10 · Updated last year
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆26 · Updated last year
- Pytorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 · Updated last year
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆24 · Updated 2 months ago
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 10 months ago
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. ☆93 · Updated 6 months ago
- Official code repository for paper: "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts" ☆31 · Updated 8 months ago
- How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges ☆30 · Updated last year
- Chat with Phi 3.5/3 Vision LLMs. Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which includ… ☆33 · Updated 5 months ago
- ☆56 · Updated 7 months ago
- ☆48 · Updated last year
- ☆62 · Updated 11 months ago
- [CVPR 2025 🔥] ALM-Bench is a multilingual multi-modal diverse cultural benchmark for 100 languages across 19 categories. It assesses the… ☆40 · Updated last month
- A minimal yet unstoppable blueprint for multi-agent AI—anchored by the rare, far-reaching “Multi-Agent AI DAO” (2017 Prior Art)—empowerin… ☆27 · Updated 5 months ago
- Official Pytorch Implementation of Self-emerging Token Labeling ☆34 · Updated last year
- ☆15 · Updated 3 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 4 months ago
- An open source implementation of CLIP (With TULIP Support) ☆157 · Updated last month
- ☆24 · Updated last year