kdr / videoRAG-mrr2024
Supporting code for: Video Enriched Retrieval Augmented Generation Using Aligned Video Captions
☆27, updated last year
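As a rough illustration of the idea named in the repo description, a caption-based video RAG pipeline embeds each clip's aligned caption, retrieves the clips whose captions best match a query, and passes those clips to an LLM as context. The sketch below is hypothetical, not code from this repository: it uses a toy bag-of-words cosine similarity, and the clip IDs and captions are made up for illustration.

```python
# Hypothetical sketch of caption-based video RAG retrieval (not this repo's code).
# Each video clip is represented by its aligned caption; a query retrieves the
# clips with the most similar captions, which would then be fed to an LLM.
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words vector: a Counter of lowercase whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, caption_index, k=2):
    """Return the top-k (clip_id, score) pairs ranked by caption similarity."""
    q = bow(query)
    scored = [(cid, cosine(q, bow(cap))) for cid, cap in caption_index.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

# Toy index: made-up clip IDs mapped to their aligned captions.
captions = {
    "clip_01": "a chef chops onions on a wooden cutting board",
    "clip_02": "a dog catches a frisbee in the park",
    "clip_03": "the chef stirs the onions in a hot pan",
}
top = retrieve("chef cooking onions", captions, k=2)
print([cid for cid, _ in top])  # → ['clip_01', 'clip_03']
```

A real pipeline would swap the bag-of-words vectors for dense text embeddings and attach the retrieved clips (or their captions and timestamps) to the LLM prompt; this sketch only shows the retrieval step.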
Alternatives and similar repositories for videoRAG-mrr2024
Users interested in videoRAG-mrr2024 are comparing it to the libraries listed below.
- Visual RAG using less than 300 lines of code. (☆28, updated last year)
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. (☆94, updated 7 months ago)
- (WACV 2025 Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… (☆83, updated last week)
- Testing and evaluating the capabilities of Vision-Language models (PaliGemma) in performing computer vision tasks such as object detectio… (☆82, updated last year)
- Visualize multi-model embedding spaces. The first goal is to quickly get a lay of the land of any embedding space. Then be able to scroll… (☆27, updated last year)
- AnyModal is a flexible multimodal language model framework for PyTorch. (☆102, updated 7 months ago)
- BUD-E (Buddy) is an open-source voice assistant framework that facilitates seamless interaction with AI models and APIs, enabling the cre… (☆22, updated 10 months ago)
- Video-LLaVA fine-tune for CinePile evaluation. (☆51, updated last year)
- Graph learning framework for long-term video understanding. (☆65, updated last month)
- [WACV 2025] Official implementation of "Online-LoRA: Task-free Online Continual Learning via Low Rank Adaptation" by Xiwen Wei, Guihong L… (☆47, updated 8 months ago)
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training". (☆24, updated 3 weeks ago)
- ☆56, updated 8 months ago
- ☆67, updated last year
- ☆13, updated 3 years ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation (arXiv 2024). (☆60, updated 5 months ago)
- LoRA fine-tuned Stable Diffusion deployment. (☆31, updated 2 years ago)
- Chat with Phi-3.5/3 Vision LLMs. Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which includ… (☆34, updated 7 months ago)
- Multi-modal language modeling with image, audio, and text integration, including multi-image and multi-audio inputs in a single multiturn conversation. (☆18, updated last year)
- Parameter-efficient fine-tuning script for Phi-3-vision, the strong multimodal language model by Microsoft. (☆58, updated last year)
- ☆69, updated last year
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️. (☆87, updated last year)
- A minimal yet unstoppable blueprint for multi-agent AI, anchored by the rare, far-reaching "Multi-Agent AI DAO" (2017 prior art), empowerin… (☆29, updated 7 months ago)
- Building LLMs from scratch, following the book by S. Raschka. (☆31, updated 4 months ago)
- Repo for the paper "Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages". (☆110, updated last month)
- Official PyTorch implementation of Self-emerging Token Labeling. (☆35, updated last year)
- Verifiers for LLM reinforcement learning. (☆69, updated 3 months ago)
- A family of highly capable yet efficient large multimodal models. (☆187, updated 11 months ago)
- ☆86, updated last year
- Small multimodal vision model "Imp-v1-3b" trained using Phi-2 and SigLIP. (☆17, updated last year)
- ☆44, updated 2 months ago