kdr / videoRAG-mrr2024
Supporting code for: Video Enriched Retrieval Augmented Generation Using Aligned Video Captions
☆24 · Updated 8 months ago
Alternatives and similar repositories for videoRAG-mrr2024:
Users interested in videoRAG-mrr2024 are comparing it to the libraries listed below.
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆23 · Updated 2 weeks ago
- ☆19 · Updated last year
- [WACV 2025] Official implementation of "Online-LoRA: Task-free Online Continual Learning via Low Rank Adaptation" by Xiwen Wei, Guihong L… ☆35 · Updated 4 months ago
- Multimodal language modeling with image, audio, and text integration, supporting multiple images and audio clips in a single multi-turn conversation ☆17 · Updated last year
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Updated 4 months ago
- Visual RAG in under 300 lines of code ☆27 · Updated last year
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆58 · Updated last month
- Video-LLaVA fine-tune for CinePile evaluation ☆51 · Updated 7 months ago
- Pixel Parsing: a reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 8 months ago
- ☆56 · Updated 4 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆50 · Updated this week
- ☆42 · Updated last year
- "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs" (2023) ☆14 · Updated 4 months ago
- ☆17 · Updated last week
- ☆11 · Updated 2 years ago
- Code to download the curated Hugging Face papers into a single Markdown-formatted file ☆14 · Updated 8 months ago
- Chat with Qwen2-VL, the multimodal large language model series developed by the Qwen team at Alibaba Cloud ☆10 · Updated 6 months ago
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated 7 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆50 · Updated 3 months ago
- (WACV 2025, Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated last month
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models" ☆36 · Updated last year
- An LLM reads a paper and produces a working prototype ☆51 · Updated 2 weeks ago
- A list of language models with permissive licenses such as MIT or Apache 2.0 ☆24 · Updated last month
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆24 · Updated 2 weeks ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability ☆90 · Updated 3 months ago
- PyTorch implementation of "HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models" ☆28 · Updated last year
- [Under Review] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with enla… ☆56 · Updated 5 months ago
- Official PyTorch implementation of Self-Emerging Token Labeling ☆32 · Updated last year
- BUD-E (Buddy) is an open-source voice assistant framework that facilitates seamless interaction with AI models and APIs, enabling the cre… ☆18 · Updated 5 months ago
- Tools for merging pretrained large language models ☆19 · Updated 9 months ago