hhnqqq / py_hfd
A python script for downloading huggingface datasets and models.
☆20 · Updated 8 months ago
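The repository's own command-line interface is not documented on this page, so the sketch below only illustrates the kind of download the script automates, using the official huggingface_hub client's snapshot_download; the repo ids and local paths are hypothetical examples, not py_hfd's actual code.

```python
# Minimal sketch of downloading a Hugging Face model and dataset, assuming the
# huggingface_hub client library. Illustration only -- not py_hfd's actual code.
from huggingface_hub import snapshot_download

# Fetch a full model repository into a local directory (example repo id).
model_path = snapshot_download(
    repo_id="bert-base-uncased",
    local_dir="./models/bert-base-uncased",
)

# Fetch a dataset repository; repo_type="dataset" is required for datasets.
dataset_path = snapshot_download(
    repo_id="squad",
    repo_type="dataset",
    local_dir="./datasets/squad",
)

print("model at:", model_path)
print("dataset at:", dataset_path)
```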
Alternatives and similar repositories for py_hfd
Users interested in py_hfd are comparing it to the repositories listed below.
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆97 · Updated this week
- A tiny paper rating web ☆38 · Updated 8 months ago
- Collections of Papers and Projects for Multimodal Reasoning. ☆106 · Updated 7 months ago
- Official codebase for the paper Latent Visual Reasoning ☆54 · Updated last month
- 📖 This is a repository for organizing papers, codes, and other resources related to unified multimodal models. ☆335 · Updated last month
- This is a collection of recent papers on reasoning in video generation models. ☆76 · Updated last week
- Official implementation of MC-LLaVA. ☆139 · Updated last month
- Survey: https://arxiv.org/pdf/2507.20198 ☆235 · Updated last month
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆213 · Updated last month
- This repository is the official implementation of "Look-Back: Implicit Visual Re-focusing in MLLM Reasoning". ☆72 · Updated 5 months ago
- ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models ☆66 · Updated 6 months ago
- A framework for unified personalized model, achieving mutual enhancement between personalized understanding and generation. Demonstrating… ☆126 · Updated 2 months ago
- We introduce 'Thinking with Video', a new paradigm leveraging video generation for multimodal reasoning. Our VideoThinkBench shows that S… ☆219 · Updated this week
- 🔥CVPR 2025 Multimodal Large Language Models Paper List ☆154 · Updated 9 months ago
- [NeurIPS 2025] RTV-Bench: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video. ☆27 · Updated 2 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆99 · Updated 5 months ago
- Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (arXiv 2025) ☆205 · Updated 4 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆107 · Updated 4 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆195 · Updated 7 months ago
- A Collection of Papers on Diffusion Language Models ☆148 · Updated 2 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆133 · Updated 9 months ago
- Imagine While Reasoning in Space: Multimodal Visualization-of-Thought (ICML 2025) ☆59 · Updated 8 months ago
- ☆110 · Updated 3 months ago
- R1-like Video-LLM for Temporal Grounding ☆126 · Updated 5 months ago
- ☆152 · Updated 10 months ago
- The official repository for the paper "ThinkMorph: Emergent Properties in Multimodal Interleaved Chain-of-Thought Reasoning" ☆119 · Updated 2 weeks ago
- TStar is a unified temporal search framework for long-form video question answering ☆76 · Updated 3 months ago
- A paper list of Awesome Latent Space. ☆190 · Updated this week
- MR. Video: MapReduce is the Principle for Long Video Understanding ☆28 · Updated 7 months ago
- A collection of awesome think with videos papers. ☆72 · Updated last week