huggingface / fineVideo
☆90 · Updated last year
Alternatives and similar repositories for fineVideo
Users interested in fineVideo are comparing it to the libraries listed below.
- ☆78 · Updated 5 months ago
- Implementation of the proposed MaskBit from Bytedance AI ☆82 · Updated 11 months ago
- Video-LlaVA fine-tune for CinePile evaluation ☆51 · Updated last year
- ☆79 · Updated last year
- 🦾 EvalGIM (pronounced as "EvalGym") is an evaluation library for generative image models. It enables easy-to-use, reproducible automatic… ☆86 · Updated 10 months ago
- Official PyTorch implementation of TokenSet. ☆126 · Updated 7 months ago
- Implementation of TiTok, proposed by Bytedance in "An Image is Worth 32 Tokens for Reconstruction and Generation" ☆181 · Updated last year
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 9 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr…" ☆77 · Updated 10 months ago
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆98 · Updated last week
- Official PyTorch implementation for the paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆54 · Updated 9 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆37 · Updated last year
- Official code for the CVPR 2024 paper: Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language ☆84 · Updated last year
- ☆78 · Updated 7 months ago
- This is the repo for the paper "PANGEA: A Fully Open Multilingual Multimodal LLM for 39 Languages" ☆113 · Updated 4 months ago
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. TMLR 2025. ☆118 · Updated last month
- Implementation of a multimodal diffusion transformer in PyTorch ☆106 · Updated last year
- An open source implementation of CLIP (with TULIP support) ☆163 · Updated 5 months ago
- Recaption large (Web)Datasets with vLLM and save the artifacts. ☆52 · Updated 11 months ago
- M4 experiment logbook ☆57 · Updated 2 years ago
- Quick Long Video Understanding ☆67 · Updated 4 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆90 · Updated 3 months ago
- Python library to evaluate VLM models' robustness across diverse benchmarks ☆217 · Updated last week
- LL3M: Large Language and Multi-Modal Model in JAX ☆74 · Updated last year
- Data release for the ImageInWords (IIW) paper. ☆220 · Updated 11 months ago
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation, arXiv 2024 ☆64 · Updated 2 weeks ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆48 · Updated 7 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆82 · Updated 2 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆53 · Updated 7 months ago
- Official implementation of the paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters" ☆67 · Updated 6 months ago