huggingface / fineVideo
☆94 · Updated last year
Alternatives and similar repositories for fineVideo
Users interested in fineVideo are comparing it to the libraries listed below.
- ☆78 · Updated 7 months ago
- Implementation of TiTok, proposed by Bytedance in "An Image is Worth 32 Tokens for Reconstruction and Generation" ☆183 · Updated last year
- ☆81 · Updated last year
- 🦾 EvalGIM (pronounced as "EvalGym") is an evaluation library for generative image models. It enables easy-to-use, reproducible automatic… ☆90 · Updated last year
- Official PyTorch implementation of TokenSet. ☆127 · Updated 9 months ago
- Video-LlaVA fine-tune for CinePile evaluation ☆51 · Updated last year
- This is the repo for the paper "PANGEA: A FULLY OPEN MULTILINGUAL MULTIMODAL LLM FOR 39 LANGUAGES" ☆117 · Updated 5 months ago
- [ICLR 2025] Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… ☆79 · Updated last year
- Implementation of MaskBit, proposed by Bytedance AI ☆83 · Updated last year
- Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. TMLR 2025. ☆129 · Updated 3 months ago
- ☆80 · Updated 9 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 11 months ago
- Official PyTorch Implementation for Paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆54 · Updated 10 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆37 · Updated last year
- LL3M: Large Language and Multi-Modal Model in Jax ☆74 · Updated last year
- An open source implementation of CLIP (With TULIP Support) ☆163 · Updated 7 months ago
- Recaption large (Web)Datasets with vllm and save the artifacts. ☆52 · Updated last year
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆128 · Updated 2 months ago
- Supercharged BLIP-2 that can handle videos ☆123 · Updated 2 years ago
- Quick Long Video Understanding ☆70 · Updated last month
- LLaVA combined with the Magvit image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation. ☆39 · Updated last year
- Data release for the ImageInWords (IIW) paper. ☆223 · Updated last year
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆220 · Updated 2 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆58 · Updated 9 months ago
- Multimodal language model benchmark, featuring challenging examples ☆181 · Updated last year
- M4 experiment logbook ☆58 · Updated 2 years ago
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆169 · Updated last year
- [NeurIPS 2024] Official PyTorch Implementation of "FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner" ☆70 · Updated 2 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 4 months ago
- Multimodal Representation Alignment for Image Generation: Text-Image Interleaved Control Is Easier Than You Think! ☆120 · Updated 9 months ago