rongyaofang / PUMA
Empowering Unified MLLM with Multi-granular Visual Generation
☆119 Updated 2 months ago
Alternatives and similar repositories for PUMA:
Users interested in PUMA are comparing it to the libraries listed below.
- WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation ☆53 Updated last week
- Official repo for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" [EMNLP 2024] ☆82 Updated last month
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing" ☆167 Updated this week
- The official implementation for "MonoFormer: One Transformer for Both Diffusion and Autoregression" ☆86 Updated 5 months ago
- Official implementation of MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis ☆83 Updated 8 months ago
- T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation ☆72 Updated this week
- Video Generation, Physical Commonsense, Semantic Adherence, VideoCon-Physics ☆88 Updated last week
- [CVPR 2025] 🔥 Official impl. of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation". ☆292 Updated 3 weeks ago
- Official Implementation of VideoDPO ☆68 Updated 2 months ago
- A collection of vision foundation models unifying understanding and generation. ☆47 Updated 2 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆86 Updated 2 months ago
- Official implementation of LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment. ☆70 Updated 3 weeks ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆61 Updated 3 weeks ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆73 Updated 3 weeks ago
- The code and data of Paper: Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation ☆95 Updated 5 months ago
- ☆139 Updated 2 months ago
- Official Implementation of ICLR'24: Kosmos-G: Generating Images in Context with Multimodal Large Language Models ☆68 Updated 10 months ago
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆51 Updated 5 months ago
- FQGAN: Factorized Visual Tokenization and Generation ☆45 Updated 2 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆68 Updated 6 months ago
- This is a repo to track the latest autoregressive visual generation papers. ☆178 Updated this week
- [CVPR 2025] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project/ ☆127 Updated last week
- [ICLR 2025] ☆140 Updated 2 months ago
- Official implementation of Unified Reward Model for Multimodal Understanding and Generation. ☆214 Updated last week
- [NeurIPS 2024] The official implementation of the research paper "FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Atten… ☆40 Updated last month
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆53 Updated last month
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆110 Updated 3 months ago
- ☆53 Updated this week
- The collection of awesome papers on alignment of diffusion models. ☆149 Updated 3 weeks ago
- RichHF-18K dataset contains rich human feedback labels we collected for our CVPR'24 paper: https://arxiv.org/pdf/2312.10240, along with t… ☆123 Updated 9 months ago