OpenGVLab / PhyGenBench
Code and data for the paper: Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation
☆78 · Updated 2 months ago
Alternatives and similar repositories for PhyGenBench:
Users interested in PhyGenBench are comparing it to the repositories listed below.
- Empowering Unified MLLM with Multi-granular Visual Generation ☆114 · Updated this week
- Video Generation, Physical Commonsense, Semantic Adherence, VideoCon-Physics ☆70 · Updated 3 months ago
- A collection of awesome papers on the alignment of diffusion models. ☆72 · Updated last month
- T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation ☆59 · Updated this week
- The official implementation for "MonoFormer: One Transformer for Both Diffusion and Autoregression" ☆80 · Updated 3 months ago
- Official repo for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" [EMNLP 2024] ☆67 · Updated last month
- Implementation of Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding ☆26 · Updated 2 months ago
- 🔥 Official impl. of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation". ☆222 · Updated 2 weeks ago
- [ICML 2024] On Discrete Prompt Optimization for Diffusion Models - Google ☆42 · Updated 5 months ago
- [NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller ☆31 · Updated last month
- Code accompanying the paper "Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment" ☆23 · Updated 2 months ago
- Official implementation of MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis ☆83 · Updated 6 months ago
- FQGAN: Factorized Visual Tokenization and Generation ☆39 · Updated last week
- A collection of vision foundation models unifying understanding and generation. ☆40 · Updated 2 weeks ago
- A repo tracking the latest autoregressive visual generation papers. ☆103 · Updated 2 weeks ago
- Official implementation of LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment. ☆52 · Updated last week
- Source code for "A Dense Reward View on Aligning Text-to-Image Diffusion with Preference" (ICML'24). ☆35 · Updated 8 months ago
- Liquid: Language Models are Scalable Multi-modal Generators ☆60 · Updated last month
- SpeeD: A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training ☆162 · Updated 2 months ago
- Implements VAR+CLIP for text-to-image (T2I) generation ☆112 · Updated 2 weeks ago
- [NeurIPS 2024] The official implementation of the research paper "FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Atten… ☆34 · Updated last month
- CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆75 · Updated last month
- VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation ☆95 · Updated last week
- [CVPR 2024] On the Content Bias in Fréchet Video Distance ☆101 · Updated 3 months ago
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆45 · Updated 3 months ago
- Official implementation for BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way ☆20 · Updated 3 months ago
- Open implementation of "RandAR" ☆48 · Updated this week
- Code for ROICtrl: Boosting Instance Control for Visual Generation ☆99 · Updated last month