WindVChen / Sitcom-Crafter
A system for generating diverse, physically compliant 3D human motions across multiple motion types, guided by plot contexts to streamline creative workflows in anime and game design.
☆76 · Updated 11 months ago
Alternatives and similar repositories for Sitcom-Crafter
Users interested in Sitcom-Crafter are comparing it to the repositories listed below.
- Official repository for "BAMM: Bidirectional Autoregressive Motion Model (ECCV 2024)" ☆56 · Updated 3 months ago
- ☆92 · Updated 9 months ago
- ☆102 · Updated 4 months ago
- ☆87 · Updated 9 months ago
- ☆116 · Updated last year
- ☆65 · Updated 6 months ago
- [NeurIPS 2023] FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing ☆132 · Updated 2 years ago
- Official repository for "MMM: Generative Masked Motion Model" (CVPR 2024, Highlight) ☆128 · Updated 6 months ago
- ☆72 · Updated 7 months ago
- Official repository for "MaskControl: Spatio-Temporal Control for Masked Motion Synthesis" (ICCV 2025, Oral & Award Candidate) ☆162 · Updated 2 months ago
- [CVPR 2025] HumanMM: Global Human Motion Recovery from Multi-shot Videos ☆116 · Updated 9 months ago
- Code for "ReMoS: 3D-Motion Conditioned Reaction Synthesis for Two-person Interactions" (ECCV 2024) ☆33 · Updated 10 months ago
- The official PyTorch implementation of "BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation" ☆51 · Updated last year
- [CVPRW 2024] Official implementation of "in2IN: Leveraging individual Information to Generate Human INteractions" ☆57 · Updated last year
- Official code for "Digital Life Project: Autonomous 3D Characters with Social Intelligence" ☆43 · Updated last year
- GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation (IEEE Transactions on Visualization and Computer Graphics, …) ☆32 · Updated last year
- Official code for the paper "Monkey See, Monkey Do: Harnessing Self-attention in Motion Diffusion for Zero-shot Motion Transfer" ☆55 · Updated 10 months ago
- HOI-Diff: Text-Driven Synthesis of 3D Human-Object Interactions using Diffusion Models, arXiv 2023 ☆156 · Updated last month
- The official implementation of the paper "MAS: Multiview Ancestral Sampling for 3D Motion Generation Using 2D Diffusion" ☆126 · Updated 2 years ago
- MotionChain: Conversational Motion Controllers via Multimodal Prompts ☆68 · Updated last year
- [ECCV 2024] Towards High-Quality 3D Motion Transfer with Realistic Apparel Animation (MMDMC Dataset) ☆74 · Updated last year
- Official implementation for the NeurIPS 2024 spotlight paper "Skinned Motion Retargeting with Dense Geometric Interaction Perception" ☆59 · Updated 11 months ago
- Code for the ICLR 2024 paper "Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment" ☆112 · Updated last year
- [CVPR 2024] Generating Human Motion in 3D Scenes from Text Descriptions ☆99 · Updated last year
- Light-T2M: A Lightweight and Fast Model for Text-to-Motion Generation (AAAI 2025) ☆37 · Updated 10 months ago
- [NeurIPS 2023] InsActor: Instruction-driven Physics-based Characters ☆135 · Updated last year
- Official code of the CVPR 2025 paper "SOLAMI: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters" ☆49 · Updated 6 months ago
- ☆49 · Updated last year
- [arXiv'24] Holistic-Motion2D: Scalable Whole-body Human Motion Generation in 2D Space ☆46 · Updated last year
- [arXiv 2024] MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms ☆14 · Updated last year