showlab / Show-Anything-3D
Edit and Generate Anything in 3D world!
☆13 · Updated 2 years ago
Alternatives and similar repositories for Show-Anything-3D
Users interested in Show-Anything-3D are comparing it to the libraries listed below.
- A curated list of papers and resources for text-to-image evaluation. ☆29 · Updated last year
- An interactive demo based on Segment-Anything for stroke-based painting which enables human-like painting. ☆35 · Updated 2 years ago
- Complex-Edit: CoT-Like Instruction Generation for Complexity-Controllable Image Editing Benchmark ☆16 · Updated 3 weeks ago
- TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation ☆30 · Updated 5 months ago
- Accepted by AAAI2022 ☆21 · Updated 3 years ago
- Awesome-DragGAN: A curated list of papers, tutorials, repositories related to DragGAN ☆83 · Updated last year
- ☆23 · Updated 10 months ago
- ☆10 · Updated 10 months ago
- The official repository for CVPRW2024 paper "What’s in a Name? Beyond Class Indices for Image Recognition" ☆12 · Updated 8 months ago
- Toolbox for GTA-Human Datasets ☆16 · Updated 7 months ago
- A visual LLM for image region description or QA. ☆15 · Updated last year
- Code for paper <Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation> in ICCV 2021. ☆13 · Updated 3 years ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones". ☆27 · Updated last year
- DDS: Delta Denoising Score PyTorch implementation ☆19 · Updated last year
- Official repo of 3DYoga90 dataset ☆12 · Updated last year
- This repository is associated with the research paper titled ImageChain: Advancing Sequential Image-to-Text Reasoning in Multimodal Large… ☆12 · Updated 2 months ago
- ☆13 · Updated 8 months ago
- Code and Data for Paper: SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data ☆34 · Updated last year
- [ICCV 2021] Click to Move: Controlling Video Generation with Sparse Motion ☆11 · Updated 2 years ago
- ☆9 · Updated 11 months ago
- ☆14 · Updated 2 years ago
- [NCA] Official implementation of the paper Motion2Language, Unsupervised learning of synchronized semantic motion segmentation ☆11 · Updated 8 months ago
- Official repo for the TMLR paper "Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners" ☆29 · Updated last year
- ☆21 · Updated 5 months ago
- A one-stop library to standardize the inference and evaluation of all the conditional video generation models. ☆48 · Updated 3 months ago
- Democratising RGBA Image Generation With No $$$ (AI4VA@ECCV24) ☆26 · Updated 8 months ago
- My implementation of the model KosmosG from "KOSMOS-G: Generating Images in Context with Multimodal Large Language Models" ☆14 · Updated 6 months ago
- Code and data for the paper: Learning Action and Reasoning-Centric Image Editing from Videos and Simulation ☆28 · Updated 4 months ago
- Motion-conditional image animation for video editing ☆20 · Updated last year
- Code for Paper 'Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach' ☆17 · Updated 7 months ago