Yuan-ManX / ai-multimodal-timeline
Here we track the latest AI multimodal models, including multimodal foundation models, LLMs, agents, audio, image, video, music, and 3D content. 🔥
☆36 · Updated 9 months ago
Alternatives and similar repositories for ai-multimodal-timeline
Users interested in ai-multimodal-timeline are comparing it to the repositories listed below.
- A streamlined implementation of Grounding DINO and SAM for advanced image segmentation. This lightweight solution simplifies the integrat… (☆64 · Updated last year)
- [ACL 2025 Oral & Award] Evaluate image/video generation like humans: fast, explainable, flexible. (☆107 · Updated 3 months ago)
- Live2Diff: a pipeline that processes live video streams with a uni-directional video diffusion model. (☆198 · Updated last year)
- Implementation of the premier text-to-video model from OpenAI. (☆55 · Updated last year)
- Implementation for the paper "ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems". (☆193 · Updated 8 months ago)
- ☆208 · Updated last year
- ☆35 · Updated 2 years ago
- A one-stop library to standardize the inference and evaluation of conditional video generation models. (☆50 · Updated 9 months ago)
- Controllable animation video generation with large-model-based multimodal agents. (☆215 · Updated 2 weeks ago)
- ☆18 · Updated 7 months ago
- ☆13 · Updated last year
- Enhancement in multimodal representation learning. (☆40 · Updated last year)
- ☆17 · Updated last year
- An open-source community implementation of the model from the paper "Movie Gen: A Cast of Media Foundation Models". Join our community … (☆58 · Updated last week)
- Video-Infinity generates long videos quickly using multiple GPUs, without extra training. (☆186 · Updated last year)
- ☆55 · Updated 11 months ago
- ☆41 · Updated last year
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing. (☆69 · Updated last year)
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions. (☆130 · Updated last year)
- Community ComfyUI workflows running on fal.ai. (☆57 · Updated last year)
- Visual RAG in under 300 lines of code. (☆29 · Updated last year)
- Small multimodal vision model "Imp-v1-3b", trained using Phi-2 and SigLIP. (☆17 · Updated last year)
- ☆69 · Updated last year
- Official PyTorch implementation of TokenSet. (☆127 · Updated 7 months ago)
- ☆29 · Updated last year
- ☆194 · Updated last year
- ☆24 · Updated last year
- Fashion-VDM: Video Diffusion Model for Virtual Try-On. (☆19 · Updated last year)
- ☆35 · Updated 9 months ago
- [arXiv] On-device Sora: Enabling Diffusion-Based Text-to-Video Generation for Mobile Devices. (☆126 · Updated 4 months ago)