vgthengane / Awesome-FMs-in-3D
A comprehensive survey on Multimodal Models in 3D
☆74 · Updated Jul 23, 2024
Alternatives and similar repositories for Awesome-FMs-in-3D
Users interested in Awesome-FMs-in-3D are comparing it to the repositories listed below.
- [NeurIPS'23] ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding ☆12 · Updated Dec 9, 2023
- [NeurIPS 2023] Implementation of the paper: Explore In-Context Learning for 3D Point Cloud Understanding ☆73 · Updated Nov 28, 2024
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆373 · Updated Oct 21, 2025
- ☆35 · Updated Apr 4, 2024
- [MM 2024] [Need only a 3090] MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors ☆125 · Updated Sep 11, 2024
- [NeurIPS 2024] A Unified Framework for 3D Scene Understanding ☆171 · Updated Jul 7, 2025
- [NeurIPS DB 2025] IR3D-Bench: Evaluating Vision-Language Model Scene Understanding as Agentic Inverse Rendering ☆43 · Updated Oct 15, 2025
- Official implementation of the paper "MAENet: Boost Image-guided Point Cloud Completion More Accurate and Even" (Information Fusion 2025) ☆16 · Updated Jun 4, 2025
- [NeurIPS 2024] PointMamba: A Simple State Space Model for Point Cloud Analysis ☆519 · Updated Mar 19, 2025
- ☆29 · Updated this week
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆81 · Updated Oct 10, 2024
- ☆73 · Updated Mar 29, 2025
- [CVPR 2025] Official implementation of the paper "Point-Cache: Test-time Dynamic and Hierarchical Cache for Robust and Generalizable Poin… ☆16 · Updated Dec 24, 2025
- Unifying 2D and 3D Vision-Language Understanding ☆121 · Updated Jul 23, 2025
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Langu… ☆311 · Updated Jul 17, 2024
- CVPR2025 ☆21 · Updated Aug 16, 2025
- (ICCV 2023) Official implementation of 'ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding with GPT and Prototype Guidance'… ☆59 · Updated Apr 18, 2024
- A collection of papers about domain adaptation for 3D object detection. Welcome to PR the works (papers, repositories) that are missed by… ☆23 · Updated Mar 27, 2025
- Template repository for generating semantic maps ☆16 · Updated Feb 4, 2019
- Official implementation of the WACV 2025 paper "3D Part Segmentation via Geometric Aggregation of 2D Visual Features" ☆25 · Updated Jun 8, 2025
- ☆43 · Updated Nov 1, 2024
- ☆39 · Updated Jul 19, 2024
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆104 · Updated Jul 9, 2025
- The code for the paper "Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models" (ICCV'23) ☆111 · Updated Dec 19, 2023
- KT-Net: Knowledge Transfer for Unpaired 3D Shape Completion (point cloud completion) ☆25 · Updated Oct 20, 2024
- ☆113 · Updated Sep 3, 2024
- This is the official implementation of "Clustering based Point Cloud Representation Learning for 3D Analysis" (accepted at ICCV 2023) ☆44 · Updated Mar 7, 2024
- GBlobs: Explicit Local Structure via Gaussian Blobs for Improved Cross-Domain LiDAR-based 3D Object Detection ☆31 · Updated Nov 15, 2025
- [CVPR 2025] Learning Class Prototypes for Unified Sparse Supervised 3D Object Detection ☆26 · Updated Apr 28, 2025
- Point Mamba ☆132 · Updated May 7, 2024
- Repo for visualization of MCSS outputs and its evaluation ☆17 · Updated Apr 26, 2021
- [ECCV 2024] Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding ☆125 · Updated Jul 2, 2024
- [ECCV 2024 Best Paper Candidate & TPAMI 2025] PointLLM: Empowering Large Language Models to Understand Point Clouds ☆973 · Updated Aug 14, 2025
- ☆25 · Updated Nov 6, 2024
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs ☆53 · Updated Jun 13, 2024
- ☆10 · Updated Jan 1, 2024
- The Most Faithful Implementation of Segment Anything (SAM) in 3D ☆353 · Updated Sep 11, 2024
- [ICLR'25] City-scale 3D Visual Grounding with Multi-modality LLMs ☆64 · Updated Oct 29, 2025
- [ICLR 2025 Spotlight] Multimodality Helps Few-shot 3D Point Cloud Semantic Segmentation ☆68 · Updated May 7, 2025