sisl / Double-Prong-Occupancy
Double-Prong ConvLSTM for Spatiotemporal Occupancy Prediction in Dynamic Environments
☆18 · Updated 4 years ago
Alternatives and similar repositories for Double-Prong-Occupancy
Users interested in Double-Prong-Occupancy are comparing it to the libraries listed below.
- ☆11 · Updated last year
- End-To-End Optimization of LiDAR Beam Configuration ☆34 · Updated 3 years ago
- ☆29 · Updated 3 years ago
- TensorFlow training pipeline and dataset for prediction of evidential occupancy grid maps from lidar point clouds ☆49 · Updated last year
- Lidar Point Cloud Super Resolution ☆13 · Updated 4 years ago
- Occupancy grid mapping using Python - KITTI dataset ☆29 · Updated 6 years ago
- [ICRA2024] Official code of the paper "Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous Driving via Differentiable Multi…" ☆71 · Updated last year
- [RA-L & IROS'22] Self-Supervised Scene Flow Estimation with 4-D Automotive Radar ☆73 · Updated 2 years ago
- Multi-Modal Sensor Fusion and Object Tracking for Autonomous Racing ☆92 · Updated 2 years ago
- PCPNet: An Efficient and Semantic-Enhanced Transformer Network for Point Cloud Prediction ☆32 · Updated 2 years ago
- [IEEE TIM] Det6D: A Ground-Aware Full-Pose 3D Object Detector for Improving Terrain Robustness ☆35 · Updated 2 years ago
- Tutorial on Autonomous Vehicles' mapping algorithm with Occupancy Grid Map and Dynamic Grid Map using KITTI Dataset ☆47 · Updated last year
- A Ground Mobile Robot Perception Dataset, IEEE RA-L & IEEE T-CYB ☆36 · Updated 4 years ago
- [IEEE RAL] Fast and Robust Registration of Partially Overlapping Point Clouds in Driving Applications ☆63 · Updated 2 years ago
- Semantic Guided Depth Estimation with Transformers Using Monocular Camera and Sparse Radar ☆41 · Updated 11 months ago
- A naive ROS interface for visualDet3D ☆28 · Updated 2 years ago
- A summary and list of open-source 3D multi-object tracking methods and datasets ☆41 · Updated 3 years ago
- A collection of autonomous driving resources ☆32 · Updated 6 years ago
- Regular voxelization of point clouds to acquire different information about each voxel.