Citation: J. Yang, X. Cao, X. Zhang, Y. Cheng, Z. Qi, and S. Quan, “Instance by instance: An iterative framework for multi-instance 3D registration,” IEEE/CAA J. Autom. Sinica, 2024. doi: 10.1109/JAS.2024.125058
[1] A. P. Bustos and T.-J. Chin, “Guaranteed outlier removal for point cloud registration with correspondences,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 40, no. 12, pp. 2868–2882, 2017.
[2] D. Barath and J. Matas, “Graph-cut RANSAC,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2018, pp. 6733–6741.
[3] C. Choy, W. Dong, and V. Koltun, “Deep global registration,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2020, pp. 2514–2523.
[4] J. Lee, S. Kim, M. Cho, and J. Park, “Deep Hough voting for robust global registration,” in Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2021, pp. 15994–16003.
[5] J. Yang, Z. Huang, S. Quan, Z. Qi, and Y. Zhang, “SAC-COT: Sample consensus by sampling compatibility triangles in graphs for 3-D point cloud registration,” IEEE Trans. Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2021.
[6] X. Bai, Z. Luo, L. Zhou, H. Chen, L. Li, Z. Hu, H. Fu, and C.-L. Tai, “PointDSC: Robust point cloud registration using deep spatial consistency,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2021, pp. 15859–15869.
[7] Z. Chen, K. Sun, F. Yang, and W. Tao, “SC2-PCR: A second order spatial compatibility for efficient and robust point cloud registration,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2022, pp. 13221–13231.
[8] X. Zhang, J. Yang, S. Zhang, and Y. Zhang, “3D registration with maximal cliques,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2023, pp. 17745–17754.
[9] B. Drost, M. Ulrich, N. Navab, and S. Ilic, “Model globally, match locally: Efficient and robust 3D object recognition,” in Proc. of the IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 2010, pp. 998–1005.
[10] J. Guo, X. Xing, W. Quan, D.-M. Yan, Q. Gu, Y. Liu, and X. Zhang, “Efficient center voting for object detection and 6D pose estimation in 3D point cloud,” IEEE Trans. Image Processing, vol. 30, pp. 5072–5084, 2021. doi: 10.1109/TIP.2021.3078109
[11] W. Tang and D. Zou, “Multi-instance point cloud registration by efficient correspondence clustering,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2022, pp. 6667–6676.
[12] M. Yuan, Z. Li, Q. Jin, X. Chen, and M. Wang, “PointCLM: A contrastive learning-based framework for multi-instance point cloud registration,” in Proc. of the European Conf. on Computer Vision. Springer, 2022, pp. 595–611.
[13] Z. Yu, Z. Qin, L. Zheng, and K. Xu, “Learning instance-aware correspondences for robust multi-instance point cloud registration in cluttered scenes,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2024, pp. 19605–19614.
[14] C. Choy, J. Park, and V. Koltun, “Fully convolutional geometric features,” in Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2019, pp. 8958–8966.
[15] X. Bai, Z. Luo, L. Zhou, H. Fu, L. Quan, and C.-L. Tai, “D3Feat: Joint learning of dense detection and description of 3D local features,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2020, pp. 6359–6367.
[16] S. Huang, Z. Gojcic, M. Usvyatsov, A. Wieser, and K. Schindler, “Predator: Registration of 3D point clouds with low overlap,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2021, pp. 4267–4276.
[17] R. B. Rusu, N. Blodow, and M. Beetz, “Fast point feature histograms (FPFH) for 3D registration,” in Proc. of the IEEE Int. Conf. on Robotics and Automation, 2009, pp. 3212–3217.
[18] A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao, and T. Funkhouser, “3DMatch: Learning local geometric descriptors from RGB-D reconstructions,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2017, pp. 1802–1811.
[19] S. Ao, Q. Hu, B. Yang, A. Markham, and Y. Guo, “SpinNet: Learning a general surface descriptor for 3D point cloud registration,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2021, pp. 11753–11762.
[20] J. Yang, K. Xian, P. Wang, and Y. Zhang, “A performance evaluation of correspondence grouping methods for 3D rigid data matching,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 43, no. 6, pp. 1859–1874, 2021. doi: 10.1109/TPAMI.2019.2960234
[21] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. Journal of Computer Vision, vol. 60, pp. 91–110, 2004. doi: 10.1023/B:VISI.0000029664.99615.94
[22] A. Glent Buch, Y. Yang, N. Krüger, and H. Gordon Petersen, “In search of inliers: 3D correspondence by local and global voting,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2014, pp. 2067–2074.
[23] J. Yang, Y. Xiao, Z. Cao, and W. Yang, “Ranking 3D feature correspondences via consistency voting,” Pattern Recognition Letters, vol. 117, pp. 1–8, 2019. doi: 10.1016/j.patrec.2018.11.018
[24] J. Yang, X. Zhang, S. Fan, C. Ren, and Y. Zhang, “Mutual voting for ranking 3D correspondences,” IEEE Trans. Pattern Analysis and Machine Intelligence, 2023.
[25] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981. doi: 10.1145/358669.358692
[26] M. Leordeanu and M. Hebert, “A spectral technique for correspondence problems using pairwise constraints,” in Proc. of the IEEE Int. Conf. on Computer Vision, vol. 2, 2005, pp. 1482–1489.
[27] F. Tombari and L. Di Stefano, “Object recognition in 3D scenes with occlusions and clutter by Hough voting,” in Pacific-Rim Symposium on Image and Video Technology, 2010, pp. 349–355.
[28] J. Yang, H. Li, D. Campbell, and Y. Jia, “Go-ICP: A globally optimal solution to 3D ICP point-set registration,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2241–2254, 2015.
[29] A. Parra, T.-J. Chin, F. Neumann, T. Friedrich, and M. Katzmann, “A practical maximum clique algorithm for matching with pairwise constraints,” arXiv preprint arXiv:1902.01534, 2019.
[30] K. Fu, S. Liu, X. Luo, and M. Wang, “Robust point cloud registration framework based on deep graph matching,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2021, pp. 8893–8902.
[31] R. Yao, S. Du, W. Cui, A. Ye, F. Wen, H. Zhang, Z. Tian, and Y. Gao, “Hunter: Exploring high-order consistency for point cloud registration with severe outliers,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 45, no. 12, pp. 14760–14776, 2023. doi: 10.1109/TPAMI.2023.3312592
[32] Q.-Y. Zhou, J. Park, and V. Koltun, “Fast global registration,” in Proc. of the European Conf. on Computer Vision. Springer, 2016, pp. 766–782.
[33] H. Yang, J. Shi, and L. Carlone, “TEASER: Fast and certifiable point cloud registration,” IEEE Trans. Robotics, vol. 37, no. 2, pp. 314–333, 2020.
[34] J. Yang, J. Chen, S. Quan, W. Wang, and Y. Zhang, “Correspondence selection with loose-tight geometric voting for 3-D point cloud registration,” IEEE Trans. Geoscience and Remote Sensing, vol. 60, pp. 1–14, 2022.
[35] S. Quan and J. Yang, “Compatibility-guided sampling consensus for 3-D point cloud registration,” IEEE Trans. Geoscience and Remote Sensing, vol. 58, no. 10, pp. 7380–7392, 2020. doi: 10.1109/TGRS.2020.2982221
[36] Y. Cheng, Z. Huang, S. Quan, X. Cao, S. Zhang, and J. Yang, “Sampling locally, hypothesis globally: Accurate 3D point cloud registration with a RANSAC variant,” Visual Intelligence, vol. 1, no. 1, p. 20, 2023. doi: 10.1007/s44267-023-00022-x
[37] X. Huang, G. Mei, and J. Zhang, “Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2020, pp. 11366–11374.
[38] H. Yu, F. Li, M. Saleh, B. Busam, and S. Ilic, “CoFiNet: Reliable coarse-to-fine correspondences for robust point cloud registration,” Advances in Neural Information Processing Systems, vol. 34, pp. 23872–23884, 2021.
[39] Z. Qin, H. Yu, C. Wang, Y. Guo, Y. Peng, and K. Xu, “Geometric transformer for fast and robust point cloud registration,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2022, pp. 11143–11152.
[40] S. Ao, Q. Hu, H. Wang, K. Xu, and Y. Guo, “BUFFER: Balancing accuracy, efficiency, and generalizability in point cloud registration,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2023, pp. 1255–1264.
[41] T. Birdal and S. Ilic, “Point pair features based object detection and pose estimation revisited,” in Proc. of the Int. Conf. on 3D Vision, 2015, pp. 527–535.
[42] S. Hinterstoisser, V. Lepetit, N. Rajkumar, and K. Konolige, “Going further with point pair features,” in Proc. of the European Conf. on Computer Vision. Springer, 2016, pp. 834–848.
[43] J. Vidal, C.-Y. Lin, and R. Martí, “6D pose estimation using an improved method based on point pair features,” in Proc. of the 4th Int. Conf. on Control, Automation and Robotics, 2018, pp. 405–409.
[44] L. Magri and A. Fusiello, “T-Linkage: A continuous relaxation of J-Linkage for multi-model fitting,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2014, pp. 3954–3961.
[45] L. Magri and A. Fusiello, “Multiple model fitting as a set coverage problem,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2016, pp. 3318–3326.
[46] L. Magri and A. Fusiello, “Robust multiple model fitting with preference analysis and low-rank approximation,” in Proc. of the British Machine Vision Conf., 2015, pp. 20.1–20.12.
[47] D. Barath and J. Matas, “Progressive-X: Efficient, anytime, multi-model fitting algorithm,” in Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2019, pp. 3780–3788.
[48] D. Barath, D. Rozumnyi, I. Eichhardt, L. Hajder, and J. Matas, “Progressive-X+: Clustering in the consensus space,” arXiv preprint arXiv:2103.13875, 2021.
[49] D. Barath and J. Matas, “Multi-class model fitting by energy minimization and mode-seeking,” in Proc. of the European Conf. on Computer Vision, 2018, pp. 221–236.
[50] F. Kluger, E. Brachmann, H. Ackermann, C. Rother, M. Y. Yang, and B. Rosenhahn, “CONSAC: Robust multi-model fitting by conditional sample consensus,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2020, pp. 4634–4643.
[51] Z. Li, J. Ma, and G. Xiao, “Density-guided incremental dominant instance exploration for two-view geometric model fitting,” IEEE Trans. Image Processing, vol. 32, pp. 5408–5422, 2023. doi: 10.1109/TIP.2023.3318945
[52] W. Yin, S. Lin, Y. Lu, and H. Wang, “Diverse consensuses paired with motion estimation-based multi-model fitting,” in Proc. of the ACM Int. Conf. on Multimedia, 2024.
[53] E. Rodolà, A. Albarelli, F. Bergamasco, and A. Torsello, “A scale independent selection process for 3D object recognition in cluttered scenes,” Int. Journal of Computer Vision, vol. 102, pp. 129–145, 2013. doi: 10.1007/s11263-012-0568-x
[54] A. Albarelli, E. Rodolà, and A. Torsello, “A game-theoretic approach to fine surface registration without initial motion estimation,” in Proc. of the IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 2010, pp. 430–437.
[55] J. W. Weibull, Evolutionary Game Theory. MIT Press, 1997.
[56] J. Yang, Z. Huang, S. Quan, Q. Zhang, Y. Zhang, and Z. Cao, “Toward efficient and robust metrics for RANSAC hypotheses and 3D rigid registration,” IEEE Trans. Circuits and Systems for Video Technology, vol. 32, no. 2, pp. 893–906, 2021.
[57] S. Quan, J. Ma, F. Hu, B. Fang, and T. Ma, “Local voxelized structure for 3D binary feature representation and robust registration of point clouds from low-cost sensors,” Information Sciences, vol. 444, pp. 153–171, 2018. doi: 10.1016/j.ins.2018.02.070
[58] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “PointNet++: Deep hierarchical feature learning on point sets in a metric space,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[59] A. Avetisyan, M. Dahnert, A. Dai, M. Savva, A. X. Chang, and M. Nießner, “Scan2CAD: Learning CAD model alignment in RGB-D scans,” in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2019, pp. 2609–2618.
[60] M. Savva, F. Yu, H. Su, M. Aono, B. Chen, D. Cohen-Or, W. Deng, H. Su, S. Bai, X. Bai et al., “SHREC’16 track: Large-scale 3D shape retrieval from ShapeNet Core55,” in Proc. of the Eurographics Workshop on 3D Object Retrieval, vol. 10, 2016.
[61] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner, “ScanNet: Richly-annotated 3D reconstructions of indoor scenes,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 2017, pp. 2432–2443.
[62] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979. doi: 10.1109/TSMC.1979.4310076
[63] J. Yang, Z. Huang, S. Quan, Z. Cao, and Y. Zhang, “RANSACs for 3D rigid registration: A comparative evaluation,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 10, pp. 1861–1878, 2022. doi: 10.1109/JAS.2022.105500