Feature point extraction and matching method of humanoid-eye binocular images
Affiliation:

(Key Laboratory of Road Construction Technology and Equipment (Chang'an University), Ministry of Education, Xi'an 710064, China)

CLC Number: TP391


Abstract:

Imitating human visual characteristics has become a hot and challenging research topic as machines move toward intelligent perception and intelligent cognition. Human eyes are more sensitive to the edges of objects in a scene because edges contain abundant information. To realize this visual characteristic in machines, a feature point extraction and matching method for humanoid-eye binocular images is proposed. First, the smallest univalue segment assimilating nucleus (SUSAN) operator, which has outstanding edge feature extraction capability, is selected as the feature detector. Then, the sampling neighborhood of the scale-invariant feature transform (SIFT) descriptor is improved: gradient information far from the feature point, which is distorted by differences in viewpoint and viewing direction, is suppressed to reduce matching errors, while the main gradient information close to the feature point is retained. Next, a multi-scale structure is established for the input image, and the main gradient information of the same feature is computed at different scales. Finally, the square-root kernel is used to compare the similarity of the gradient information, and a multi-scale descriptor is generated to enhance the uniqueness of the description vector. In the experiments, a variety of evaluation indexes are used to assess the proposed multi-scale descriptor and the overall algorithm, which are compared with the classical SIFT, speeded-up robust features (SURF), and Root-SIFT algorithms as well as the more recent boosted efficient binary local image descriptor (BEBLID), SuperGlue, and DFM algorithms. The results show that the proposed multi-scale descriptor improves the matching accuracy of edge feature points and adapts better to illumination changes, thereby demonstrating better matching stability. Compared with the other algorithms, the proposed algorithm achieves higher matching accuracy.
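The square-root kernel comparison in the final step is the same idea used by Root-SIFT: L1-normalize each gradient histogram and take its element-wise square root, so that a plain Euclidean or dot-product comparison of the mapped vectors evaluates the Hellinger (square-root) kernel on the originals. A minimal sketch, assuming non-negative 128-D descriptors (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def sqrt_kernel_descriptors(desc, eps=1e-12):
    """Root-SIFT-style mapping: L1-normalize each descriptor row, then
    take the element-wise square root. Euclidean distance between the
    mapped vectors then corresponds to the Hellinger distance between
    the original gradient histograms."""
    desc = np.asarray(desc, dtype=np.float64)
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(desc)

# Toy descriptors standing in for 128-D SIFT gradient histograms.
rng = np.random.default_rng(0)
d = rng.random((4, 128))
r = sqrt_kernel_descriptors(d)

# After the mapping every vector has (near-)unit L2 norm, so a dot
# product between two mapped descriptors evaluates the square-root
# kernel directly.
print(np.linalg.norm(r, axis=1))
```

Because sum(sqrt(x)^2) equals the L1 norm of the normalized histogram, the mapped vectors are automatically L2-normalized, which is what makes the kernel comparison a drop-in replacement for ordinary descriptor distance.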

History
  • Received: May 17, 2023
  • Online: April 12, 2024