Supervised by: Ministry of Industry and Information Technology of the People's Republic of China
Sponsored by: Harbin Institute of Technology
Editor-in-Chief: LI Longqiu
ISSN 0367-6234 | CN 23-1235/T

Cite this article: REN Bingyin, WEI Kun, DAI Yong. A novel method for small object autonomous sorting of robotic manipulator in 3D clutter scene[J]. Journal of Harbin Institute of Technology, 2019, 51(7): 42. DOI: 10.11918/j.issn.0367-6234.201808133
A novel method for small object autonomous sorting of robotic manipulator in 3D clutter scene
REN Bingyin, WEI Kun, DAI Yong
(School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China)
Abstract: In a 3D cluttered scene where large and small objects coexist, a robotic manipulator cannot directly perceive the small objects distributed within its operating field of view using a 3D visual sensor. To solve this problem, a hybrid vision-system configuration is proposed that combines a fixed global Kinect depth camera with a moving camera mounted on the manipulator's end effector (an eye-in-hand camera). The fixed global Kinect depth camera perceives and acquires the point clouds of the large objects within its field of view, from which their poses are recognized and estimated; path planning then guides the manipulator to a position above a large object, where the eye-in-hand camera is activated to capture close-range images of the small objects. In the offline phase, a CAD model of the small object is obtained, and a virtual 2D camera captures a series of 2D views of the object at different poses and radii on the surface of a virtual sphere centered on the object; these views are stored in a 3D shape template database. In the online phase, the scene image captured by the real eye-in-hand camera is searched and matched hierarchically, level by level, over an image pyramid to find all instances that match the object templates and to compute their 2D poses; after a series of transformations, an initial 3D pose in the camera coordinate frame is obtained and then refined by a nonlinear least-squares method. Pose-estimation accuracy experiments and cluttered-object sorting experiments were conducted with an ABB manipulator, a Microsoft Kinect V2 sensor, and a Micro Vision industrial camera, and a checkerboard calibration target was used to measure the ground-truth pose of the object. The results show a position accuracy of 0.48 mm, an orientation accuracy of 0.62°, an average recognition time of 1.85 s, and a recognition rate of 98%, far higher than those of traditional feature-based and descriptor-based pose estimation methods, demonstrating the effectiveness and feasibility of the proposed method.
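The offline phase described above amounts to sampling virtual-camera viewpoints on spheres of several radii around the CAD model and rendering one template per viewpoint. The sketch below illustrates only that sampling step; it is not the authors' code, and the Fibonacci-lattice spacing, the function names, and the commented-out renderer call are assumptions made for illustration.

import numpy as np

def sample_view_sphere(radii, n_views):
    # Virtual-camera centers on spheres of several radii around the
    # object origin. A Fibonacci lattice keeps the viewpoints roughly
    # uniform (an assumed choice; the paper does not specify a scheme).
    golden = np.pi * (3.0 - np.sqrt(5.0))          # golden angle
    centers = []
    for r in radii:
        for i in range(n_views):
            z = 1.0 - 2.0 * (i + 0.5) / n_views    # z in (-1, 1)
            rho = np.sqrt(1.0 - z * z)
            theta = golden * i
            centers.append(r * np.array([rho * np.cos(theta),
                                         rho * np.sin(theta),
                                         z]))
    return np.array(centers)

def look_at_pose(cam_center, up=np.array([0.0, 0.0, 1.0])):
    # Rotation of a camera at cam_center looking at the object origin.
    forward = -cam_center / np.linalg.norm(cam_center)
    if abs(forward @ up) > 0.99:                   # avoid a degenerate up vector
        up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    return np.stack([right, true_up, forward], axis=1)  # camera axes as columns

# Offline template database: one sampled pose per entry.
templates = []
for c in sample_view_sphere(radii=[0.2, 0.3, 0.4], n_views=200):
    R = look_at_pose(c)
    # In practice a renderer would produce the 2D view of the CAD model here:
    # view = render_cad_view(cad_model, R, c)      # hypothetical helper
    templates.append((R, c))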
Key words: robotic manipulator; 3D perception; small object; eye-in-hand camera; CAD model; template matching; autonomous sorting
DOI: 10.11918/j.issn.0367-6234.201808133
CLC number: TP391
Document code: A
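For the online phase, the abstract describes a hierarchical, level-by-level search over an image pyramid for all instances matching the stored templates. The following sketch shows the general coarse-to-fine scheme, using OpenCV's normalized cross-correlation as a stand-in for the paper's shape-based template similarity; the function name, window margin, and score threshold are illustrative assumptions.

import cv2
import numpy as np

def match_pyramid(scene, template, levels=4, thresh=0.8):
    # Coarse-to-fine template matching: exhaustive search only at the
    # coarsest pyramid level, then track each candidate down the pyramid,
    # re-matching inside a small window around its projected position.
    scenes, temps = [scene], [template]
    for _ in range(levels - 1):                    # level 0 = full resolution
        scenes.append(cv2.pyrDown(scenes[-1]))
        temps.append(cv2.pyrDown(temps[-1]))

    res = cv2.matchTemplate(scenes[-1], temps[-1], cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(res >= thresh)               # candidates at coarsest level
    cands = list(zip(xs, ys))

    for lv in range(levels - 2, -1, -1):           # refine toward level 0
        t_h, t_w = temps[lv].shape[:2]
        refined = []
        for x, y in cands:
            x0, y0 = max(2 * x - 4, 0), max(2 * y - 4, 0)
            win = scenes[lv][y0:y0 + t_h + 8, x0:x0 + t_w + 8]
            if win.shape[0] < t_h or win.shape[1] < t_w:
                continue
            r = cv2.matchTemplate(win, temps[lv], cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(r)
            if score >= thresh:
                refined.append((x0 + loc[0], y0 + loc[1]))
        cands = refined
    return cands                                   # match corners at full resolution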
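Finally, the initial 3D pose recovered from a matched 2D pose is refined with nonlinear least squares. A generic sketch of such a refinement, minimizing the reprojection error of model points with scipy.optimize.least_squares, is given below; the pinhole projection, the Rodrigues parameterization, and all names are assumptions rather than the paper's implementation.

import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    # Axis-angle vector -> rotation matrix (Rodrigues' formula).
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def reprojection_residuals(params, model_pts, image_pts, K_cam):
    # Residuals between observed 2D points and projections of the 3D
    # model points under the pose params = [rotation vector | translation].
    R, t = rodrigues(params[:3]), params[3:]
    cam = model_pts @ R.T + t                      # model -> camera frame
    proj = cam @ K_cam.T
    proj = proj[:, :2] / proj[:, 2:3]              # perspective division
    return (proj - image_pts).ravel()

def refine_pose(x0, model_pts, image_pts, K_cam):
    # x0: initial pose from template matching, after the transformation
    # chain described in the abstract; K_cam: camera intrinsic matrix.
    res = least_squares(reprojection_residuals, x0, method="lm",
                        args=(model_pts, image_pts, K_cam))
    return rodrigues(res.x[:3]), res.x[3:]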
