Cite this article: SU Tonghua, LI Songze, DENG Shengchun, YU Yang, BAI Wei. Massively scalable prototype learning for heterogeneous parallel computing architecture[J]. Journal of Harbin Institute of Technology, 2016, 48(11): 53. DOI: 10.11918/j.issn.0367-6234.2016.11.009
DOI: 10.11918/j.issn.0367-6234.2016.11.009
CLC number: TP181
Document code: A
Foundation items: National Natural Science Foundation of China (61203260); Key Program of the Natural Science Foundation of Heilongjiang Province (ZD2015017); Research Innovation Fund of Harbin Institute of Technology (HIT.NSRIF.2015083)
|
Massively scalable prototype learning for heterogeneous parallel computing architecture
SU Tonghua1, LI Songze1, DENG Shengchun1, YU Yang2, BAI Wei3
|
(1. School of Software, Harbin Institute of Technology, Harbin 150001, China; 2. Dalian Branch, China Construction Eighth Engineering Division Corp., Ltd., Dalian 116021, Liaoning, China; 3. Nokia Solutions and Networks, Hangzhou 310053, China)
|
Abstract:
Prototype learning algorithms suffer from an intensive computational burden in large-scale, large-category machine learning and pattern recognition tasks. To remove this bottleneck, a principled, scalable prototype learning framework is proposed, built on a heterogeneous parallel computing architecture of GPUs and CPUs. First, by splitting and rearranging the computing tasks, the framework shifts the intensive workload to the GPU, leaving the CPU with only a small amount of control flow to manage. Second, it adaptively chooses between a tiling strategy and a parallel-reduction strategy according to the task type. Evaluations on a large-scale handwritten Chinese character database show that learning in mini-batch mode achieves a speedup of up to 194X on a consumer-level GTX 680 card, rising to 638X on a newer GTX 980 card. Even in the harder-to-accelerate stochastic gradient descent (SGD) mode, a speedup of at least 30X is observed. The proposed framework offers high scalability while preserving recognition accuracy, effectively resolving the computational bottleneck of prototype learning.
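
The parallel-reduction strategy named in the abstract can be made concrete with a minimal CUDA sketch. This is not the authors' implementation: the kernel name nearestPrototype, the block size BLOCK, and the row-major prototype layout are assumptions chosen for illustration. One thread block computes squared Euclidean distances from a single sample to all class prototypes and reduces them in shared memory to the index of the nearest prototype, the distance step at the core of learning vector quantization.

```cuda
#include <cfloat>
#include <cuda_runtime.h>

// Illustrative sketch only; names, sizes, and layout are assumptions.
#define BLOCK 256  // assumed thread-block size (power of two)

__global__ void nearestPrototype(const float *sample,      // [dim]
                                 const float *prototypes,  // [numProto * dim], row-major
                                 int numProto, int dim,
                                 int *bestIdx) {
    __shared__ float sDist[BLOCK];
    __shared__ int   sIdx[BLOCK];

    // Each thread scans a strided subset of prototypes and keeps its local minimum.
    float best = FLT_MAX;
    int   idx  = -1;
    for (int p = threadIdx.x; p < numProto; p += blockDim.x) {
        float d = 0.0f;
        for (int k = 0; k < dim; ++k) {
            float diff = sample[k] - prototypes[p * dim + k];
            d += diff * diff;
        }
        if (d < best) { best = d; idx = p; }
    }
    sDist[threadIdx.x] = best;
    sIdx[threadIdx.x]  = idx;
    __syncthreads();

    // Tree-structured reduction in shared memory: halve the active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride && sDist[threadIdx.x + stride] < sDist[threadIdx.x]) {
            sDist[threadIdx.x] = sDist[threadIdx.x + stride];
            sIdx[threadIdx.x]  = sIdx[threadIdx.x + stride];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) *bestIdx = sIdx[0];
}
// Example launch for one sample: nearestPrototype<<<1, BLOCK>>>(dSample, dProto, N, D, dBest);
// In mini-batch mode one would plausibly launch one block per sample, which is
// where the bulk of the GPU-side speedup reported above would come from.
```
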
|
Key words: prototype learning; learning vector quantization; handwritten Chinese character recognition; parallel reduction; heterogeneous parallel computing