• Volume 50, Issue 11, 2018 Table of Contents
    • A tissue-specific protein interaction network construction method for rice

      2018, 50(11):1-9. DOI: 10.11918/j.issn.0367-6234.201803117


      Abstract: The expression patterns of genes and protein interactions in specific tissues provide an important framework for studying gene regulation, protein function, and cellular processes. Compared with the interactome research progress of other model organisms, research on tissue-specific protein interactions in higher plants has advanced very slowly, especially in rice. With this motivation, we propose a computational framework to predict tissue-specific protein-protein networks for rice. The framework consists of three parts: (a) identification of tissue-specific genes by integrating multiple datasets under a unified criterion; (b) prediction and evaluation of the protein interaction network based on the resources of six model organisms using a novel interolog mapping method; (c) construction of a tissue-specific subnet for each tissue and filtering of highly reliable interactions based on co-expression correlation. To evaluate the effectiveness of the framework, PTSN4R (Predicted Tissue-Specific Networks for Rice) is constructed and analyzed. PTSN4R is the first integrated database of tissue-specific protein interactions for rice, containing tissue-specific genes and interaction networks for 23 rice tissues. It also provides a tissue-specific perspective for conveniently analyzing gene expression and protein interactions. These resources can help researchers understand the intrinsic regulatory mechanisms of rice growth and development and provide clues for increasing rice yield. In addition, the proposed framework can easily be extended to other species to advance research on tissue-specific protein interactions.
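
      A minimal sketch of the co-expression filtering in part (c), assuming a hypothetical expression table and a simple Pearson-correlation cutoff (the paper's actual criterion may differ):

```python
import numpy as np

def filter_by_coexpression(interactions, expr, threshold=0.5):
    """Keep only protein pairs whose encoding genes are co-expressed.

    interactions: list of (gene_a, gene_b) pairs predicted by interolog mapping
    expr: dict mapping gene id -> expression vector across tissue samples
    threshold: minimum Pearson correlation (illustrative cutoff, not from the paper)
    """
    reliable = []
    for a, b in interactions:
        if a in expr and b in expr:
            r = np.corrcoef(expr[a], expr[b])[0, 1]
            if r >= threshold:
                reliable.append((a, b, r))
    return reliable

# Toy usage: the correlated pair passes the filter, the uncorrelated pair does not.
expr = {"OsA": np.array([1.0, 2.0, 3.0, 4.0]),
        "OsB": np.array([1.1, 2.2, 2.9, 4.1]),
        "OsC": np.array([4.0, 1.0, 3.0, 2.0])}
print(filter_by_coexpression([("OsA", "OsB"), ("OsA", "OsC")], expr))
```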

    • Study on the springback and quality of S355J2W steel in resistance heating bending

      2018, 50(11):10-16. DOI: 10.11918/j.issn.0367-6234.201805117


      Abstract: To investigate the influence of current density on the springback and quality of S355J2W steel parts formed by resistance heating bending, V-shaped and cap-shaped resistance heating bending experiments were designed. The springback of the steel parts after forming was studied by varying the current density during resistance heating. Based on the test results, the part with the highest forming accuracy and the smallest springback was selected for comparison with a part formed at room temperature to study the advantages of resistance heating bending. The two selected parts were divided into 9 regions. The microstructure and mechanical properties of these regions, such as strength, hardness, elongation, and impact toughness, were used as the standard for evaluating the quality of the formed parts. Moreover, the differences in mechanical properties between the two parts were explained by microstructural observation. The results showed that the springback of S355J2W steel in resistance heating bending decreased with increasing current density. Compared with forming at room temperature, the resistance heating bending process can greatly reduce springback and improve part quality. The mechanical properties were also improved by resistance heating bending.

    • DFR value testing and theoretical calculation on laser beam welded structures of fuselage panel

      2018, 50(11):17-22. DOI: 10.11918/j.issn.0367-6234.201709120


      Abstract: The aircraft fuselage panel is a key location for fatigue. To evaluate the fatigue performance of laser beam welded (LBW) structures, which were first applied to aircraft fuselage panels by Airbus in place of traditional riveted structures, two methods were used: experimental determination and statistical theoretical calculation. Fatigue tests, fracture analyses, and detail fatigue rating (DFR) analyses were carried out on two groups of laser beam welded structures made of the new aluminum-lithium alloy 2060-T3 and obtained by different post-weld heat treatment (PWHT) processes (solid solution + aging treatment, and aging treatment alone), and the test DFR values were acquired along with the theoretical DFR values. The results show that the test DFR value for solid solution + aging treatment is 118.0 MPa and that for aging treatment alone is 113.7 MPa. The corresponding theoretical DFR values are 115.8 MPa (relative error 1.86%) and 112.2 MPa (relative error 1.32%). In both structures, the cracks originate at the weld toe. The crack propagates through the thickness of the plate, and the crack propagation area is characterized by a brittle cleavage river pattern. The difference is that the final fracture region of the former is characterized by micro-void coalescence fracture with many small dimples, while that of the latter is intergranular fracture with many grains on the fracture surface. The theoretical analysis can be used to recover the Weibull shape parameter, to verify the applicability of the statistical analysis method, and to provide a reference for the DFR values of laser beam welded fuselage panel structures.
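
      The quoted relative errors follow directly from the reported values, taking the test DFR as the reference:

```latex
\frac{|118.0 - 115.8|}{118.0} \approx 1.86\,\%, \qquad
\frac{|113.7 - 112.2|}{113.7} \approx 1.32\,\%
```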

    • Image keyframe-based Visual-Depth Map establishment method

      2018, 50(11):23-31. DOI: 10.11918/j.issn.0367-6234.201804142


      Abstract: In indoor visual positioning systems, an offline Visual Map database is usually used to store database images, and the user position is then estimated in the online phase by comparison with the images in the Visual Map database. The database can be established in the offline phase with a point-by-point sampling method or a video stream sampling method. However, because of the similarity of the database images, redundancy exists between them, which leads to greater positioning time consumption in the online phase. Therefore, exploiting the similarity between successive images, a Visual-Depth Map method that reduces the database scale is proposed based on image keyframes. In the offline phase, the method uses a Kinect sensor to acquire image information and depth information simultaneously. The keyframe sequence is then selected from the original database images by an image keyframe algorithm based on image similarity, and the Visual-Depth Map is established from it. In the online phase, the query image captured by the user is retrieved and matched with the most similar database image in the Visual-Depth Map. The EPnP algorithm is employed to estimate the query camera pose and thereby obtain the user position. Experimental results show that the proposed Visual-Depth Map method based on image keyframes reduces the database scale and positioning time consumption compared with traditional methods, while maintaining high positioning accuracy.
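
      A minimal sketch of similarity-based keyframe selection as described above; the similarity measure and threshold are illustrative assumptions, not the paper's:

```python
import numpy as np

def select_keyframes(frames, similarity, threshold=0.7):
    """Greedy keyframe selection: a frame becomes a keyframe when its
    similarity to the last keyframe drops below the threshold.

    frames: sequence of images (e.g., arrays from a Kinect RGB stream)
    similarity: function (img_a, img_b) -> score in [0, 1]
    threshold: illustrative cutoff, not the paper's actual parameter
    """
    keyframes = [frames[0]]                 # always keep the first frame
    for frame in frames[1:]:
        if similarity(keyframes[-1], frame) < threshold:
            keyframes.append(frame)         # scene changed enough: keep it
    return keyframes

# Toy similarity: one minus the mean absolute pixel difference.
def sim(a, b):
    return 1.0 - np.mean(np.abs(a - b))

frames = [np.full((4, 4), v) for v in (0.0, 0.05, 0.5, 0.55, 0.9)]
print(len(select_keyframes(frames, sim)))   # fewer keyframes than input frames
```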

    • A sentiment analysis model with the combination of deep learning and ensemble learning

      2018, 50(11):32-39. DOI: 10.11918/j.issn.0367-6234.201709078


      Abstract: With the development of social media, users' evaluations have become a key factor in network decision-making. To analyze the emotional tendency of social media users' evaluations more accurately and to support public opinion analysis and recommendation algorithms, a sentiment analysis model called Bi-LSTMM-B (Bi-directional Long Short Term Memory Model with Maxout neurons in a Bagging algorithm) is proposed. Combining a deep learning model with the idea of ensemble learning, the model improves both the Bi-LSTM model and the Bagging algorithm. On the one hand, the Bi-LSTMM model introduces Maxout neurons into the Bi-LSTM model to alleviate the vanishing gradient problem during stochastic gradient descent training and optimize the training process. On the other hand, multiple sentiment classifiers are trained on the basis of the Bagging algorithm. Out-of-bag data are used to assign each classifier a weight for each category according to its performance, improving the voting strategy and enhancing the generalization ability of the model. The experimental results indicate that the accuracy of the Bi-LSTMM-B model is improved by 12.08% compared with the traditional LSTM model, and it is also superior to the other contrast models in recall rate and F value. Of this gain, the introduction of Maxout neurons contributes a relative improvement of 8.28% in sentiment analysis accuracy, while the improved voting strategy accounts for 4.06%. This proves that combining deep learning and ensemble learning improves the accuracy of sentiment analysis, which is of research value.
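
      A minimal sketch of the out-of-bag weighted voting idea; the per-class weighting scheme shown is an assumption based on the abstract, not the paper's exact formula:

```python
import numpy as np

def weighted_vote(classifier_preds, weights):
    """Category-weighted voting across an ensemble.

    classifier_preds: (n_classifiers,) predicted class ids for one sample
    weights: (n_classifiers, n_classes) per-class weights estimated from
             each classifier's out-of-bag accuracy on that class
             (illustrative scheme, not the paper's exact formula)
    """
    n_classes = weights.shape[1]
    scores = np.zeros(n_classes)
    for clf_id, pred in enumerate(classifier_preds):
        scores[pred] += weights[clf_id, pred]   # each vote counts by its class-specific weight
    return int(np.argmax(scores))

# Three classifiers, two classes; classifier 2 is very reliable on class 1.
preds = np.array([0, 0, 1])
weights = np.array([[0.4, 0.5],
                    [0.4, 0.5],
                    [0.6, 0.95]])
print(weighted_vote(preds, weights))  # class 1 wins: 0.95 > 0.4 + 0.4
```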

    • An improved hybrid grey wolf optimization algorithm based on Tent mapping

      2018, 50(11):40-49. DOI: 10.11918/j.issn.0367-6234.201806096


      Abstract: Because the grey wolf optimizer easily falls into local optima and does not take an individual's own experience into account, this paper proposes a grey wolf optimization algorithm based on particle swarm optimization (PSO_GWO). First, the initial population is generated through the Tent chaotic map, which increases population diversity. Then, non-linear control parameters are adopted: their decline is slow in the early stage, which increases global search ability and prevents the algorithm from falling into local optima, and fast in the later stage, which increases local search ability and improves the overall convergence speed. Finally, the idea of particle swarm optimization is introduced to update the position of each wolf by combining the individual's best value with the population's best value, so as to preserve the best position information of the wolves. To verify the effectiveness of the algorithm, it was compared with three other algorithms. The experimental results suggest that the solutions found by the proposed algorithm are better than those of the other three algorithms on both unimodal and multimodal functions. The PSO_GWO algorithm also outperformed the IGWO algorithm (an improved grey wolf optimization algorithm) in time complexity, and as the population size increased, the convergence value of the PSO_GWO algorithm gradually approached the ideal value. The proposed algorithm can therefore quickly find the global optimal solution and has better robustness.
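
      A minimal sketch of the two pieces the abstract names: Tent-map population initialization, and a position update blending GWO leader guidance with PSO-style memory (the coefficients and the exact blend are assumptions, not the paper's equations):

```python
import numpy as np

def tent_init(pop_size, dim, lb, ub):
    """Initialize a population with the Tent chaotic map for better diversity."""
    x = np.random.rand(dim) * 0.999 + 1e-4          # avoid the map's fixed points
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = np.where(x < 0.5, 2 * x, 2 * (1 - x))   # Tent map iteration
        pop[i] = lb + x * (ub - lb)
    return pop

def pso_gwo_step(X, alpha, pbest, gbest, a, c1=0.5, c2=0.5):
    """One position update mixing GWO leader guidance with PSO memory:
    the GWO move toward the alpha wolf is blended with attraction to the
    individual best (pbest) and global best (gbest), so each wolf also
    exploits its own experience."""
    r1, r2 = np.random.rand(*X.shape), np.random.rand(*X.shape)
    A, C = 2 * a * r1 - a, 2 * r2                   # standard GWO coefficients
    X_gwo = alpha - A * np.abs(C * alpha - X)       # move toward the alpha wolf
    return X_gwo + c1 * np.random.rand() * (pbest - X) + \
           c2 * np.random.rand() * (gbest - X)

pop = tent_init(pop_size=30, dim=10, lb=-100.0, ub=100.0)
alpha = pop[np.argmin((pop ** 2).sum(axis=1))]      # best wolf on f(x) = ||x||^2
print(pso_gwo_step(pop, alpha, pbest=pop, gbest=alpha, a=2.0).shape)  # (30, 10)
```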

    • Quantum particle swarm optimization algorithm for high-dimensional multi-modal optimization

      2018, 50(11):50-58. DOI: 10.11918/j.issn.0367-6234.201806065


      Abstract: To solve high-dimensional multi-modal optimization problems in practical engineering, a multi-strategy evolutionary quantum particle swarm optimization (QPSO) algorithm based on dynamic neighborhoods is proposed. To address the premature convergence of particles in the QPSO algorithm, this paper first defines a dynamic neighborhood selection mechanism to maintain the activity of the population, and then combines the dynamic neighborhood mechanism with local attractor update equations for three different strategies to maintain the diversity of population evolution. To prevent the evolutionary direction of the algorithm from diverging, the local attractor update strategy that converges to the global optimal solution is given greater weight. Finally, the comprehensive evaluation method of the wolf pack optimization algorithm is introduced to expand the optimal solution space. Experiments based on different types of high-dimensional multi-modal benchmark functions show that, compared with four other optimization algorithms, the proposed algorithm has obvious advantages in convergence accuracy and stability, and this advantage becomes more prominent as the test dimensionality increases, demonstrating better performance on high-dimensional multi-modal optimization problems. The comprehensive evaluation method introduced in this paper is effective a large number of times across all test functions; it provides an effective means of finding a more favorable direction for the next evolution step and further enhances convergence accuracy.
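
      For reference, the textbook QPSO position update, with a (dynamic) neighborhood best standing in for the global best as the attractor; the paper's three strategy-specific update equations are not reproduced here:

```python
import numpy as np

def qpso_update(X, pbest, nbest, mbest, beta=0.75):
    """Canonical QPSO position update with a neighborhood best.

    Replacing the global best with a neighborhood best `nbest` is one way to
    realize the abstract's neighborhood-based attractor strategies; this is
    the textbook QPSO form, not the paper's exact equations.
    """
    phi = np.random.rand(*X.shape)
    u = np.random.uniform(1e-12, 1.0, X.shape)     # avoid log(1/0)
    p = phi * pbest + (1 - phi) * nbest            # local attractor
    sign = np.where(np.random.rand(*X.shape) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)

X = np.zeros((5, 3)); pbest = np.ones((5, 3)); nbest = np.full((5, 3), 0.5)
mbest = pbest.mean(axis=0)                         # mean of personal bests
print(qpso_update(X, pbest, nbest, mbest).shape)   # (5, 3)
```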

    • Multi-focus image fusion based on SIFT dictionary learning and guided filtering

      2018, 50(11):59-66. DOI: 10.11918/j.issn.0367-6234.201806014


      Abstract: Most multi-focus image fusion algorithms cannot simultaneously improve local detail retention, spatial continuity, and robustness to unregistered images. To address this, this paper proposes a multi-focus image fusion algorithm based on SIFT dictionary learning and guided filtering. By learning sub-dictionaries, the algorithm overcomes the problem that the low-rank representation of an image can capture the global structure but cannot preserve the local structure. The classification of the sub-dictionaries exploits the translation invariance, scale invariance, and other properties of SIFT to eliminate the fusion artifacts of unregistered images. In addition, adaptive-window guided filtering is performed during the fusion of the low-rank representation coefficients, which increases the spatial continuity of the fused image: pixels with rich texture are assigned a small window, while weakly textured pixels are assigned a large window. Six groups of data, including three groups of widely used images and three groups of real-world images, were selected to verify the validity of the proposed algorithm. Experimental results show that the algorithm outperforms current mainstream multi-focus image fusion algorithms in both qualitative and quantitative analysis.
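
      A minimal sketch of the adaptive-window rule (rich texture gets a small window, weak texture a large one); the texture measure, threshold, and window sizes are illustrative assumptions:

```python
import numpy as np

def adaptive_window(img, small=4, large=16, grad_thresh=0.1):
    """Pick a guided-filter window size per pixel from local texture strength.

    Strongly textured pixels get the small window, weakly textured pixels
    the large window; the gradient-magnitude test and the sizes are
    assumptions for illustration, not the paper's parameters.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    texture = np.hypot(gx, gy)                     # local gradient magnitude
    return np.where(texture > grad_thresh, small, large)

img = np.random.rand(32, 32)
print(np.unique(adaptive_window(img)))             # both window sizes in use
```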

    • Ultrasound image fixed-point measurement based on a landmark detection method

      2018, 50(11):67-73. DOI: 10.11918/j.issn.0367-6234.201711029


      Abstract: Fixed-point measurement of ultrasound images is very important in clinical medicine. Because of the heavy speckle noise and blurred edges in ultrasound images, landmark detection in echocardiography is quite challenging. Meanwhile, current landmark detection algorithms optimize a single landmark position, making it difficult to obtain an accurate distance while guaranteeing the accuracy of each landmark. To obtain more accurate landmark positions and inter-landmark distances in ultrasound images, a cascaded convolutional neural network is proposed for automatic landmark detection in echocardiography. Our framework adopts two stages of carefully designed deep convolutional networks that predict landmark locations in a coarse-to-fine manner. First, the networks at the first stage coarsely estimate the positions of the two landmarks, and a patch containing both landmarks is taken as the input to the second stage. A loss function with a distance correction term is then proposed to optimize the second network, which produces the final landmark positions based on the output of the first stage. Experimental results show that, compared with a traditional network and regression trees, the proposed method not only guarantees the accuracy of the landmark positions but also increases the accuracy of the distance; its distance accuracy is nearly 30% higher than that of a traditional cascaded convolutional neural network.
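
      A minimal sketch of a loss with a distance correction term in the spirit of the abstract; the weighting and the exact form are assumptions:

```python
import numpy as np

def loss_with_distance_correction(pred, gt, lam=0.5):
    """Landmark regression loss with a distance correction term.

    pred, gt: (2, 2) arrays holding two landmarks as (x, y) rows.
    The first term penalizes per-landmark position error; the second
    penalizes error in the distance between the two landmarks. `lam` is
    an assumed weight; the paper defines its own form of this term.
    """
    pos_err = np.sum((pred - gt) ** 2)
    d_pred = np.linalg.norm(pred[0] - pred[1])
    d_gt = np.linalg.norm(gt[0] - gt[1])
    return pos_err + lam * (d_pred - d_gt) ** 2

gt = np.array([[10.0, 20.0], [40.0, 60.0]])
pred = np.array([[11.0, 19.0], [39.0, 62.0]])
print(loss_with_distance_correction(pred, gt))
```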

    • New scheme for decoy state quantum key distribution with the two-mode state source

      2018, 50(11):74-82. DOI: 10.11918/j.issn.0367-6234.201709076


      Abstract: A new and universal scheme for passive decoy-state quantum key distribution (QKD) with a two-mode state source is proposed. One of the two modes, the trigger state, yields four types of detection events after splitting and detection at the transmitter. Based on these events, the other mode, serving as the signal state, is divided into four sets of pulses that can be used to estimate parameters and extract the key. The performance of the scheme is analyzed with a heralded single-photon source (HSPS) and a heralded pair coherent state (HPCS), the impact of different detection efficiencies is discussed, and the statistical fluctuation in a practical system is studied numerically. Our simulation results show that the performance of the scheme is superior to existing three-intensity decoy-state QKD schemes with different sources in terms of bit error rate and secure transmission distance (up to 198.6 km). The performance with HPCS is better than with HSPS, and the key generation rate increases with rising detection efficiency at the transmitter. When statistical fluctuation is considered, the efficiency of HPCS is again better than that of HSPS, and the maximum secure distance of the scheme can reach 164 km for a data length of 10^9. Furthermore, the scheme needs only a single intensity pulse, which reduces the difficulty of system implementation while improving system performance, offering a useful reference for the implementation of practical QKD systems.

    • A fast all-digital phase-locked loop design with a high-resolution TDC

      2018, 50(11):83-88. DOI: 10.11918/j.issn.0367-6234.201803148


      Abstract: To address the problems that time-to-digital converters (TDC) suffer from low resolution and all-digital phase-locked loops (ADPLL) take a long time to lock to the reference signal, this paper proposes a fast-locking ADPLL based on a high-precision TDC. The new TDC employs a tapped delay line method and a double-channel differential delay line method to improve quantization accuracy, and uses a symmetric hierarchical structure to quantize negative time intervals. Meanwhile, the proposed phase modulation circuit recovers the quantized pulse signal into a time span signal, and advances or delays the phase of the feedback signal via a state machine to achieve fast locking onto the reference signal. Moreover, a falling-edge detection circuit turns off the phase frequency detector and the TDC, reducing the power consumption of the entire circuit at the proper time. Simulation and verification were carried out on the Xilinx KC705 development kit, and power consumption was compared between the new design and a traditional vernier-chain ADPLL using the Xpower software. The results show that the quantization error of the new ADPLL is restricted to within 0.2 ns, and the feedback signal can lock onto the reference signal within three reference clock periods. The power consumption of the overall circuit is cut by approximately 18.1% compared with the traditional vernier-chain ADPLL. The proposed ADPLL has the advantages of strong real-time performance, fast locking, high quantization precision, and low power consumption, making it well suited to modern digital communication systems that demand high speed and low power consumption.

    • Correcting ICP-SLAM strategy with double loop closure detection

      2018, 50(11):89-93. DOI: 10.11918/j.issn.0367-6234.201708086


      Abstract: With the rapid development of autonomous driving and artificial intelligence, SLAM has received more and more attention. However, much SLAM research has not been widely applied in daily life and production because of errors in the algorithms and sensors involved. These errors accumulate over time and space and directly result in deformation of the map, which severely limits the practical application of SLAM. Drawing on the loop closure detection of visual SLAM and the advantages of lidar, a corrected ICP-SLAM strategy with double loop closure detection is presented to deal with the accumulated errors. The strategy first extracts and matches the point cloud data collected by lidar, then analyzes the transformation matrix produced by ICP to determine loop closure, and finally uses the mean distribution method to correct the map deformation. Experimental verification shows that the accuracy of loop closure detection is improved, being about 13.33% higher than that of a deep neural network; the strategy can estimate the accumulated error that leads to map deformation; and it can effectively correct the map deformation caused by ICP-SLAM, with a 54.61% improvement of the deformation in the rotation direction.
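
      A minimal sketch of the mean distribution idea: spreading the detected loop closure residual evenly along the trajectory (a simple linear-interpolation reading of the abstract):

```python
import numpy as np

def distribute_loop_error(poses, loop_error):
    """Evenly distribute the accumulated loop closure error over a trajectory.

    poses: (N, 3) array of (x, y, theta) poses from ICP odometry
    loop_error: (3,) residual between the loop's start and end pose.
    Pose i receives the fraction i/(N-1) of the total correction, so the
    start is untouched and the loop end is fully corrected.
    """
    n = len(poses)
    fractions = np.linspace(0.0, 1.0, n)[:, None]   # 0 at start, 1 at loop end
    return poses - fractions * loop_error[None, :]

poses = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [2.0, 0.2, 0.1]])
loop_error = np.array([0.0, 0.2, 0.1])              # drift detected at closure
print(distribute_loop_error(poses, loop_error))
```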

    • The application of stacked denoising autoencoders in waveform unit identification

      2018, 50(11):94-100. DOI: 10.11918/j.issn.0367-6234.201706021


      Abstract: To improve the accuracy and robustness of multi-function radar (MFR) waveform unit identification, a waveform unit identification method combining Stacked Denoising Autoencoders (SDAE) with a Support Vector Machine (SVM) is proposed. The traditional MFR signal processing method, which relies on pulse sequence analysis, is abandoned. An MFR waveform unit segmental identification model is proposed by analyzing the structure of the waveform unit and the joint variation characteristics of its parameters. With this model, the traditional identification of pulse sequences is converted into identification of MFR waveform units. On the basis of this model, the SDAE algorithm is introduced: noise is added to the training sample data and the SDAE hidden layer nodes, the SDAE network is trained on the training samples, and deep robust features of the sample data are extracted. Lastly, the SVM algorithm is introduced: using the deep features output by the SDAE, the SVM model is optimized and the final waveform unit identification model is obtained. Simulation results show that, under the same sample number and test error conditions, the proposed method achieves high recognition accuracy and better identification results than the SVM algorithm alone, verifying the MFR waveform unit segmental identification model. Moreover, through the introduction of the SDAE, the proposed SDAE-SVM method can autonomously mine the deep features of the original data, improving the robustness and accuracy of waveform unit identification.

    • Massive face image retrieval based on depth feature clustering

      2018, 50(11):101-109. DOI: 10.11918/j.issn.0367-6234.201803047


      Abstract: A massive face image retrieval method based on deep feature clustering is proposed to overcome long retrieval times in huge face image databases. First, a convolutional neural network model is trained for face classification on a face image training set. On this basis, the triplet loss method is used to fine-tune the model so that the network can extract high-level semantic features more effectively and construct more representative deep features of face images. Second, the K-means clustering algorithm is used to cluster the extracted deep features so that face images of the same person fall into the same cluster; similarity matching of face images is then performed within the corresponding cluster to carry out the retrieval task. To further improve retrieval performance, a face image feature fusion query expansion method is proposed to fuse the deep features of the face image to be retrieved. Exhaustive experimental verification on two face retrieval datasets (Celebrities Face Set and the Labeled Faces in the Wild dataset) shows that the proposed method can significantly reduce the retrieval range in a massive face image database, effectively increasing face image retrieval speed while maintaining similar retrieval accuracy.
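
      A minimal sketch of the cluster-then-search step using K-means over stand-in deep features; the feature dimension, cluster count, and cosine matching are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_index(features, n_clusters=8):
    """Cluster database face features so a query only searches one cluster."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit(features)

def search(km, features, query, top_k=5):
    """Retrieve nearest faces inside the query's cluster (cosine similarity)."""
    cluster = km.predict(query[None, :])[0]
    idx = np.where(km.labels_ == cluster)[0]        # candidates in this cluster only
    cand = features[idx]
    sims = cand @ query / (np.linalg.norm(cand, axis=1) * np.linalg.norm(query))
    return idx[np.argsort(-sims)[:top_k]]

# Toy data standing in for CNN face embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 128))
km = build_index(feats)
print(search(km, feats, feats[42]))                  # index 42 should rank first
```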

    • File delivery protocol based on fountain codes for Beta-Binomial channels with erasures

      2018, 50(11):110-115. DOI: 10.11918/j.issn.0367-6234.201711129


      Abstract: Because NDN networks are characterized by acyclic-graph topology and hop-by-hop packet delivery with a reduced packet transmission success rate, ARQ and ACK mechanisms are no longer applicable to multicast sessions. Since the transmission channel in NDN can be treated as a binary erasure channel, reliable file transmission must be achieved through application-layer coding. Traditional channel coding techniques such as convolutional codes, concatenated codes, and RS codes are highly complex, while the combination of NDN and fountain codes can realize a distributed storage architecture; a reliable error-correction mechanism can therefore be built on an application-layer protocol encoded with fountain codes to ensure overall file transmission reliability. Existing studies are generally based on a channel with a deterministic erasure probability; however, due to network heterogeneity and channel noise, the packet loss probability may be randomly distributed. Based on Bayesian prior information and the central limit theorem, this paper theoretically derives a reliable file transfer protocol for erasure channels with random erasure probability based on the Beta-Binomial distribution model. Simulations verify the following results: the proposed model is more universal; the file delivery protocol can theoretically determine the minimum number of packets when the channel state is unknown and reduce redundant encoded packets; and it can therefore improve the overall file delivery success rate and the transmission efficiency of the protocol while guaranteeing transmission reliability.
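
      A Monte Carlo sketch of determining the minimum packet count under the Beta-Binomial channel model; the paper derives this analytically via the central limit theorem, so this simulation is only a stand-in:

```python
import numpy as np

def min_packets(k, a, b, target=0.99, trials=20000, rng=None):
    """Estimate the minimum n so that at least k packets survive with
    probability >= target, when the loss probability p is Beta(a, b)
    distributed and losses are Binomial(n, p) given p (the Beta-Binomial
    channel model). Decoding overhead of the fountain code is ignored.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = k
    while True:
        p = rng.beta(a, b, size=trials)
        survivors = rng.binomial(n, 1.0 - p)        # delivered packets per trial
        if np.mean(survivors >= k) >= target:
            return n
        n += 1

# A decoder needing k = 100 source packets on a channel whose mean loss
# probability is a / (a + b) = 0.1:
print(min_packets(k=100, a=1.0, b=9.0))
```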

    • Heterogeneous cost oriented cloud resource scheduling algorithm for stochastic demand

      2018, 50(11):116-121. DOI: 10.11918/j.issn.0367-6234.201711104


      Abstract: A distributed cloud is recommended for constructing high-resource-consumption systems such as streaming media services: it not only meets the requirements of multi-region deployment but can also make full use of cloud resources to ensure service quality while controlling the system budget. Given that cost functions differ among cloud centers in different regions, a heterogeneous cost model should be introduced into distributed-cloud-oriented scheduling. Given the highly dynamic and random characteristics of user requests in streaming media applications, the goal is to satisfy as many user requests as possible under a given cost budget. The mean demand model ignores changes in resource requirements over short time intervals and leads to inefficient use of resources. To overcome its disadvantages, we use a stochastic model to capture fine-grained information about resource demand, use a common cost function to describe the heterogeneous cost model, and formulate a general nonlinear programming problem. To reduce algorithmic complexity, a lower bound on the solution is quickly obtained by dynamic programming, and a near-optimal solution is then obtained by iterative approximation. The simulation results show that, compared with the classical scheduling algorithm based on the mean demand model, the proposed algorithm can satisfy an additional 15% of user requests when the number of regions is large, and up to nearly 40% more user requests as the budget decreases. The algorithm is not affected by differences in price functions or user request statistics across regions. As a result, when used in globally deployed large-scale streaming media systems, the proposed algorithm can significantly increase the number of satisfied user requests with limited computing time, and is thus suitable for a wide range of cloud infrastructure service providers.

    • Prediction method of permanent deformation of asphalt pavement under full temperature conditions

      2018, 50(11):122-130. DOI: 10.11918/j.issn.0367-6234.201711084


      Abstract: To predict the permanent deformation of asphalt pavement accurately and quickly, a prediction method based on the superposition principle is established over the full temperature domain. Taking the Jilian section of the Quanzhou-Nanning Highway in Jiangxi Province as the test section and viewing the problem from the perspective of the full temperature field, the method introduces the idea of zoning, classification, and layering: temperature zoning, axle load classification, and pavement layering. The annual temperature distribution of the asphalt pavement, the temperature distribution curve along the pavement depth, and the annual axle load distribution are obtained by field measurement and model calculation. Based on triaxial load tests, a viscoelastic permanent strain correction model is established for three asphalt mixtures (SMA-13, AC-20, AC-25), and the average correction factor for each temperature range is determined, which can be used to calculate the permanent strain of asphalt mixtures under different temperatures and deviatoric stresses. The permanent deformation of the asphalt pavement surface is calculated by two methods: origin growth accumulation and counter-calculating cumulative superposition. The results show that the allowable permanent deformation life of the Jilian section of the Quanzhou-Nanning Highway is 6 years by the origin method and 12 years by the counter-calculating method; the counter-calculating method is more accurate and closer to reality. At the same time, a new modified model for predicting the permanent deformation of asphalt pavement based on the superposition principle is proposed for the full temperature range: based on the temperature distribution across the full temperature range, relation equations are established between the number of loads and the average permanent deformation within each temperature range, and among the permanent deformation, the average permanent deformation within each temperature range, and the temperature distribution frequency. This provides a new method for rapid prediction of the permanent deformation of asphalt pavement under full temperature conditions.
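
      A plain reading of the final relation equation as code: total deformation as the frequency-weighted sum of per-temperature-range average deformations (values are illustrative; the paper's equations also involve load counts):

```python
def pavement_deformation(freq_by_temp, avg_def_by_temp):
    """Superposition estimate: total permanent deformation as the
    frequency-weighted sum of per-temperature-range average deformations.
    A simplified reading of the abstract's relation equation."""
    return sum(freq_by_temp[t] * avg_def_by_temp[t] for t in freq_by_temp)

# Toy example: three temperature ranges with annual occurrence frequencies.
freq = {"low": 0.3, "mid": 0.5, "high": 0.2}
avg_def_mm = {"low": 0.5, "mid": 1.2, "high": 3.0}   # illustrative values, mm
print(pavement_deformation(freq, avg_def_mm))
```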

    • Relative orientation calibration of a depth camera and an inertial measurement unit

      2018, 50(11):131-136. DOI: 10.11918/j.issn.0367-6234.201801130


      Abstract: To solve the problem that the relative orientation between a depth camera and an inertial measurement unit is difficult to measure directly, a non-contact calibration method is proposed that computes the relative orientation from displacement vectors constructed by sensing the same hand motion. First, the relative orientation calibration problem for a depth camera and an inertial measurement unit with a time-varying pose relationship is described and analyzed. Then the depth camera and the inertial measurement unit simultaneously capture the movement of a hand swinging in arbitrary directions in space, and displacement vectors are constructed. Finally, leveraging these displacement vectors, a model is built on the rotation invariance principle of a rigid body and solved with the least-squares method to obtain the calibration result. To verify the accuracy and effectiveness of the proposed method, on the one hand, the measured data were compared with white-noise simulation data; the simulation result shows that the relative orientation deviation after calibration is less than ±4°. On the other hand, an experiment capturing human arm motion with a depth camera and an inertial measurement unit was conducted; the experimental result shows that the parameters of the human arm could be correctly recovered only after calibration. The calibration method presented in this paper is simple in principle, easy to operate, and requires no contact measurement or other auxiliary equipment, making it applicable to sensor calibration in scenarios such as robot remote control and motion-sensing games.
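
      A minimal sketch of the least-squares rotation between the two sets of displacement vectors, using the classic SVD (Kabsch) solution; the paper's own derivation may differ in detail:

```python
import numpy as np

def relative_orientation(v_cam, v_imu):
    """Least-squares rotation R such that v_cam ≈ R @ v_imu.

    v_cam, v_imu: (N, 3) displacement vectors of the same hand motion observed
    by the depth camera and the inertial measurement unit.
    """
    H = v_imu.T @ v_cam                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Toy check: recover a known rotation from noisy displacement vectors.
rng = np.random.default_rng(1)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
v_imu = rng.normal(size=(50, 3))
v_cam = v_imu @ R_true.T + 0.01 * rng.normal(size=(50, 3))
print(np.allclose(relative_orientation(v_cam, v_imu), R_true, atol=0.05))
```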

    • A survey of medical knowledge graph based on EHR

      2018, 50(11):137-144. DOI: 10.11918/j.issn.0367-6234.201806001


      Abstract: Electronic health records (EHR), as a means of medical informatization, have stored and accumulated big data on medical processes and outcomes. The knowledge graph, as a means of extracting structured knowledge from massive data, has recently shown broad application prospects in many industries. Applying knowledge graph technology to the analysis of EHR data can help obtain valuable medical knowledge and experience, which could significantly improve the effectiveness and efficiency of the medical industry. This paper first introduces the development status of EHR, well-known existing EHR data sets, and their application results. Second, after summarizing current developments in knowledge graphs, it analyzes the trends and hot-spot migration of knowledge graphs in the medical field. It then summarizes the research and application progress of EHR-based medical knowledge graphs, including information extraction, data integration, query expansion, clinical diagnostic support, and disease prediction from EHR. Finally, the paper discusses future development directions and challenges of EHR-based medical knowledge graphs.

    • NSST and guided filtering for multi-focus image fusion

      2018, 50(11):145-152. DOI: 10.11918/j.issn.0367-6234.201805006


      Abstract: To further improve the contrast and sharpness of fused images, a multi-focus image fusion algorithm based on the non-subsampled shearlet transform (NSST) and guided filtering is proposed in this paper. First, multi-scale and multi-directional decomposition of the multi-focus source images is performed using the NSST. For the low-frequency sub-band coefficients, initial fusion weights are constructed by calculating the local-region Sum-Modified-Laplacian energy sum and are then corrected by the guided filter, yielding a weighted fusion rule based on the local-region Sum-Modified-Laplacian energy sum and guided filtering. For the high-frequency sub-band coefficients, in combination with human visual characteristics, initial fusion weights are constructed from the saliency information, the local-region average gradient, edge information, and the local-region Sum-Modified-Laplacian energy sum; these initial weights are likewise modified by guided filtering, yielding a guided-filtering weighted fusion rule based on human visual characteristics. Finally, the inverse NSST is applied to produce the fused image. Simulation results on four groups of multi-focus source images demonstrate that, in both subjective and objective evaluation, the proposed algorithm not only preserves details of the source images such as edge contours and texture, but also improves the contrast and clarity of the fused image.
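
      A minimal sketch of the local-region Sum-Modified-Laplacian (SML) activity measure used for the low-frequency weights; the window size is an assumed parameter:

```python
import numpy as np

def sum_modified_laplacian(img, win=3):
    """Local-region Sum-Modified-Laplacian (SML) focus measure.

    Computes the modified Laplacian |2I - I_left - I_right| + |2I - I_up - I_down|
    at each pixel (with wrap-around at the borders, adequate for a sketch),
    then sums it over a win x win neighborhood.
    """
    I = img.astype(np.float64)
    ml = (np.abs(2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1))
          + np.abs(2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)))
    pad = win // 2
    mlp = np.pad(ml, pad, mode="edge")
    out = np.zeros_like(I)
    for dy in range(win):                    # box-sum over the local window
        for dx in range(win):
            out += mlp[dy:dy + I.shape[0], dx:dx + I.shape[1]]
    return out

# The source image with the larger SML at a pixel is taken as in-focus there.
a = np.random.rand(64, 64); b = np.random.rand(64, 64)
weight_a = sum_modified_laplacian(a) > sum_modified_laplacian(b)
print(weight_a.mean())
```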

    • Study on spinning forming process of multi-wedge belt pulley with transverse outer ribs

      2018, 50(11):153-159. DOI: 10.11918/j.issn.0367-6234.201803060


      Abstract: To address the difficulty of forming the ribs of a multi-wedge belt pulley (MWBP) with transverse outer ribs, and of forming three ribs simultaneously, the multi-step spinning forming process of such a pulley was studied with the SIMUFACT software. A finite element model of spinning forming was established based on an analysis of the spinning deformation. The stress field distribution, the deformation of the sheet metal, and the metal flow of the sheet during the 1st to 4th spinning steps were analyzed. The study summarized the overall quality of parts under different spinning process parameters and showed that the roller feed speed and friction have an important impact on the filling quality of the ribs and the overall quality of the parts. A three-factor, three-level orthogonal experiment was numerically simulated with roller feed speed, friction factor, and rotation rate as the independent variables, and data on the maximum forming load and the thickening rate of the transverse ribs were obtained from the orthogonal experiment. Grey system theory was used to optimize the parameters of the spin-thickening forming of the MWBP with transverse outer ribs; a smaller roller feed speed and lower friction ensure the filling quality of the ribs and the overall quality of the parts. Experiments were conducted on a CDC-S100E/R4 spinning machine. The results showed that the original plate was thickened from 3.0 mm to 3.4 mm and the rib region was thickened to 6.9 mm, verifying the feasibility of the optimization scheme based on grey system theory.

    • Speech emotion estimation in PAD 3D emotion space

      2018, 50(11):160-166. DOI: 10.11918/j.issn.0367-6234.201806131


      Abstract: The discrete emotional description model labels human emotions with discrete adjectives and can only represent a limited number of single, explicit emotions. The dimensional emotional model quantifies the implied states of complex emotions along multiple dimensions. In addition, the conventional speech emotion feature, the Mel Frequency Cepstral Coefficient (MFCC), neglects the correlation between spectral features of adjacent frames due to frame-division processing, making it susceptible to the loss of much useful information. To solve this problem, this paper proposes an improved method that extracts a time firing series feature and a firing position information feature from the spectrogram to supplement the MFCC, and applies each of them to speech emotion estimation. Based on the predicted values, the proposed method calculates the correlation coefficients of each feature along the three dimensions P (Pleasure-displeasure), A (Arousal-nonarousal), and D (Dominance-submissiveness) as feature weights, obtains the final PAD values of the emotional speech after weighted fusion, and finally maps them into the PAD 3D emotion space. The experiments showed that the two added features not only detect the emotional state of the speaker but also account for the correlation between spectral features of adjacent frames, complementing the MFCC features. Beyond improving discrete estimation of basic emotion types, the method represents the estimation results as coordinate points in the PAD 3D emotion space, quantitatively reveals the positions of and relations between various emotions in the emotion space, and indicates the emotional content mixed into the emotional speech. This study lays a foundation for subsequent research on the classification and estimation of complex speech emotions.
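
      A minimal sketch of the correlation-weighted fusion for one PAD dimension; using normalized Pearson correlations as weights is a plain reading of the abstract, and the normalization is an assumption:

```python
import numpy as np

def fuse_predictions(preds, labels):
    """Fuse per-feature PAD predictions with correlation-based weights.

    preds: dict feature_name -> (N,) predicted values for one PAD dimension
    labels: (N,) reference values for that dimension (e.g., validation labels)
    Each feature's weight is its Pearson correlation with the labels,
    clipped at zero and normalized to sum to 1.
    """
    names = list(preds)
    r = np.array([np.corrcoef(preds[n], labels)[0, 1] for n in names])
    w = np.clip(r, 0, None)
    w = w / w.sum()                           # normalized feature weights
    fused = sum(w_i * preds[n] for w_i, n in zip(w, names))
    return fused, dict(zip(names, w))

N = 200
rng = np.random.default_rng(2)
labels = rng.normal(size=N)
preds = {"mfcc": labels + 0.5 * rng.normal(size=N),
         "firing_series": labels + 0.3 * rng.normal(size=N)}
fused, weights = fuse_predictions(preds, labels)
print(weights)   # the better-correlated feature gets the larger weight
```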

    • Transmission time analysis of wireless network slicing based on fountain codes

      2018, 50(11):167-170. DOI: 10.11918/j.issn.0367-6234.201805014


      Abstract: To increase throughput and reduce feedback latency in wireless network slicing transmission, an error correction scheme based on retransmission-free fountain codes is proposed. The scheme uses fountain codes to replace traditional automatic repeat request (ARQ) in network slicing transmission, and an improved slicing method for the error correction scheme is also given. Because it requires no retransmission and adds little redundancy, the scheme reduces both complexity and transmission latency for network slicing, and it raises throughput as the packet loss probability increases, giving a highly efficient network slicing transmission scheme. Analysis and simulation were carried out for both fountain code-based and ARQ-based transmission. Simulation showed that the transmission latency with fountain codes was lower than that with ARQ when the packet loss probability exceeded 5×10⁻². As the packet loss probability in the wireless network increased, the advantage became more evident, and the latency could be reduced by up to 22% at a loss probability of 10⁻¹.

    • Revised IRI prediction model for cement pavement in seasonally frozen regions using MEPDG

      2018, 50(11):171-177. DOI: 10.11918/j.issn.0367-6234.201709056


      Abstract: To address the problem of roughness prediction for cement pavement in seasonally frozen regions, a revised international roughness index (IRI) prediction model for cement pavement in such regions was constructed using MEPDG. The model takes into account traffic conditions, climatic conditions, and the material characteristics of the pavement layers. The trends of CRK, TFAULT, SPALL, and SF were analyzed, and their correlation with IRI was verified on the basis of the original prediction model. SPSS analysis software was used to obtain the revised prediction model for the cement pavement roughness index in seasonally frozen regions, and cement pavement survey data from Heilongjiang Province were used to verify it. The results show that climate, traffic, and material parameters have a significant influence on the four indexes CRK, TFAULT, SPALL, and SF. These indicators were linearly fitted to the international roughness index with high reliability. Traffic volume, precipitation, rainfall, the number of wet days, the number of freeze-thaw cycles, and the performance of pavement materials can be used to predict the international roughness index of cement pavement in seasonally frozen regions. The accuracy of the model is high, with good practicability and strong predictive performance.

    • Reactive characteristics of sulfur dioxide in the application of calcium chloride absorption method

      2018, 50(11):178-184. DOI: 10.11918/j.issn.0367-6234.201712012


      Abstract: The main objectives are to explore the possible SO2 conversion reactions under the conditions of the calcium chloride absorption method (CCAM) and the effects of these reactions on SO3 measurement by the CCAM. A vertical-horizontal tube furnace and a variable-controlling approach were used to study the influences of water vapor, oxygen atmosphere, temperature, and reactant concentration on the chemical reaction between SO2 and CaCl2. On this basis, the possible chemical reaction mechanisms of SO2 with CaCl2 and their effects on SO3 measurement were analyzed theoretically. The results showed that SO2 does not undergo homogeneous reactions in the absence of CaCl2. Under typical CCAM conditions, an appreciable chemical reaction between SO2 and CaCl2 occurs only when H2O and O2 are present simultaneously. The conversion rate of SO2 increases with temperature and reactant concentration and is approximately 2% to 8%. Finally, theoretical analysis demonstrated that the heterogeneous reaction is extremely weak at 200 ℃ under a typical sampling atmosphere, and that this reaction is severely inhibited in the presence of a high concentration of gaseous H2SO4. Therefore, the interference of SO2 with SO3 measurement can be ignored, which proves the feasibility of the CCAM to some extent.

    • Simulation and experiment of the stiffness of tendon connector flexjoint

      2018, 50(11):185-191. DOI: 10.11918/j.issn.0367-6234.201711069


      Abstract: Five tendon connector flexjoints with different rubber materials or structures were simulated by FEM to study the effect of these parameters on flexjoint stiffness, and the flexjoints were also tested under compression and rotation loads. Material tests were performed on the specified rubber, and a half 3D model and a 2D model were built in Abaqus/Standard. The Neo-Hooke, Mooney-Rivlin, and Yeoh constitutive models were used in the simulations. The experimental and simulation results were compared to analyze the validity and accuracy of the different constitutive models. The results show that the compression stiffness of the flexjoints increases as displacement increases, while the rotational stiffness decreases as the angle increases. Increasing the number of rubber layers improves the compression stiffness but has little effect on the rotational stiffness. The simulation results with the Yeoh model fit the experimental results very well. The Mooney-Rivlin model can also provide relatively accurate results under small deformation, but its compression simulation results are smaller than the test results because the model cannot reflect the "upturn" of the rubber material. The Neo-Hooke model produces the largest error.
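
      For reference, the standard incompressible strain-energy densities of the three models explain the observed behavior: the Yeoh model's higher-order terms in the first invariant let it capture the stiffening "upturn" at large strain, which the Mooney-Rivlin form (linear in the invariants) cannot. Material constants C_ij are fit to the rubber tests.

```latex
W_{\text{neo-Hooke}} = C_{10}(\bar I_1 - 3), \qquad
W_{\text{Mooney-Rivlin}} = C_{10}(\bar I_1 - 3) + C_{01}(\bar I_2 - 3), \qquad
W_{\text{Yeoh}} = \sum_{i=1}^{3} C_{i0}(\bar I_1 - 3)^{i}
```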

    • Study on hydrodynamic force and propulsive efficiency of flexible flapping foils

      2018, 50(11):192-198. DOI: 10.11918/j.issn.0367-6234.201803051


      Abstract: To investigate the hydrodynamic performance of flexible flapping foils and provide guidance for the design of bio-inspired propulsors, numerical simulations of the unsteady flow around a three-dimensional (3D) flexible foil were performed with an in-house immersed boundary method CFD code. The effect of the complex moving deformation boundary on the flow over a fixed Cartesian grid is imposed by a ghost-cell-based boundary condition reconstruction algorithm. A numerical method for calculating propulsive efficiency, applicable to both rigid and flexible flapping foils, is proposed from an energy perspective. Moreover, the effect of various parameters on the propulsive performance of the 3D flexible foil is systematically investigated. The results indicate that the propulsive efficiency of the flexible flapping foil is higher than that of its rigid counterpart when the deformation parameter ε lies between 2.0 and 3.5 with a chordwise deformation coefficient δ of 0.1. The variation of the flexible foil's performance with the deformation parameter depends on the vortex dynamics underlying force production, including the control of vortex shedding by the trailing-edge deformation and the mechanism of spanwise transport of vorticity. The mean thrust of the flexible flapping foil increases monotonically with the dimensionless frequency k, consistent with the experimental observation that fish propelled by pectoral fins enhance thrust by increasing the flapping frequency of the pectoral fins. There exists an optimal k that maximizes the propulsive efficiency of the flexible foil, and the simulation reproduces the experimentally observed characteristic of fish propelled by the caudal fin, i.e., they cruise at a Strouhal number tuned for optimal efficiency.
