• Volume 51, Issue 11, 2019 Table of Contents
    • Research on recognition method of transportation modes based on deep learning

      2019, 51(11):1-7. DOI: 10.11918/j.issn.0367-6234.201902039

      Abstract (2606) HTML (571) PDF 2.10 M (1526)

      Abstract:Resident travel information reflects residents' activity routines and urban traffic problems, and is an important basis for formulating transportation planning and management. Although trajectories acquired by GPS contain rich spatio-temporal information, they cannot directly express transportation modes: data processing and mining algorithms are needed to extract the hidden knowledge and infer the mode, and recognition is challenging because residents' travel patterns are highly non-linear and complex. In this study, the advantages of deep learning were utilized to handle features that are difficult to compute or extract. After pre-processing the trajectory information, kinematic features of the trajectory segments were calculated to form the input data. A method combining a convolutional neural network with a gated recurrent unit was proposed to recognize transportation modes. It exploits the convolutional neural network's ability to characterize deep features and the gated recurrent unit's ability to mine time-series characteristics, improving the learning ability on non-linear classification problems and increasing the accuracy of transportation mode recognition. To verify the effectiveness of the proposed method, separate convolutional neural network and gated recurrent unit models were designed, tested, and compared on the published GeoLife dataset. Experimental results show that although the proposed method used only four features, it still achieved good recognition results, and it outperformed a convolutional neural network alone and other classification methods.
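      The abstract's "kinematic features of the trajectory segments" can be illustrated with a minimal sketch. The specific feature set is an assumption here (per-step speed and acceleration from timestamped GPS points); the paper's four features may differ.

```python
import math

def haversine_m(p1, p2):
    # Great-circle distance in meters between two (lat, lon) points.
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def kinematic_features(points):
    """points: list of (lat, lon, t_seconds) -> (per-step speeds, accelerations)."""
    speeds = []
    for a, b in zip(points, points[1:]):
        dt = b[2] - a[2]
        speeds.append(haversine_m(a[:2], b[:2]) / dt)
    # Finite-difference acceleration between consecutive speed estimates.
    accels = [(v2 - v1) / (p2[2] - p1[2])
              for v1, v2, p1, p2 in zip(speeds, speeds[1:], points, points[1:])]
    return speeds, accels
```

      These per-segment sequences would then be stacked into the input tensor for the CNN-GRU model.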

    • Two-stage breast cancer diagnosis system based on ultrasound and mammogram images

      2019, 51(11):8-15. DOI: 10.11918/j.issn.0367-6234.201904005

      Abstract (1609) HTML (221) PDF 2.57 M (1010)

      Abstract:The incidence of breast cancer continues to rise worldwide. Due to the heterogeneity of breast cancer, benign and malignant tumor images overlap, so a single image modality cannot achieve satisfactory classification results. This paper proposes a two-stage breast cancer diagnosis system based on ultrasound and mammogram images. In the first stage, an abstaining classification method is applied to breast ultrasound (BUS) images: BUS tumors are classified only when confidence is high, and uncertain tumors are left unclassified. These unclassified tumors are then classified using mammogram images in the second stage. Supplemented by mammogram information, the system diagnoses breast cancer by exploiting multimodal image information to screen tumors that ultrasound alone cannot recognize. The ultrasound and mammogram images used in this study were provided by the Cancer Hospital of Harbin Medical University and the First Affiliated Hospital of Harbin Medical University. The abstaining method and the two-stage diagnosis system were validated in experiments. Compared with diagnostic systems using only BUS features, the proposed two-stage system performed better, with an accuracy of 92.59%, AUC of 0.9333, G-mean of 93.09%, sensitivity of 86.67%, specificity of 100%, positive predictive value of 100%, negative predictive value of 85.71%, and Matthews correlation coefficient of 0.8619. Experimental results demonstrate that adding mammogram information improves the performance of a diagnosis system that uses BUS images only.
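      The abstain-then-defer control flow can be sketched as follows. The confidence thresholds and the probability inputs are illustrative assumptions, not the paper's classifiers.

```python
def two_stage_predict(p_bus_malignant, p_mam_malignant, low=0.2, high=0.8):
    """Stage 1: classify the BUS image only when confidence is high;
    otherwise abstain and fall back to the mammogram-based stage 2."""
    if p_bus_malignant >= high:
        return "malignant", "BUS"
    if p_bus_malignant <= low:
        return "benign", "BUS"
    # Abstain: the BUS classifier is uncertain, defer to the mammogram.
    label = "malignant" if p_mam_malignant >= 0.5 else "benign"
    return label, "mammogram"
```

      The key design point is that stage 2 only ever sees the cases stage 1 declined, so high-confidence BUS predictions are never overridden.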

    • An accrual failure detector in cloud computing

      2019, 51(11):16-21. DOI: 10.11918/j.issn.0367-6234.201903149

      Abstract (1680) HTML (168) PDF 2.66 M (998)

      Abstract:To better address the problem that failure detection performance in cloud computing is affected by the dynamics of the network environment, a new adaptive accrual failure detector (Two Windows Accrual Failure Detector, 2WA-FD) was proposed. First, two groups of real data from two network conditions were analyzed, showing that the Weibull distribution is a more reasonable assumption for heartbeat inter-arrival times; under the Weibull assumption, the suspicion level of the accrual failure detector is more accurate. Second, the framework of the accrual failure detector was analyzed and improved, with the suspicion level calculated over two sliding windows; this framework is well suited to dynamic network conditions. Finally, the 2WA-FD and other failure detectors were tested on open-source experimental data and on our experimental platform. The experimental results show that the 2WA-FD achieves lower detection time and higher detection accuracy under the same detection overhead. Thus, the 2WA-FD can accurately and quickly detect node failures in cloud computing and effectively reduce the influence of network dynamics on detection performance.
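      A standard accrual suspicion level is phi(t) = -log10 P(heartbeat arrives later than t). Under the Weibull assumption the survival function is S(t) = exp(-(t/lambda)^k), so phi has a simple closed form. This is a generic sketch of that calculation, not the 2WA-FD's two-window estimator.

```python
import math

def weibull_suspicion(t_since_last, shape_k, scale_lam):
    """Accrual suspicion phi(t) = -log10(S(t)) for a Weibull model of
    heartbeat inter-arrival time, S(t) = exp(-(t/lam)^k).
    -log10(exp(-x)) = x / ln(10)."""
    if t_since_last <= 0:
        return 0.0
    return (t_since_last / scale_lam) ** shape_k / math.log(10)
```

      A monitor would compare phi against an application-chosen threshold: larger thresholds mean slower but more accurate suspicion, which is exactly the detection time / accuracy trade-off the paper evaluates.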

    • Overlapped Local Gaussian Process Regression

      2019, 51(11):22-26. DOI: 10.11918/j.issn.0367-6234.201904056

      Abstract (1575) HTML (869) PDF 3.09 M (1631)

      Abstract:Gaussian processes (GPs) are distributions over functions and are commonly used for regression in the machine learning community. For n training samples, their time complexity for training and prediction is O(n^3) and O(n^2), respectively. The high computational cost hinders their application to large datasets. Inspired by the divide-and-conquer strategy, the paper proposed a simple but efficient approximate model called Overlapped Local Gaussian Process (OLGP). The method assumes that, given the nearest neighbors, the random variables are independent of farther ones. The training samples are recursively divided, eventually forming a ternary tree. Sibling nodes have intersections whose samples play the role of inducing points that model the dependency between neighboring regions. Each leaf node is associated with a local GP regression model. The evidence and predictive distribution in parent nodes are composed from those of their children, which reduces the computational cost significantly. Furthermore, the fitted function is guaranteed to be continuous. A theoretical analysis shows that the time complexity for training and prediction reduces to O(n^t) for n training samples, where t depends on the proportion of the intersection at each level and is usually between 1 and 2. The paper demonstrated the effectiveness of the method through speed-accuracy performance on several real-world datasets.

    • Influence of fraction of alloy element on mechanical properties of copper-lead alloy materials during nano-tensile process

      2019, 51(11):27-34. DOI: 10.11918/j.issn.0367-6234.201810007

      Abstract (1530) HTML (203) PDF 10.53 M (1190)

      Abstract:Owing to the high plasticity and strength of copper and the self-lubricating function of lead, copper-lead alloys are excellent antifriction wear materials that have been tested and proven in practice and are widely used in precision machinery and aerospace. To study the effect of alloy element fraction on the mechanical properties of copper-lead alloy during the nano-tensile process, a large-scale molecular dynamics simulation model of polycrystalline copper was constructed by the Poisson-Voronoi method and the Monte Carlo method, and the hybrid Monte Carlo/molecular dynamics (hybrid MC/MD) method was adopted to build the copper-lead alloy model. Following real copper-lead materials, models of copper-lead alloys with different element fractions and of polycrystalline copper were established. The nano-tensile processes at different element fractions were simulated by molecular dynamics, and the coordination number, internal stress, and atomic potential energy were calculated. The results show significant regularities in the nano-tensile process of copper-lead alloy across element fractions. The hydrostatic pressure and atomic potential energy distributions are similar between the copper-lead alloy and polycrystalline copper, and lead atoms suppress dislocation at the grain boundaries of the alloy, making the grain boundary interface structure more stable. The variations of the potential energy of the grain cell and of the grain boundary interface during plastic deformation are opposite. The fraction of lead atoms mainly affects the grain boundary interface state, and the grain boundary interface plays the major role in plastic deformation. Therefore, the properties of the alloy can be tuned by changing the element fractions in the copper-lead alloy. These results provide theoretical guidance for the preparation of high-performance copper-lead alloy materials.

    • Thermal deformation and accuracy analysis of composite grid reflector structure

      2019, 51(11):35-39. DOI: 10.11918/j.issn.0367-6234.201809175

      Abstract (1781) HTML (259) PDF 2.09 M (1061)

      Abstract:To meet the high-precision and light-weight requirements of reflectors in satellite communications, the ABAQUS finite element software was used to simulate the thermal deformation of a composite reflector, and the RMS value of the profile accuracy was calculated by the least squares method, against the background of a geosynchronous-orbit meteorological satellite reflector. A full composite grid reflector structure was designed using the in-plane thermal expansion coefficient of the M55 carbon fiber composite laminate. Under the same working conditions, an industrial three-dimensional measurement method was used to test the thermal deformation of the grid reflector structure, and the root mean square (RMS) value of the deformation of the working face was used to characterize the profile accuracy; the test value was approximately equal to the simulated value. Finally, factors affecting the RMS value of the profile accuracy were analyzed, including assembly error, bonding method, glue layer thickness, and layup angle of skin and ribs. Results show that the bonding method and the glue layer thickness were the most important factors affecting the profile accuracy of the full composite grid reflector, and that point bonding was more suitable for bonding the skin to the grid structure. With increasing glue layer thickness and assembly error, the profile error grew in a nonlinear trend. Compared with a carbon fiber aluminum honeycomb sandwich reflector under the same working conditions and the same macroscopic structural size, the profile accuracy of the full composite grid reflector was only 0.455 μm, an improvement of an order of magnitude. The design provides a reliable option for high-precision deep space exploration and signal transmission.
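      The "RMS value of the profile accuracy calculated by least squares" amounts to removing a best-fit reference surface from the measured deformation and taking the RMS of the residuals. A minimal sketch, assuming a best-fit plane as the reference surface (real reflector metrology would fit a paraboloid):

```python
import numpy as np

def profile_rms(x, y, z):
    """RMS profile error after removing a least-squares best-fit plane
    z = a*x + b*y + c from the measured surface deviations."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)   # least-squares fit
    residual = z - A @ coef
    return float(np.sqrt(np.mean(residual ** 2)))
```

      Removing the best-fit surface first matters: rigid-body tilt and piston of the reflector would otherwise dominate the RMS and mask the true figure error.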

    • Fast array direction finding algorithm based on real-valued closed-form rooting

      2019, 51(11):40-46. DOI: 10.11918/j.issn.0367-6234.201904021

      Abstract (1391) HTML (310) PDF 1.72 M (1084)

      Abstract:The multiple signal classification (MUSIC) algorithm with peak searching is a classical direction finding algorithm that is widely used because of its good parameter estimation performance, but it requires a huge amount of computation, which increases the complexity and development cost of direction finding systems. In contrast, the root-MUSIC algorithm, which uses polynomial rooting to obtain the target source direction information, can reduce the computational complexity of direction finding. However, root-MUSIC involves rooting a polynomial with complex coefficients, so its complexity is still high. To further reduce complexity, a novel fast array direction finding algorithm based on closed-form rooting of a real-coefficient polynomial was proposed. By using a coordinate mapping technique together with the fact that the derivative of the MUSIC spectrum equals zero at its extrema, a new real-coefficient polynomial with the same order as that of root-MUSIC was constructed. Since the roots of the polynomial are symmetric about the real axis, the new polynomial can be further decomposed into several quadratic polynomials by Bairstow's method. Consequently, the target source direction information can be estimated by finding the roots of those quadratic polynomials in closed form. Theoretical analysis and numerical simulation results show that the computational complexity of the proposed method was significantly reduced compared with standard root-MUSIC, and the direction finding speed was improved while the estimation accuracy remained the same.
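      The "closed form" in the final step is just the quadratic formula applied to each quadratic factor produced by Bairstow's method. A minimal sketch of that final step (the Bairstow factorization itself is omitted):

```python
import cmath

def quadratic_roots(a, b, c):
    """Closed-form roots of a*x^2 + b*x + c = 0, the step applied to each
    quadratic factor after Bairstow's method has decomposed the
    real-coefficient polynomial."""
    disc = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles both cases
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
```

      Because each factor is solved in closed form rather than by iterative eigen-decomposition of a complex companion matrix, the per-polynomial cost drops, which is the source of the speedup the abstract claims.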

    • Application research of an improved optical flow algorithm in seismic image texture analysis

      2019, 51(11):47-54. DOI: 10.11918/j.issn.0367-6234.201904161

      Abstract (1473) HTML (202) PDF 3.99 M (968)

      Abstract:To effectively protect the linear structure of seismic images while eliminating noise, a new method combining an improved optical flow algorithm with texture smoothing filtering was proposed. First, the multi-scale description of the Gaussian pyramid was used to handle large flow velocities and improve accuracy. Second, by setting a threshold on the root mean square of the residual error of the iteration results, the number of iterations was reduced and the processing time shortened. Finally, according to the texture complexity of the seismic profile image and combined with texture attribute analysis, different templates were selected for texture smoothing filtering to improve the signal-to-noise ratio (SNR). Compared with the conventional median filter and advanced algorithms such as the improved Sobel filter and the normalized full gradient method for seismic image boundary detection, the proposed algorithm could effectively preserve the edge information of seismic data and enhance the continuity of seismic events (in-phase axes). In particular, the SNR was increased by 7~10 dB, and the processing time was shortened by 2~3 minutes. Experimental results show that by combining the Gaussian pyramid multi-scale description with the optical flow algorithm and texture smoothing filtering, the integrated improved algorithm preserved the texture structure and energy of the original image well while enhancing the SNR and reducing processing time. The efficiency of seismic data interpretation was thus improved, indicating that the proposed algorithm is a better processing method for seismic image texture analysis.

    • Study on a new type of support structure of segment lining combined with compressible crushed stone and anchor bolt

      2019, 51(11):55-62. DOI: 10.11918/j.issn.0367-6234.201809144

      Abstract (1839) HTML (152) PDF 9.36 M (1568)

      Abstract:A new type of support structure combining compressible crushed stone with high-strength pre-stressed anchor cables is proposed, suitable for very deep inclined shafts constructed by the shield method. Different conditions of lining combined with crushed stone and high-strength pre-stressed anchors were simulated. The effect of the combined support was studied by analyzing the development of the plastic zone in the surrounding rock, the convergence of the surrounding rock, and the internal force of the segmental lining. Moreover, the support mechanism of the combined support was revealed from the movement characteristics of the crushed stone and the displacement vectors of the anchored surrounding rock. The results show that the yielding effect of the crushed stone alone cannot guarantee the safety of the lining, especially when the creep of the surrounding rock is considered. The combined support can not only improve the bearing capacity of the surrounding rock but also effectively absorb its creep deformation. The support mechanism of the combined support consists of two parts: the yielding effect of the crushed stone and the reinforcement of the surrounding rock by the anchor cables. The yielding effect of the crushed stone is attributed to mutual squeezing and movement effects, while the support mechanism of the anchor cables is attributed to their reinforcement of the surrounding rock. The results provide a reference for the design of support structures for deep shield-driven tunnels.

    • Performance improvement of BBR congestion control algorithm in wireless network

      2019, 51(11):63-67. DOI: 10.11918/j.issn.0367-6234.201901020

      Abstract (1944) HTML (2368) PDF 1.44 M (1114)

      Abstract:Since traditional congestion control algorithms cannot fulfill the requirements of current complex networks, Google proposed a new congestion control algorithm named bottleneck bandwidth and round-trip propagation time (BBR). BBR aims to maximize throughput and minimize delay on a bottleneck link with packet loss, but it has some disadvantages. First, when the delay of a wireless network jitters severely, the delivery rate of BBR is very low even without packet loss or congestion; this issue has not been raised in previous studies. Second, BBR is not sensitive enough to bandwidth drops. This paper analyzes the causes of these problems in detail and proposes an optimized BBR, which judges the jitter degree of the network by comparing the mean and standard deviation of the RTT. When the delay jitter was severe, the mean RTT was used instead of the minimum RTT to calculate the congestion window. When the network was unstable, the length of the stationary phase in PROBE_BW was reduced. Experiments in a real network showed that the optimized BBR was almost unaffected by delay fluctuations and could maintain high bandwidth utilization where BBR could hardly work. Besides, it could detect bandwidth degradation and converge faster than BBR when the network was unstable.
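      The jitter test described in the abstract (compare RTT mean and standard deviation, then switch from min-RTT to mean-RTT) can be sketched as below. The 0.25 ratio threshold is an illustrative assumption, not the paper's tuned value.

```python
import statistics

def effective_rtt(rtt_window, jitter_ratio=0.25):
    """Choose the RTT fed into the congestion-window calculation:
    use min RTT when delay is stable, mean RTT when jitter is severe
    (standard deviation large relative to the mean)."""
    mean = statistics.fmean(rtt_window)
    std = statistics.pstdev(rtt_window)
    if std > jitter_ratio * mean:   # severe jitter: min RTT is misleading
        return mean
    return min(rtt_window)
```

      The design rationale: under heavy wireless jitter the minimum RTT is an outlier, so sizing cwnd = BtlBw x minRTT starves the pipe; the mean is a more honest estimate of the path's propagation delay in that regime.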

    • Extraction of the temperature-dependent thermoelectric material parameters of thermoelectric cooler

      2019, 51(11):68-74. DOI: 10.11918/j.issn.0367-6234.201901092

      Abstract (1344) HTML (310) PDF 3.96 M (1018)

      Abstract:This study extracted the temperature-dependent thermoelectric material parameters of thermoelectric coolers (TECs), which are indispensable for calculating TEC performance but are usually kept confidential by manufacturers. Based on test results of a one-stage TEC and the basic thermoelectric formulas, two over-determined systems of equations were derived by two methods of obtaining material parameters. Two groups of material parameters were then extracted by solving these systems and used to calculate, and experimentally verify, the performance of a five-stage TEC made of the same material. Results show that the error of the performance data provided by the manufacturer grew with the number of stages; the cooling temperature error of the five-stage TEC was higher than 20 K, which should be noted when selecting TECs. The cooling temperature error of the five-stage TEC calculated with the extracted parameters varied between 1.6~6.1 K for different cooling capacities. The calculated voltage indicates that the error of the electrical resistance in the calculation model was not negligible. After correcting the voltage calculation with the extracted electrical resistance errors, the relative errors of the voltages in the working current range of the TEC calculated by the two groups of parameters were lower than 4.80% and 7.00%, respectively. The maximum calculated cooling temperature error of the proposed method was about 1/5~1/2 of that of the extreme value method and 1/10~1/4 of that based on manufacturer data. Its accuracy was comparable to that of a finite element calculation using the exact material parameters. The method can thus effectively evaluate the performance of TECs made of the same materials.
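      Solving an over-determined system such as the ones derived in the abstract is a least-squares problem. A generic sketch (the design matrix A built from the paper's thermoelectric formulas and measured operating points is an assumption here; only the solution step is shown):

```python
import numpy as np

def fit_material_parameters(A, b):
    """Least-squares solution x of an over-determined system A x ~= b,
    as used to extract unknown material parameters from more measured
    operating points than unknowns."""
    x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    return x
```

      With more measurement rows than unknowns, lstsq returns the parameter vector minimizing ||Ax - b||, which damps individual measurement errors instead of propagating them, one reason extracted parameters can beat extreme-value formulas.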

    • UAV route planning based on RWPSO and Markov chain

      2019, 51(11):75-81. DOI: 10.11918/j.issn.0367-6234.201812040

      Abstract (1659) HTML (372) PDF 4.27 M (1260)

      Abstract:Particle swarm optimization (PSO) is a population-based global search algorithm characterized by a simple principle and stable, efficient search. It is widely used in route planning, but it suffers from premature convergence to local optima and slow convergence speed. In this paper, a random walk strategy is introduced into the mission weight and survival weight of the UAV. By changing the inertia weight of particles according to certain rules, PSO's defects can be effectively avoided and the UAV's efficiency in finding the optimal path improved. On the other hand, to provide a criterion for evaluating a planned path, a survival state probability model is needed to evaluate UAV flight path points. The route planning model of the random walk particle swarm optimization (RWPSO) algorithm was combined with a Markov chain survival state randomness model, building up a route planning model that estimates the survival probability of path points. Simulation results show that RWPSO based on random walks of the task weight, survival weight, and task-survival weight was more efficient in optimization than PSO and quantum particle swarm optimization (QPSO). A model describing the change of the UAV's survival probability was thus successfully obtained by combining the Markov chain with RWPSO. The framework can be extended to route and mission planning in complex scenes with radiation sources, weapons, or electromagnetic interference.
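      The core mechanism, perturbing the PSO inertia weight by a bounded random walk instead of a fixed decay schedule, can be sketched as follows. The step size and bounds are illustrative assumptions; the paper's exact update rule for the task and survival weights may differ.

```python
import random

def rw_inertia(w, w_min=0.4, w_max=0.9, step=0.05):
    """One random-walk update of the PSO inertia weight: a small uniform
    random step, clipped to [w_min, w_max]."""
    w = w + random.uniform(-step, step)
    return min(w_max, max(w_min, w))
```

      Because the weight wanders rather than monotonically shrinking, the swarm occasionally regains exploratory momentum late in the run, which is the intended escape route from local optima.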

    • Power control strategy of wireless sensor network under cooperative game

      2019, 51(11):82-88. DOI: 10.11918/j.issn.0367-6234.201905173

      Abstract (1368) HTML (210) PDF 4.76 M (1031)

      Abstract:To obtain a higher signal-to-noise ratio, nodes raise their transmit power. Since wireless sensor networks operate under dynamically changing channel conditions and interference levels, this increases inter-node interference, and nodes keep raising their transmit power to offset the negative impact, which gradually degrades the network environment and wastes node energy. To address these problems, this paper proposes a power control strategy for wireless sensor networks based on a cooperative game. So that nodes can dynamically adjust their transmit power according to surrounding environment information, the algorithm introduces the distance between nodes as an interference weight factor to correct the effective interference model and improve the signal-to-noise ratio model. Based on cooperative game theory, the node information transmission rate and its residual energy are integrated, and a utility function under the cooperative game is established. By comprehensively weighing four quantities, namely the normalized information transmission rate, the transmit power variance under different utility weight factors, the signal-to-noise ratio, and the network utility, an appropriate utility weight factor is obtained, and the utility function has a Nash equilibrium solution. After several iterations of the algorithm, the optimal transmit power set of the whole network is obtained. Simulation results show that the optimal transmit power variance obtained by the algorithm is small and the algorithm converges quickly. The network can obtain a higher signal-to-noise ratio while nodes use lower transmit power, the network life cycle can be extended for higher performance, and node utility can be improved.

    • An improved cuckoo search for multimodal optimization problems

      2019, 51(11):89-99. DOI: 10.11918/j.issn.0367-6234.201902003

      Abstract (1686) HTML (1086) PDF 5.29 M (1080)

      Abstract:The cuckoo search algorithm is a simple and efficient meta-heuristic algorithm, but it is easily trapped in local optima when solving complex multimodal optimization problems. To tackle this problem, an improved cuckoo search algorithm based on neural networks was proposed by combining the characteristics of the neural network algorithm and the cuckoo search algorithm. The core idea is to balance the global and local search abilities of cuckoo search with the powerful global search ability of the improved neural network algorithm and a dynamic population strategy, thereby reducing the possibility of falling into local optima. The algorithm first sorts the individuals in the population by fitness value. The better half of the population is then updated by the cuckoo search algorithm, whereas the worse half is optimized by the improved neural network algorithm. Finally, all individuals are grouped into a new population, from which the optimal solution is selected. In the experiments, 24 complex multimodal optimization problems were used to study optimization performance and to compare the proposed algorithm with the neural network algorithm, the cuckoo search algorithm, and other improved cuckoo algorithms. Results show that the proposed algorithm fully combined the advantages of the modified neural network algorithm and the cuckoo search algorithm, and was significantly better than the other algorithms in solution quality, convergence speed, and stability.

    • Speech emotion recognition with embedded attention mechanism and hierarchical context

      2019, 51(11):100-107. DOI: 10.11918/j.issn.0367-6234.201905193

      Abstract (1640) HTML (543) PDF 2.83 M (1150)

      Abstract:Speech emotion recognition remains challenging due to issues such as emotional corpus limitations, the association between emotion and acoustic features, and modeling. Conventional context-based speech emotion recognition systems are limited to the feature layer, so they risk losing label-layer context details and neglect the differences between the two levels. This paper proposes a Bidirectional Long Short-Term Memory (BLSTM) network with an embedded attention mechanism combined with a hierarchical context learning model. The model completes the speech emotion recognition task in three phases. The first phase extracts the feature set from the emotional speech, uses the SVM-RFE feature-ranking algorithm to reduce the feature dimensionality and obtain the optimal feature subset, and assigns attention weights. In the second phase, the weighted feature subset is input into a BLSTM network that learns feature-layer context to obtain an initial emotion prediction. The third phase uses the emotion values to train another independent BLSTM network that learns label-layer context information, based on which the final prediction is made from the output of the second phase. The model embeds the attention mechanism to automatically learn to adjust its attention over the input feature subset, and links label-layer context with feature-layer context to fuse hierarchical context information, improving robustness and the model's ability to model emotional speech. Experimental results on the SEMAINE and RECOLA datasets show that both RMSE and CCC were significantly improved over the baseline model.

    • A Taylor Expansion algorithm for the load distribution homogenization of multi-bolt composite joints

      2019, 51(11):108-115. DOI: 10.11918/j.issn.0367-6234.201811201

      Abstract (1792) HTML (245) PDF 5.32 M (1007)

      Abstract:Most aircraft bolted joints consist of multiple bolts, which may share the load unevenly due to the brittle nature of composites, easily leading to premature failure of the joint. To decrease bolt load inequality, the common method is to apply a clearance fit at the most highly loaded bolt, but it is difficult to determine the clearance value. In this paper, a new method based on first-order Taylor expansion was proposed to compute these values approximately. First, a first-order Taylor expansion was used to obtain the linear relation between bolt load and bolt-hole clearance. Then, under a given external load, taking the bolt loads under equal load sharing as known parameters and the bolt-hole clearances as unknowns, a system of linear equations was solved to approximately calculate the clearances. The method was validated on a five-bolt joint. After optimization, the load ratio of the most loaded bolt decreased from 45.1% to 23.0%, and the maximum tensile and compressive stresses of the joint decreased by about 26% and 39%, respectively. Finally, based on the validated model, the influence of parameters such as width, row spacing, number of bolts, and external load was discussed. The model is highly efficient: to optimize an n-bolt joint, only n+1 finite element models are needed to obtain the desired clearance fit values. The model is of great significance for the optimization design of multi-bolt joints.

    • Algorithm for dealing with motion blur in visual SLAM

      2019, 51(11):116-121. DOI: 10.11918/j.issn.0367-6234.201901208

      Abstract (1994) HTML (1143) PDF 2.93 M (1019)

      Abstract:Motion blur caused by high-speed movement often occurs on low-cost devices and is a main factor affecting the accuracy of Simultaneous Localization and Mapping (SLAM). Approaches to motion blur such as blur kernel estimation and blind deconvolution are not suitable for mobile phones, unmanned aerial vehicles (UAVs), and other platforms with limited processing capacity, which limits the application of SLAM algorithms. In this study, by examining how motion blur is generated and how feature coordinates differ between adjacent images, a correspondence was found between the coordinate differences and the extent of motion blur. The average movement of feature points was used to form the EBL parameter representing the blur degree of a frame, which was then combined with a frame-removal algorithm to continuously remove heavily blurred images. The accuracy of localization and mapping under motion blur could thus be enhanced with only a small amount of extra computation. Experiments proved the validity of the EBL parameter and the improvement in the accuracy of the SLAM system. Results show that the proposed algorithm could clearly reduce camera trajectory error; for datasets with severe blur, the error could be reduced by 20% with an appropriate window size.
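      The EBL idea, mean displacement of matched features between adjacent frames as a blur proxy, is cheap enough to sketch directly. The threshold-based frame removal here is an illustrative simplification of the paper's windowed removal algorithm.

```python
def ebl(prev_pts, curr_pts):
    """Blur indicator: mean Euclidean displacement of matched feature
    points between two adjacent frames."""
    disp = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(prev_pts, curr_pts)]
    return sum(disp) / len(disp)

def drop_blurred(frames, ebl_values, threshold):
    """Frame-removal step: keep only frames whose EBL is below threshold."""
    return [f for f, e in zip(frames, ebl_values) if e < threshold]
```

      The appeal over deconvolution is that the feature matches already exist inside the SLAM front end, so estimating blur adds almost no computation, matching the abstract's claim.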

    • Thickness temperature drop regularity during roller quenching process for ultra-heavy steel plate

      2019, 51(11):122-127. DOI: 10.11918/j.issn.0367-6234.201812098

      Abstract (1804) HTML (137) PDF 2.19 M (983)

      Abstract:To provide theoretical methods and experimental data for research on the quenching cooling rate and cooling uniformity of ultra-heavy steel plates, quenching temperature drop curves of 160 mm, 220 mm, and 300 mm ultra-heavy steel plates were obtained with a roller quenching machine and a multichannel temperature recorder. A three-dimensional inverse heat transfer model and a heat flux calculation model were established by the finite element method and an optimization method, and the distributions of temperature gradient, heat flux, and cooling rate were analyzed. Results indicate that the calculated temperature drop curves agreed well with the measured values, with a deviation of less than 4%. High-intensity cooling allowed the steel plate to form a large temperature gradient. Subsequently reducing the cooling intensity could raise the wall superheat and maintain the through-thickness temperature gradient, which was beneficial to the temperature drop from the core to 1/4H. The coupling of heat flux and temperature gradient influenced the quenching temperature field of the plate. A “platform” and “reheating” appeared in the temperature drop curves of the 220 mm and 300 mm plates as the wall superheat changed. The “reheating” was related to the change of heat flux on the plate surface and the shift of the MHF point (critical heat flux) caused by the change of the temperature gradient in the thickness direction of the plate. When the ratios of the upper to lower plate surface water flow density were 1∶1.25 (0.8 MPa) and 1∶1.4 (0.4 MPa), respectively, symmetrical cooling was realized.

    • GAN image super-resolution reconstruction model with improved residual block and adversarial loss

      2019, 51(11):128-137. DOI: 10.11918/j.issn.0367-6234.201812115

      Abstract (1828) HTML (936) PDF 4.71 M (1143) Comment (0) Favorites

      Abstract:Image super-resolution (SR) reconstruction is an important image processing technology for improving the resolution of images and video in computer vision. Image reconstruction models based on deep learning have been unsatisfactory due to the large number of layers involved, the excessively long training time caused by difficult gradient transmission, and the limited quality of reconstructed images. This paper proposes a generative adversarial network (GAN) image SR reconstruction model with improved residual blocks and adversarial loss. Firstly, in the model structure, residual blocks with the excess batch normalization removed were designed and combined into the generative model, and a deep convolutional network was used as the discriminant model to control the training direction of the reconstructed image and reduce the model's computation. Then, in the loss function, the Earth-Mover distance was adopted to alleviate gradient disappearance, and the L1 distance was used as the measure of similarity between the reconstructed image and the high-resolution image to guide the model weight update and improve the visual effect of reconstruction. Experimental results on the DIV2K, Set5, and Set14 datasets demonstrate that compared with the model before improvement, the training time of the proposed model was reduced by about 14% and the image reconstruction effect was effectively improved. With the loss function combining the Earth-Mover distance and the L1 distance, gradient disappearance was effectively alleviated. Therefore, the proposed model significantly improved the SR reconstruction efficiency and visual effect for low-resolution images compared with the Bicubic, SRCNN, VDSR, and DSRN models.
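A framework-free sketch of the loss design described above, an Earth-Mover (Wasserstein) critic objective plus an L1 content term, might look like this; the weight lam=100 and the function signatures are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def critic_loss(d_real, d_fake):
    """Wasserstein (Earth-Mover) critic objective: the critic maximizes
    mean score on real minus mean score on fake, so its loss is the
    negative of that difference."""
    return float(np.mean(d_fake) - np.mean(d_real))

def generator_loss(d_fake, sr, hr, lam=100.0):
    """Generator: adversarial term (fool the critic) plus a lam-weighted
    L1 distance between the reconstruction and the ground truth."""
    adv = -float(np.mean(d_fake))
    l1 = float(np.mean(np.abs(sr - hr)))
    return adv + lam * l1
```

When the reconstruction equals the ground truth, the L1 term vanishes and only the adversarial term remains.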

    • System modeling method research of ship CAD for audit process

      2019, 51(11):138-143. DOI: 10.11918/j.issn.0367-6234.201801024

      Abstract (1927) HTML (265) PDF 4.45 M (1155) Comment (0) Favorites

      Abstract:To improve the efficiency of ship modeling and provide an original data model for CAE analysis and Structure Design Program (SDP) checking, a ship CAD system for the audit process is designed and developed. By considering the needs of the audit process, both the geometry and the data of the ship model are described, with all ship structures represented by zero-thickness sheet bodies. Besides, a three-laminated panel model including a release model, a master model, and a tool model is established to record the geometric relationship between the panel and its subsidiary structures in the form of panel attributes. The panel and its subsidiary structures are managed by combining a custom structure navigation tree with the assembly tree and the feature tree of the generic 3D modeling software NX. Furthermore, to store and manage the ship's standard data and ensure query efficiency, a standard library system for the whole life cycle is designed based on the lightweight database SQLite. Finally, the ship model is generated quickly by parametric modeling and structural modeling based on custom feature technology. It is verified that the proposed system has strong interactivity and stability, and that modeling efficiency is improved with a smaller amount of data. After data conversion, the CAD model can be used directly by CAE and SDP, thus avoiding repetitive design. The use of database technology achieves data sharing among the CAD, CAE, and SDP modules and solves the problems of data inconsistency and dispersion. The custom structure navigation tree meets the requirements of professional ship designers.

    • Dual network intelligent decision method for fighter autonomous combat maneuver

      2019, 51(11):144-151. DOI: 10.11918/j.issn.0367-6234.201811083

      Abstract (1660) HTML (373) PDF 2.87 M (1083) Comment (0) Favorites

      Abstract:In research on autonomous combat maneuvering of fighters based on Deep Reinforcement Learning (DRL), the fighter's autonomous maneuver to the attack area is a precondition for attacking the target effectively. Because of the large active airspace and the uneven exploration ability in different directions, directly using DRL to acquire a maneuvering strategy faces the problems of a large training interaction space, difficulty in setting the sample distribution in the attack area, and difficulty in the convergence of the training process. To solve these problems, a dual network intelligent decision method based on the deep Q-network (DQN) was proposed. In this method, a conical space was set up in front of the fighter to make full use of the fighter's forward exploratory capability. An angle capture network was established, with DRL used to fit the strategy of adjusting the deviation angle so as to keep the attack area within the conical space, and a distance capture network was established to fit the maneuvering strategy of the fighter toward the attack area based on DRL. Simulation results show that the traditional DRL method, using the fighter's active airspace as the interaction space, cannot effectively solve the decision-making problem of the fighter maneuvering to the attack area, whereas the success rate of the dual network decision method was 83.2% in 1 000 tests of autonomous maneuvering to the attack area. Therefore, the proposed method can effectively solve the decision problem of autonomous maneuvering of fighter aircraft to the attack area.
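The conical-space test at the heart of the angle capture network can be sketched as a simple geometric check of whether a target lies within a forward cone; the 30 degree half-angle and the vector representation are illustrative assumptions:

```python
import numpy as np

def in_forward_cone(fighter_pos, fighter_heading, target_pos, half_angle_deg=30.0):
    """True if the target lies inside the cone opening forward along the
    fighter's heading vector."""
    v = np.asarray(target_pos, float) - np.asarray(fighter_pos, float)
    h = np.asarray(fighter_heading, float)
    # Compare the angle between the line of sight and the heading
    # against the cone half-angle via their cosines.
    cos_angle = v @ h / (np.linalg.norm(v) * np.linalg.norm(h))
    return bool(cos_angle >= np.cos(np.radians(half_angle_deg)))
```

A target almost dead ahead passes the test, while one abeam fails it.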

    • DOA estimation of shallow sea target based on time reversal and compressive sensing

      2019, 51(11):152-159. DOI: 10.11918/j.issn.0367-6234.201810113

      Abstract (1517) HTML (451) PDF 3.76 M (1735) Comment (0) Favorites

      Abstract:Sound propagation in shallow water waveguides is very complex due to the existence of the seafloor boundary and inhomogeneous scatterers, which causes serious interference with signal processing. Aiming at the adverse effect of multipath on DOA estimation in complex shallow water environments, a DOA estimation algorithm based on time reversal-compressive sensing (TR-CS) is proposed in this paper. To address the interference of shallow water multipath with channel sparsity when DOA estimation is performed using compressive sensing (CS), the algorithm introduced time reversal (TR) theory to preprocess the signal, retransmitted the signal into the channel, and established a TR-based DOA estimation model for shallow water targets. The algorithm used the space-time focusing property of TR to correct multipath distortion, and its performance was analyzed under different snapshot numbers and environment complexities. Simulation results show that under low SNR and small snapshot conditions, the introduction of TR significantly suppressed sidelobe clutter, increased the sparsity of the channel and the signal-to-noise ratio of the array received signals, and improved the performance of DOA estimation in complex shallow water environments. The performance improvement was more obvious in environments with more severe multipath effects.
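The space-time focusing property of time reversal that the algorithm exploits can be demonstrated with a toy two-path channel: re-emitting the time-reversed received signal through the same channel yields the channel autocorrelation, which concentrates energy into one dominant peak. The channel taps below are arbitrary illustrative values:

```python
import numpy as np

# Hypothetical two-path shallow-water channel: a direct arrival plus a
# delayed, weaker bottom-bounce echo (illustrative tap values).
h = np.zeros(40)
h[0], h[12] = 1.0, 0.6

pulse = np.zeros(21)
pulse[10] = 1.0                          # unit impulse source
received = np.convolve(pulse, h)         # multipath-distorted arrival
# Time reversal: re-emit the reversed signal through the SAME channel.
# The cascade equals the channel autocorrelation, so energy refocuses
# into a dominant peak of height sum(h**2).
refocused = np.convolve(received[::-1], h)
```

The peak reaches 1 + 0.6² = 1.36 while the largest multipath sidelobe stays at 0.6, illustrating the sidelobe suppression the abstract reports.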

    • Research on benchmark derivation techniques in verification of steady state nuclear reactor core design programs

      2019, 51(11):160-166. DOI: 10.11918/j.issn.0367-6234.201901149

      Abstract (1784) HTML (400) PDF 2.16 M (1588) Comment (0) Favorites

      Abstract:Verification is one of the necessary steps to ensure the quality of nuclear power software, and benchmark calculation is an important means of nuclear power software verification. Traditional methods of obtaining benchmark data mainly include independent experiments, collecting operation data of nuclear power plants, joining international experimental research programs, and purchasing data of international benchmarks, but they suffer from high cost and long cycles. In order to provide more benchmarks for the verification of steady state nuclear reactor core design programs, a benchmark derivation technique based on the metamorphic testing principle is proposed in this paper. First, the technique established a framework for benchmark derivation, whose main idea is to directly calculate the related input and output parameters of the original benchmark according to the parameters and relations of the metamorphic relationship, so as to obtain a derived benchmark (new test cases). Furthermore, exploiting the fact that benchmark problems supply the data of the program under test in the form of a fixed-format input card, an automatic derivation algorithm for benchmark problems was designed and a benchmark automatic derivation system was developed to improve the efficiency of derivation. Finally, two-dimensional and three-dimensional benchmarks of a steady state neutron diffusion program were demonstrated as examples. Results show that the technique can automatically generate two-dimensional and three-dimensional derived benchmarks singly or in batches, which not only generates data accurately and efficiently, but also costs less than traditional methods.
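The derivation idea, computing a new benchmark's inputs and expected outputs directly from an existing one via a metamorphic relation, can be sketched with a toy linear relation (scaling the source scales the flux); the dictionary fields are illustrative and do not reflect the paper's fixed-format input card:

```python
def derive_benchmark(benchmark, scale):
    """Metamorphic derivation for a linear solver: if flux(k * source) ==
    k * flux(source), then a scaled copy of an existing benchmark (both
    its inputs AND its expected outputs) is itself a valid benchmark,
    obtained without re-running any experiment or reference solver."""
    return {
        "source": [s * scale for s in benchmark["source"]],
        "expected_flux": [f * scale for f in benchmark["expected_flux"]],
    }
```

One original benchmark thus yields a family of derived test cases, one per scale factor.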

    • Three-phase video compressive sensing reconstruction via dynamic multi-pattern matching

      2019, 51(11):167-173. DOI: 10.11918/j.issn.0367-6234.201902009

      Abstract (1590) HTML (267) PDF 3.00 M (1337) Comment (0) Favorites

      Abstract:Compressive sensing theory indicates that high-dimensional signals can be reconstructed from far fewer measurements than required by the Nyquist-Shannon sampling theorem, and compressive sensing has great potential in video signal sensing. Existing reconstruction algorithms utilize multihypothesis prediction to derive the residual model. A large number of studies adopt methods based on the least mean square error to select multiple hypothesis matching patches and reconstruct the video, but the maximization of structural similarity is not considered in these algorithms, leaving much room for improvement in the overall quality of image reconstruction. Therefore, a three-phase video compressive sensing reconstruction algorithm based on dynamic multi-pattern matching is proposed in this paper, in which the first phase reconstructs each frame independently, the second phase dynamically selects hypothesis patches from the reference frames and reconstructs the frames, and the third phase obtains the final reconstruction result with the best structural similarity. Experimental results demonstrate that compared with state-of-the-art algorithms, the proposed algorithm could obtain better prediction accuracy and reconstruction quality for video compressive sensing.
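Selecting hypothesis patches by structural similarity rather than least mean square error, as the proposed algorithm does in its matching phase, might be sketched as follows; the global (single-window) SSIM used here is a simplification of the usual windowed SSIM:

```python
import numpy as np

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global SSIM between two equal-size patches; the default constants
    are the standard (0.01*255)**2 and (0.03*255)**2 for 8-bit range."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def best_hypothesis(target, hypotheses):
    """Pick the hypothesis patch that maximizes SSIM with the target,
    instead of minimizing mean square error."""
    return int(np.argmax([ssim(target, h) for h in hypotheses]))
```

A patch identical to the target scores SSIM 1 and is always selected over a dissimilar one.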

    • Bilevel coevolutionary clonal selection algorithm and its application

      2019, 51(11):174-182. DOI: 10.11918/j.issn.0367-6234.201812176

      Abstract (1800) HTML (266) PDF 2.79 M (1635) Comment (0) Favorites

      Abstract:In order to solve the problems of slow convergence speed and low convergence precision inherent in the clonal selection algorithm, a bilevel coevolutionary clonal selection algorithm is proposed. The algorithm used a different evolutionary scheme at each level to search for the optimum. Through information sharing, co-evolution between the levels was realized, forming an evolutionary model of intra-level competition and inter-level cooperation. By constructing a hybrid co-evolution mechanism of multiple evolutionary strategies, the algorithm could realize the complementary advantages and information increment of different evolutionary strategies in the optimization process, thus effectively balancing global exploration and local exploitation while better avoiding premature convergence. The feasibility and effectiveness of the proposed algorithm were verified on 10 benchmark functions. Experimental results show that the proposed algorithm had obvious advantages, including stronger global search ability, better stability, faster convergence speed, and higher convergence accuracy, and these advantages became more prominent as the testing dimension increased. The Lorenz chaotic system was taken as an example to test the algorithm in parameter estimation. Simulation results confirmed that the proposed algorithm can be used for high-precision estimation of system parameters and is an effective parameter estimation method for chaotic systems.
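A single iteration of the underlying clonal selection operator, cloning the best antibodies and hypermutating the clones with rank-dependent step sizes, can be sketched on a sphere test function; the population sizes, the mutation schedule, and the omission of the bilevel co-evolution layer are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def clonal_step(pop, fitness, n_best=5, clones_per=4, mut0=1.0):
    """One clonal selection iteration (minimization): clone the n_best
    antibodies and hypermutate each clone, with larger mutations for
    lower-ranked parents; an improving clone replaces its parent."""
    order = np.argsort(fitness)[:n_best]
    new_pop = pop.copy()
    for rank, i in enumerate(order):
        scale = mut0 * (rank + 1) / n_best   # worse rank -> larger step
        for _ in range(clones_per):
            clone = pop[i] + rng.normal(0.0, scale, pop.shape[1])
            if sphere(clone) < sphere(new_pop[i]):
                new_pop[i] = clone
    return new_pop
```

Because a clone replaces its parent only when it improves, the best fitness in the population can never get worse from one iteration to the next.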

    • 3D-LCRN based video abnormal behavior recognition

      2019, 51(11):183-193. DOI: 10.11918/j.issn.0367-6234.201812005

      Abstract (1510) HTML (545) PDF 18.34 M (1058) Comment (0) Favorites

      Abstract:Automatic anomaly recognition in surveillance videos is a crucial issue for social security. A 3D-LCRN visual time series model was proposed for abnormal behavior recognition in video surveillance. Firstly, a structural similarity background modeling method was proposed to obtain the corrected optical flow and the corrected motion history image, which are insensitive to illumination variation and background motion, countering background interference in complex scenes. Secondly, a new sample expansion method was proposed to solve the imbalance between normal and abnormal training samples, enriching the spatial and temporal information of samples in both dimensionality and quantity. In dimensionality, the method stacked the corrected optical flow and the corrected motion history image to generate the corrected optical flow motion history image (COFMHI). In quantity, the COFMHI was randomly cropped and clustered into center visual words by K-means. Finally, the COFMHI was used as the 3D-CNN input to extract local short-time spatial-temporal features of behavior. In order to suppress irrelevant, redundant, and confusing video clips, an LSTM weighted by a learnable contribution factor was used to further extract global long-time spatial-temporal features for abnormal behavior recognition. Through 3D-LCRN, abundant spatial-temporal features were extracted from local to global and from short-time to long-time levels. Experimental results show that the proposed method achieves excellent abnormal behavior recognition performance in complex scenes with illumination variation and background motion in comparison with state-of-the-art methods.
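The motion history image that the COFMHI builds on can be computed with the classic recursive update: set changed pixels to a maximum value tau and decay the rest; the paper's "corrected" variant additionally applies structural similarity background modeling, which is omitted here:

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=255, delta=32, thresh=30):
    """Classic motion history image update: pixels whose intensity changed
    by more than thresh are set to tau, and all other pixels decay by
    delta toward zero, so recent motion stays bright."""
    moving = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    decayed = np.maximum(mhi.astype(int) - delta, 0)
    return np.where(moving, tau, decayed).astype(np.uint8)
```

A pixel that moves lights up at 255 and then fades by 32 per static frame, encoding how recently each region moved.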

    • Effects of joint roughness and joint matching degree coefficient on stress wave propagation

      2019, 51(11):194-200. DOI: 10.11918/j.issn.0367-6234.201808050

      Abstract (1568) HTML (333) PDF 3.94 M (1183) Comment (0) Favorites

      Abstract:To explore the influence of joint morphology and joint matching degree on the propagation of stress waves, the effects of the joint matching coefficient (JMC) and its geometric distribution on the dynamic mechanical properties of joints and the propagation of stress waves were analyzed. Cement mortar was adopted to prepare cylindrical specimens simulating rock samples. The distribution pattern of the joints was quantified by dividing the end surface of a cylinder into different numbers of fan-shaped concave surfaces, and two specimens with the same joint morphology and distribution were obtained at different angles. The split Hopkinson pressure bar (SHPB) was employed for impact tests, and the results showed that when the joint morphology of the specimens was the same, the nonlinearity of the stress-strain curves in the loading section increased with the increase of the joint contact area, revealing that the smaller the contact area, the more obvious the mechanical response of the joint at the initial stage of loading. Similarly, for composite specimens with the same joint morphology, the stress wave transmission coefficient and the joint equivalent stiffness increased linearly with the increase of the joint contact area. When the contact area of the joint specimens was the same, the more dispersed the geometric distribution of the joint surface (the greater the number of fan-shaped concaves), the greater the transmission coefficient and the equivalent stiffness became. The larger the contact area of the joint, the more obvious the effect of the joint geometric distribution.
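The observed rise of the transmission coefficient with joint equivalent stiffness is consistent with the displacement discontinuity model for a normally incident wave, which can be sketched as follows; the density and wave speed are illustrative values, and this closed form is the standard model rather than the paper's measured result:

```python
import numpy as np

def transmission_coeff(k, freq, rho=2500.0, c=4000.0):
    """|T| for a normally incident wave crossing a dry joint of specific
    stiffness k (Pa/m) under the displacement discontinuity model:
    |T| = 1 / sqrt(1 + (omega * z / (2k))**2), with z = rho * c the
    seismic impedance of the intact material."""
    z = rho * c
    omega = 2.0 * np.pi * freq
    return 1.0 / np.sqrt(1.0 + (omega * z / (2.0 * k)) ** 2)
```

The coefficient grows monotonically with stiffness and approaches 1 for a perfectly welded (infinitely stiff) joint, matching the trend reported above.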
