Abstract: Currently, the most popular activation function for deep convolutional neural networks is the rectified linear unit (ReLU). The ReLU activation function outputs zero over the negative quadrant, causing some neurons to die, and passes positive inputs through unchanged, inducing a bias shift. Following the theory that zero-mean activations improve learning, the softplus linear unit (SLU) is introduced as an adaptive activation function that addresses both problems. First, negative inputs are processed with the softplus function, pushing the mean output of the activation function toward zero and reducing the bias shift. Second, the parameters of the positive component are fixed to control vanishing gradients. Third, to maintain continuity and differentiability at zero, the parameters of the negative part are updated according to the positive part. Several experiments are conducted on the MNIST dataset for unsupervised learning with deep auto-encoder networks, as well as on the CIFAR-10 dataset for supervised learning with deep convolutional neural networks. The experiments show faster convergence and better image-classification performance for SLU-based networks compared with networks using rectified activation functions.
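A minimal sketch of an activation with these properties is given below. The exact parameterization of SLU in the paper may differ; here the positive branch is a fixed linear map with slope `alpha` (an assumption), and the scale `beta` and offset `c` of the negative softplus branch are derived from `alpha` so that the function and its first derivative are continuous at zero, mirroring the constraint described in the abstract.

```python
import numpy as np

def slu(x, alpha=1.0):
    """Illustrative SLU-style activation (not the paper's exact form).

    Positive branch: alpha * x (slope fixed, mitigating vanishing gradients).
    Negative branch: beta * softplus(x) + c, with beta and c tied to alpha
    so that f and f' match at zero:
        f'(0-) = beta / 2 = alpha      ->  beta = 2 * alpha
        f(0-)  = beta * log(2) + c = 0 ->  c    = -2 * alpha * log(2)
    The negative branch saturates at c < 0, pulling the mean output
    toward zero and reducing the bias shift.
    """
    x = np.asarray(x, dtype=float)
    beta = 2.0 * alpha
    c = -beta * np.log(2.0)
    negative = beta * np.log1p(np.exp(x)) + c  # softplus branch for x < 0
    return np.where(x >= 0.0, alpha * x, negative)
```

Because `beta` and `c` are functions of `alpha`, updating the positive-branch parameter automatically keeps the join at zero smooth, which is the role the abstract assigns to updating the negative part "according to the positive part".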