Document Type
Article
Publication Date
2-1-2021
Abstract
The nonlinearity of the activation functions used in deep learning models is crucial to the success of predictive models. Several simple nonlinear functions, including the Rectified Linear Unit (ReLU) and Leaky-ReLU (L-ReLU), are commonly used in neural networks to impose nonlinearity. In practice, these functions remarkably enhance model accuracy. However, there is limited insight into how the nonlinearity of neural networks affects their performance. Here, we investigate the performance of neural network models as a function of nonlinearity, using ReLU and L-ReLU activation functions in the context of different model architectures and data domains. We use entropy as a measure of randomness to quantify the effects of nonlinearity, across different architecture shapes, on the performance of neural networks. We show that the ReLU nonlinearity is the better choice of activation function mostly when the network has a sufficient number of parameters. However, we find that image classification models with transfer learning seem to perform well with L-ReLU in the fully connected layers. We show that the entropy of hidden layer outputs in neural networks can fairly represent the fluctuations in information loss as a function of nonlinearity. Furthermore, we investigate the entropy profile of shallow neural networks as a way of representing their hidden layer dynamics.
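The abstract refers to the ReLU and L-ReLU activation functions and to the entropy of hidden layer outputs as a measure of randomness. The following minimal sketch, which is not the authors' code, illustrates these ideas: it applies both activations to the same pre-activations and estimates the Shannon entropy of the resulting outputs. The histogram-based entropy estimator, the negative slope `alpha=0.01`, and the toy layer sizes are illustrative assumptions, not details taken from the article.

```python
# Sketch only: compares ReLU vs. Leaky-ReLU and estimates the entropy of a
# hidden layer's outputs. Estimator and parameters are assumptions.
import numpy as np

def relu(x):
    """Rectified Linear Unit: zero for negative inputs."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: small slope alpha for negative inputs (alpha is assumed)."""
    return np.where(x > 0, x, alpha * x)

def activation_entropy(a, bins=50):
    """Shannon entropy (in nats) of a layer's activation values,
    estimated from a normalized histogram."""
    counts, _ = np.histogram(a, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return -np.sum(p * np.log(p))

# Toy example: pre-activations of one hidden layer for a batch of inputs.
rng = np.random.default_rng(0)
z = rng.normal(size=(256, 128))        # batch of 256, 128 hidden units

print("entropy with ReLU   :", activation_entropy(relu(z)))
print("entropy with L-ReLU :", activation_entropy(leaky_relu(z)))
```

Because ReLU maps all negative pre-activations to exactly zero while L-ReLU preserves a scaled copy of them, the two activations typically yield different output entropies, which is the kind of contrast the paper quantifies across architectures.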
Recommended Citation
Kulathunga, Nalinda; Ranasinghe, Nishath Rajiv; Vrinceanu, Daniel; Kinsman, Zackary; Huang, Lei; and Wang, Yunjiao, "Effects of nonlinearity and network architecture on the performance of supervised neural networks" (2021). Faculty Publications. 32.
https://digitalscholarship.tsu.edu/facpubs/32