Interpretable neural networks based on network structural constraints

When developing machine learning algorithms for real-life problems, prediction accuracy and model interpretability are the two most important goals. Deep neural networks generally achieve high accuracy, but they are difficult to explain because of their model complexity. To enhance the interpretability of neural network models, we consider the following structural constraints on the network: (a) sparse additive subnetworks; (b) orthogonal projection; (c) smooth function approximation. Based on these constraints, a new interpretable neural network, SOSxNN, is proposed.
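To make the constrained architecture concrete, the following is a minimal sketch (not the authors' code) of an additive model with orthogonal projections: each projected coordinate z_j = u_j^T x is fed into its own small subnetwork, and the subnetwork outputs are summed. All names, sizes, and the one-hidden-layer subnetwork form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, h = 5, 3, 8          # input dim, number of subnetworks, hidden units (hypothetical)

# Orthonormal projection directions U (d x k), so U^T U = I_k.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))

# One tiny subnetwork per projection: z -> w2^T tanh(w1 * z + b1) + b2
W1 = rng.standard_normal((k, h))
b1 = np.zeros((k, h))
W2 = rng.standard_normal((k, h))
b2 = np.zeros(k)

def forward(x):
    """Additive prediction: sum_j g_j(u_j^T x)."""
    z = x @ U                                   # (n, k) projected features
    hidden = np.tanh(z[:, :, None] * W1 + b1)   # (n, k, h) per-subnetwork hidden layer
    sub_out = (hidden * W2).sum(axis=2) + b2    # (n, k) subnetwork outputs g_j
    return sub_out.sum(axis=1)                  # (n,) additive combination

x = rng.standard_normal((4, d))
y = forward(x)
print(y.shape)  # (4,)
```

Each g_j is a univariate function of one projected feature, which is what makes the fitted model easy to visualize and interpret; sparsity would additionally zero out unneeded subnetworks.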

The model parameters of SOSxNN are estimated by a batch gradient descent algorithm combined with the Cayley transform, and several important hyperparameters are tuned by grid search. In numerical experiments, we compare SOSxNN with several classical machine learning methods, including Lasso, SVM, random forest, and the multi-layer perceptron. The experimental results show that SOSxNN improves model interpretability while maintaining high prediction accuracy, so it can serve as a surrogate model that effectively approximates complex functional relationships. Finally, we demonstrate an application of the SOSxNN model using a set of real data from LendingClub, an American peer-to-peer lending company.
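The role of the Cayley transform here is to keep the projection matrix orthogonal during gradient-based training: for any skew-symmetric A (A^T = -A), the matrix W = (I - A)(I + A)^{-1} is orthogonal, so gradient steps can be taken on the unconstrained A. A minimal numerical check of this identity (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

B = rng.standard_normal((n, n))
A = B - B.T                          # skew-symmetric: A^T = -A
I = np.eye(n)

# Cayley transform; I + A is always invertible because the eigenvalues
# of a real skew-symmetric matrix are purely imaginary.
W = (I - A) @ np.linalg.inv(I + A)

print(np.allclose(W.T @ W, I))  # True: W is orthogonal
```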

