Abstract
It is an interesting topic to interpret artificial neural networks (ANNs) by considering various approaches. This paper explores the relationship between the input and output units of the simplest ANN, a single-layer perceptron for the binary classification problem, from a probabilistic point of view. If the feature variables of a dataset follow independent normal distributions and the outputs are activated by the sigmoid function or the smooth ReLU function, we show that the probability density function (pdf) of the output variable belongs to an exponential family. Furthermore, by introducing an intermediate variable, the pdf of the output variable can be written as a linear combination of three normal distributions with the same spread but different centers. Based on these results, the probability of the predicted class label can be written in terms of the standard normal cumulative distribution function (cdf). The originality of this paper lies in theoretical results that provide a new description of the relationship between input variables and output variables, enabling ANNs to be understood from a new perspective. Extensive experiments on one synthetic dataset and ten real-world benchmark datasets validate the reasonableness of these results.
Tingting Pan, Witold Pedrycz, Jiahui Cui, Jie Yang, Wei Wu (2022). Interpretability of Neural Networks with Probability Density Functions. Advanced Theory and Simulations, 5(3). DOI: https://doi.org/10.1002/adts.202100459
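The abstract's central claim, that with independent normal features and a sigmoid-activated single-layer perceptron the class probability can be expressed through a standard normal cdf, can be sanity-checked numerically. The sketch below is not the paper's derivation; it uses the well-known probit-style approximation E[sigmoid(a)] ≈ Φ(μ_a / sqrt(1 + π·σ_a²/8)) for a normally distributed pre-activation a, with hypothetical weights, bias, and feature parameters chosen only for illustration.

```python
import numpy as np
from scipy.special import expit   # numerically stable sigmoid
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical single-layer perceptron (weights, bias, and feature
# distribution parameters below are assumptions, not from the paper).
d = 5
w = rng.normal(size=d)                       # perceptron weights
b = 0.3                                      # bias
mu_x = rng.normal(size=d)                    # means of the independent normal features
sigma_x = np.abs(rng.normal(size=d)) + 0.5   # standard deviations of the features

# The pre-activation a = w.x + b is itself normal with these moments.
mu_a = w @ mu_x + b
var_a = np.sum((w * sigma_x) ** 2)

# Monte Carlo estimate of the class probability E[sigmoid(a)].
x = rng.normal(mu_x, sigma_x, size=(200_000, d))
p_mc = np.mean(expit(x @ w + b))

# Probit-style approximation: the same probability expressed through the
# standard normal cdf (Phi), echoing the abstract's claim.
p_cdf = norm.cdf(mu_a / np.sqrt(1.0 + np.pi * var_a / 8.0))

print(f"Monte Carlo estimate of P(y=1): {p_mc:.4f}")
print(f"Standard normal cdf expression: {p_cdf:.4f}")
```

Under the Gaussian-input assumption the two printed values agree closely, which is the behavior the abstract's cdf formulation predicts; the paper itself derives the exact relationship rather than this approximation.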
Type: Article
Year: 2022
Authors: 5
Datasets: 0
Total Files: 0
Language: en
DOI: https://doi.org/10.1002/adts.202100459