TY - JOUR
T1 - IFME
T2 - 10th International Symposium on Parallel Architectures, Algorithms and Programming, PAAP 2019
AU - Zha, Benbo
AU - Shen, Hong
N1 - Publisher Copyright:
© Springer Nature Singapore Pte Ltd. 2020.
PY - 2020
Y1 - 2020
N2 - Due to their high precision and the huge demand for predictions, decision systems based on machine learning models have been widely adopted in all walks of life. They are usually constructed as black boxes based on sophisticated, opaque learning models. The lack of a human-understandable explanation of the inner logic of these models and of the reasons behind their predictions causes a serious trust issue. Interpretable Machine Learning methods can relieve this problem by providing explanations for the models or their predictions. In this work, we focus on the model explanation problem, which studies how to explain a black-box prediction model globally through human-understandable explanations. We propose the Influence Function based Model Explanation (IFME) method, which provides an interpretable model explanation based on key training points selected via influence functions. First, our method introduces a novel local prediction interpreter, which also utilizes the key training points for local prediction. Then it globally identifies the key training points of the learning model via influence functions. Finally, we provide an influence-function-based, model-agnostic explanation of the model. We also show the efficiency of our method through both theoretical analysis and simulated experiments.
AB - Due to their high precision and the huge demand for predictions, decision systems based on machine learning models have been widely adopted in all walks of life. They are usually constructed as black boxes based on sophisticated, opaque learning models. The lack of a human-understandable explanation of the inner logic of these models and of the reasons behind their predictions causes a serious trust issue. Interpretable Machine Learning methods can relieve this problem by providing explanations for the models or their predictions. In this work, we focus on the model explanation problem, which studies how to explain a black-box prediction model globally through human-understandable explanations. We propose the Influence Function based Model Explanation (IFME) method, which provides an interpretable model explanation based on key training points selected via influence functions. First, our method introduces a novel local prediction interpreter, which also utilizes the key training points for local prediction. Then it globally identifies the key training points of the learning model via influence functions. Finally, we provide an influence-function-based, model-agnostic explanation of the model. We also show the efficiency of our method through both theoretical analysis and simulated experiments.
KW - Explaining the black box
KW - Influence function
KW - Interpretable Machine Learning
KW - Model explanation
UR - http://www.scopus.com/inward/record.url?scp=85111436150&partnerID=8YFLogxK
U2 - 10.1007/978-981-15-2767-8_27
DO - 10.1007/978-981-15-2767-8_27
M3 - Conference article
AN - SCOPUS:85111436150
SN - 1865-0929
VL - 1163
SP - 299
EP - 310
JO - Communications in Computer and Information Science
JF - Communications in Computer and Information Science
Y2 - 12 December 2019 through 14 December 2019
ER -