Talk: Explanatory graphs for CNNs & interpretable CNNs

Published: 2017-09-27

Talk title: Explanatory graphs for CNNs & interpretable CNNs

 

Time: Friday, September 29, 2017, 10:00-11:00 am

Venue: SEIEE Building 3-412

Host: Prof. Xinbing Wang

 


Abstract: 


Although convolutional neural networks (CNNs) have achieved superior performance on different visual tasks, the knowledge representation inside a CNN is still considered a black box. In this talk, I mainly introduce two of my studies that enhance the interpretability of the knowledge encoded in the conv-layers of a CNN, i.e., 1) learning a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside a pre-trained CNN, and 2) learning an interpretable CNN end-to-end, whose filters in high conv-layers encode semantically meaningful patterns.

 

1. For the explanatory graph: Considering that each filter in a conv-layer of a pre-trained CNN usually represents a mixture of object parts, I propose a simple yet efficient method to automatically disentangle the different part patterns from each filter and construct an explanatory graph. Each graph node represents a part pattern, and graph edges encode co-activation and spatial relationships between patterns. The explanatory graph is learned for a pre-trained CNN in an unsupervised manner, i.e., without any need to annotate object parts.
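To make the idea of the graph concrete, here is a toy sketch of the data structure one might build from filter activations: nodes for disentangled part patterns and weighted edges for co-activation/spatial relationships. The names (`PartPattern`, `peak_responses`, the distance threshold) are illustrative assumptions, not the learning algorithm presented in the talk.

```python
# Toy sketch of an explanatory-graph data structure (illustrative only).
from dataclasses import dataclass, field
from itertools import combinations
import numpy as np


@dataclass
class PartPattern:
    filter_id: int    # conv filter this pattern was disentangled from
    pattern_id: int   # index of the pattern within the graph
    mu: np.ndarray    # mean (x, y) location of the pattern's activation peaks


@dataclass
class ExplanatoryGraph:
    nodes: list = field(default_factory=list)
    # edge weights keyed by (node_i, node_j): co-activation strength
    edges: dict = field(default_factory=dict)

    def add_coactivation(self, i: int, j: int, strength: float) -> None:
        key = (min(i, j), max(i, j))
        self.edges[key] = self.edges.get(key, 0.0) + strength


def build_toy_graph(peak_responses: dict) -> ExplanatoryGraph:
    """peak_responses maps filter_id -> array of (x, y) activation peaks
    collected over a set of images; here each filter's peak cluster is treated
    as one part pattern, a crude stand-in for unsupervised disentanglement."""
    graph = ExplanatoryGraph()
    for filter_id, peaks in peak_responses.items():
        graph.nodes.append(
            PartPattern(filter_id, len(graph.nodes), peaks.mean(axis=0))
        )
    # connect patterns whose mean locations are close, as a toy spatial relation
    for a, b in combinations(range(len(graph.nodes)), 2):
        dist = np.linalg.norm(graph.nodes[a].mu - graph.nodes[b].mu)
        if dist < 3.0:
            graph.add_coactivation(a, b, 1.0 / (1.0 + dist))
    return graph
```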

 

2. For interpretable CNNs: I design an interpretable CNN, in which each filter in a high conv-layer represents an object part or a discriminative texture. The interpretable CNN can be designed with different losses for different tasks. Without being given additional annotations of object parts or textures for supervision, the interpretable CNN automatically assigns clear part/texture semantics to each filter during the learning process.
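As a rough illustration of how a filter can be pushed toward part-level semantics, the sketch below (in PyTorch) penalizes activation mass that falls far from each filter's strongest response, so a filter tends to fire on one compact region per image. This is only an assumed stand-in regularizer, not necessarily the loss used in the interpretable CNN described in the talk.

```python
# Minimal sketch of a "part concentration" regularizer (illustrative assumption).
import torch
import torch.nn.functional as F


def part_concentration_loss(feat: torch.Tensor) -> torch.Tensor:
    """feat: conv feature maps of shape (batch, channels, H, W)."""
    b, c, h, w = feat.shape
    act = F.relu(feat).reshape(b, c, h * w)
    peak = act.argmax(dim=-1)          # strongest response of each filter, per image
    py, px = peak // w, peak % w       # peak row / column
    ys = torch.arange(h, device=feat.device).view(1, 1, h, 1)
    xs = torch.arange(w, device=feat.device).view(1, 1, 1, w)
    # squared distance of every spatial position from the peak position
    dist2 = (ys - py.view(b, c, 1, 1)) ** 2 + (xs - px.view(b, c, 1, 1)) ** 2
    # activation far from the peak is penalized, encouraging each filter to
    # respond to a single compact region (ideally one object part) per image
    return (F.relu(feat) * dist2.float()).mean()


# Usage sketch: add the regularizer to the ordinary task loss, e.g.
# total_loss = task_loss + 0.1 * part_concentration_loss(high_conv_features)
```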

 

Speaker bio:

 

Quanshi Zhang received the B.S. degree in machine intelligence from Peking University, China, in 2009, and the M.S. and Ph.D. degrees from the Center for Spatial Information Science at the University of Tokyo, Japan, in 2011 and 2014, respectively. He is currently a postdoctoral researcher at the University of California, Los Angeles, under the supervision of Prof. Song-Chun Zhu. His research interests range across computer vision and machine learning. He now leads a group on explainable AI, covering topics such as explainable neural networks, explanation of pre-trained neural networks, and unsupervised/weakly-supervised learning.

 

Homepage: https://sites.google.com/site/quanshizhang/
