Why do we need XAI?

With the aid of interpretability, people can better manage and appropriately trust AI systems, which is especially important for decision making in high-stakes domains.

Where might XAI be needed?

Health Informatics

Interpretability is in particular demand in health-related areas, including medical diagnosis, electronic health records (EHR), and the healthcare industry.

Social Informatics

Greater interpretability could also benefit people's social lives, for example by helping to detect and filter the flood of spam and fake news in social media.

Our Current Work

Work 1:
Interpretable Health Text Classification
(Refining...)

Our interpretable health text classifier helps process medical text, for example in EHRs. Each input sentence is classified into one of three categories: Medication, Symptom, or Background. In addition, the dominant features and discriminative patterns behind each classification are provided as interpretations, and visualizations are applied for user-friendly interaction.
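As a rough illustration of this interpretation style (a minimal sketch, not our actual system), the Python snippet below trains a linear bag-of-words classifier on a tiny hypothetical training set and reads its per-class weights off as the dominant features for each category.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical miniature training set; a real system would train on EHR text.
sentences = [
    "patient was prescribed 10 mg lisinopril daily",   # Medication
    "reports persistent headache and mild nausea",     # Symptom
    "family history of hypertension and diabetes",     # Background
]
labels = ["Medication", "Symptom", "Background"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)
clf = LogisticRegression().fit(X, labels)

# Interpretation: the most positively weighted words for each class.
terms = vectorizer.get_feature_names_out()
for cls, weights in zip(clf.classes_, clf.coef_):
    top = sorted(zip(weights, terms), reverse=True)[:3]
    print(cls, "<-", [t for _, t in top])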

Work 2:
Interpretable Image Classification
(Refining...)

Our interpretable image classifier provides two functionalities. First, it can interpret a deep classification model globally by approximating it with shallow models, such as a linear model or a gradient boosted tree. Second, it can interpret individual predictions of the deep classifier, showing the most highly weighted super-pixels as the corresponding interpretations.
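To make the second functionality concrete, here is a minimal LIME-style sketch (an illustration, not necessarily the exact algorithm we use): super-pixels are randomly switched off, the deep classifier is re-queried, and a shallow local surrogate ranks super-pixels by weight. The image, the predict_proba function, and the parameter values are assumed placeholders.

import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def explain_superpixels(image, predict_proba, target_class, n_samples=500):
    """Rank super-pixels of `image` by their weight in a local
    linear surrogate of `predict_proba` (a LIME-style sketch)."""
    segments = slic(image, n_segments=50)                # super-pixel map
    n_seg = segments.max() + 1
    masks = np.random.randint(0, 2, (n_samples, n_seg))  # random on/off patterns
    probs = []
    for m in masks:
        perturbed = image.copy()
        perturbed[m[segments] == 0] = 0                  # black out "off" super-pixels
        probs.append(predict_proba(perturbed[None])[0, target_class])
    surrogate = Ridge().fit(masks, probs)                # shallow local surrogate
    return np.argsort(surrogate.coef_)[::-1]             # super-pixel ids, high to low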

Work 3:
Interpretable Fake News Detection
(Refining...)

In this work, we aim to detect fake news on popular news websites as well as social media, and to provide several forms of interpretation, such as Attribute Significance, Word/Phrase Attribution, Linguistic Features, and Supporting Examples. Human studies are further conducted to validate the effectiveness of our system in real-world cases. More improvements will be posted soon.
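As one concrete example of the Word/Phrase Attribution form listed above, the sketch below scores each word by occlusion, i.e., by how much the detector's fake probability drops when that word is removed. The fake_prob function is an assumed stand-in for any trained detector, not our actual model.

def word_attributions(text, fake_prob):
    """Score each word by how much removing it lowers fake_prob(text)."""
    words = text.split()
    base = fake_prob(text)
    scores = []
    for i, w in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append((w, base - fake_prob(ablated)))  # positive = pushes toward "fake"
    return sorted(scores, key=lambda s: s[1], reverse=True)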


Try our online XAI fake news demo (current version) here!


If the link above doesn't work, watch our demo video on YouTube (Interpretable Health Text/Image Classification and Interpretable Fake News Detection)!

This research is conducted by the DATA Lab and DIVE Lab at Texas A&M University, as well as the INDIE Lab at the University of Florida.

Dr. Xia (Ben) Hu is the director of the DATA Lab.
Dr. Shuiwang Ji is the director of the DIVE Lab.
Dr. Eric Ragan is the director of the INDIE Lab.
Ninghao Liu is a Ph.D. student in the DATA Lab.
Sina Mohseni is a Ph.D. student in the INDIE Lab.
Fan Yang is a Ph.D. student in the DATA Lab.
Mengnan Du is a Ph.D. student in the DATA Lab.
Hao Yuan is a Ph.D. student in the DIVE Lab.
Shiva Kumar Pentyala is an M.S. student in the DATA Lab.

Contact Us: hu@cse.tamu.edu | 979-845-8873 | TAMU, 400 Bizzell St, College Station, TX 77843-3112


This project is funded by the Defense Advanced Research Projects Agency (DARPA).