Towards the Quantitative Interpretability Analysis of Citizens Happiness Prediction
Paper/Presentation Title | Towards the Quantitative Interpretability Analysis of Citizens Happiness Prediction |
---|---|
Presentation Type | Paper |
Authors | Li, Lin (Author), Wu, Xiaohua (Author), Kong, Miao (Author), Zhou, Dong (Author) and Tao, Xiaohui (Author) |
Editors | De Raedt, Luc |
Journal or Proceedings Title | Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI-ECAI 2022) |
ERA Conference ID | 43623 |
Number of Pages | 7 |
Year | 2022 |
Place of Publication | Austria |
ISBN | 9781956792003 |
Web Address (URL) of Paper | https://www.ijcai.org/proceedings/2022/ |
Conference/Event | 31st International Joint Conference on Artificial Intelligence (IJCAI-ECAI 2022): Special Track on AI for Good |
Event Details | International Joint Conference on Artificial Intelligence (IJCAI), Rank A |
Event Details | 31st International Joint Conference on Artificial Intelligence (IJCAI-ECAI 2022): Special Track on AI for Good; Event Date: 23 to 29 Jul 2022; Event Location: Vienna, Austria |
Abstract | Identifying the factors that most strongly affect citizens' happiness informs economic and political policy-making in most countries. Exploiting the efficiency of regression models, previous efforts by sociology scholars have analyzed the effect of happiness factors with high interpretability. However, constrained by their research concerns, these studies focus on a subset of factors modeled as linear functions. Deep learning has recently shown promising prediction accuracy, but it poses challenges for interpretability. To this end, we introduce the Shapley value, which rests on a solid theoretical foundation for attributing factor contributions, to work with deep learning models while taking interactions between multiple factors into account. The proposed solution computes the Shapley value of a factor, i.e., its average contribution to the prediction across different coalitions, based on coalitional game theory. To evaluate the interpretability quality of our solution, experiments are conducted on a Chinese General Social Survey (CGSS) questionnaire dataset. Through systematic review, the experimental Shapley values are highly consistent with academic studies in social science, which implies that our solution for citizens' happiness prediction has both theoretical and practical implications. |
Keywords | Humans and AI: Computational Sustainability and Human Well-Being; AI Ethics, Trust, Fairness: Explainability and Interpretability; AI Ethics, Trust, Fairness: Societal Impact of AI; Machine Learning: Explainable/Interpretable Machine Learning |
ANZSRC Field of Research 2020 | 460899. Human-centred computing not elsewhere classified; 460208. Natural language processing; 461003. Human information interaction and retrieval |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Wuhan University of Technology, China; Hunan University of Science and Technology, China; University of Southern Queensland |
Institution of Origin | University of Southern Queensland |
Funding source | Australian Research Council (ARC) Grant ID DP220101360 |
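The abstract describes computing a factor's Shapley value as its average marginal contribution to the prediction across coalitions of factors. The paper's model and factor set are not reproduced here, so the following is a minimal sketch of that coalitional computation over a hypothetical toy predictor; the factor names (`income`, `health`, `social_trust`), the baseline, and the stand-in model are all illustrative assumptions, not the paper's method.

```python
from itertools import combinations
from math import factorial

# Assumed baseline: a factor absent from a coalition is fixed at this value.
BASELINE = {"income": 0.0, "health": 0.0, "social_trust": 0.0}

def predict(x):
    # Hypothetical stand-in for the paper's deep model, with one
    # interaction term so that contributions are not purely additive.
    return (0.5 * x["income"] + 0.3 * x["health"]
            + 0.2 * x["social_trust"]
            + 0.1 * x["income"] * x["health"])

def coalition_value(instance, coalition):
    # Factors in the coalition take the instance's values;
    # the rest are held at the baseline ("absent" from the game).
    x = dict(BASELINE)
    for f in coalition:
        x[f] = instance[f]
    return predict(x)

def shapley_values(instance):
    # Exact Shapley value: weighted average of the factor's marginal
    # contribution over all coalitions of the remaining factors.
    features = list(instance)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (coalition_value(instance, S + (f,))
                                   - coalition_value(instance, S))
        phi[f] = total
    return phi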
https://research.usq.edu.au/item/q7q1x/towards-the-quantitative-interpretability-analysis-of-citizens-happiness-prediction