Comparing ChatGPT With Experts’ Responses to Scenarios that Assess Psychological Literacy
Article Title | Comparing ChatGPT With Experts’ Responses to Scenarios that Assess Psychological Literacy |
---|---|
ERA Journal ID | 6614 |
Article Category | Article |
Authors | Machin, M. Anthony, Machin, Tanya M. and Gasson, Natalie |
Journal Title | Psychology Learning and Teaching |
Journal Citation | 23 (2), pp. 265-280 |
Number of Pages | 16 |
Year | 2024 |
Publisher | SAGE Publications Ltd |
Place of Publication | United Kingdom |
ISSN | 1475-7257 |
Digital Object Identifier (DOI) | https://doi.org/10.1177/14757257241241592 |
Web Address (URL) | https://journals.sagepub.com/doi/abs/10.1177/14757257241241592 |
Abstract | Progress in understanding students’ development of psychological literacy is critical. However, generative AI represents an emerging threat to higher education that may dramatically affect student learning and how this learning transfers to practice. This research investigated whether ChatGPT responded in ways that demonstrated psychological literacy and whether it matched the responses of subject matter experts on a measure of psychological literacy. We tasked ChatGPT with providing responses to 13 psychology research methods scenarios and with rating each of the five response options previously developed for each scenario by the research team. ChatGPT responded in ways that would typically be regarded as displaying a high level of psychological literacy. The ratings previously provided by two groups of Subject Matter Experts (SMEs) were then compared with the ratings provided by ChatGPT. The Pearson correlations were very high (r = .73 and .80, respectively), as were the Spearman rank correlations (rho = .81 and .82, respectively). The Kendall’s tau values were also quite high (tau = .67 and .68, respectively). We conclude that ChatGPT may generate responses that match SME psychological literacy in research methods, which could also generalise across multiple domains of psychological literacy. |
Keywords | Psychological literacy, ChatGPT, Situational Judgement Test |
Article Publishing Charge (APC) Funding | Other |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 399999. Other education not elsewhere classified; 529999. Other psychology not elsewhere classified |
Byline Affiliations | University of Southern Queensland; Curtin University |
https://research.usq.edu.au/item/z601y/comparing-chatgpt-with-experts-responses-to-scenarios-that-assess-psychological-literacy
Published Version | machin-et-al-2024-comparing-chatgpt-with-experts-responses-to-scenarios-that-assess-psychological-literacy.pdf |
License | CC BY-NC 4.0 |
File access level | Anyone |
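
As context for the statistics reported in the abstract, the sketch below illustrates how agreement between SME and ChatGPT ratings of the response options could be computed with Pearson’s r, Spearman’s rho, and Kendall’s tau. It is a minimal illustration using `scipy.stats`; the rating values and variable names are hypothetical assumptions, not the published dataset.

```python
# Minimal sketch of the rating comparison described in the abstract.
# The ratings below are hypothetical placeholders; the study compared
# SME and ChatGPT ratings across 13 scenarios x 5 response options.
from scipy.stats import pearsonr, spearmanr, kendalltau

sme_ratings = [4.2, 2.1, 3.8, 1.5, 4.9, 3.3, 2.7, 4.0, 1.9, 3.5]
chatgpt_ratings = [4.0, 2.4, 3.5, 1.8, 4.7, 3.6, 2.5, 4.2, 2.2, 3.1]

r, _ = pearsonr(sme_ratings, chatgpt_ratings)     # linear association
rho, _ = spearmanr(sme_ratings, chatgpt_ratings)  # rank-order association
tau, _ = kendalltau(sme_ratings, chatgpt_ratings) # rank concordance

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```

Reporting all three coefficients, as the article does, guards against any single measure overstating agreement: Pearson’s r is sensitive to the linear fit of the ratings, while Spearman’s rho and Kendall’s tau depend only on their rank ordering.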