Comparing Emotional Valence from Human Quantitative Ratings and Qualitative Narrative Data on Using Artificial Intelligence to Reduce Caregiving Disparity
Authors
Sunmoo Yoon, Robert Crupi, Frederick Sun, Dante Tipiani, Melissa Patterson, Tess Pottinger, Milea Kim, Nicole Davis
We compared emotional valence scores as determined by machine versus human ratings from a survey, conducted from April to May 2024, of African American family caregivers of persons with Alzheimer’s disease and related dementias (ADRD) on their perceived attitudes toward the use of artificial intelligence (AI) (N=627). Participants answered open-ended questions about the risks, benefits, and possible solutions for ten AI use cases and then rated each use case. We then applied three machine learning algorithms to detect emotional valence in the text data and compared their mean scores to the human ratings. The mean emotional valence scores derived from the text via natural language processing (NLP) were negative regardless of algorithm (AFINN: -1.61 ± 2.76, Bing: -1.40 ± 1.52, and Syuzhet: -0.67 ± 1.14), while the mean human rating was positive (2.30 ± 1.48, p=0.0001). Our findings have implications for the practice of survey design using self-rated instruments and open-ended questions in the NLP era.
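To make the comparison concrete, the following is a minimal Python sketch of the kind of analysis the abstract describes: scoring open-ended responses with a sentiment lexicon and comparing the resulting machine-derived valence to participants' self-ratings. The tiny lexicon, the example responses, and the choice of a paired t-test are illustrative assumptions only, not the study's actual instruments, data, or statistical procedure.

```python
from scipy import stats

# Toy AFINN-style lexicon (word -> integer valence). A real analysis would
# load the full AFINN, Bing, or Syuzhet word lists instead of this stub.
LEXICON = {"helpful": 2, "useful": 2, "worried": -3, "risky": -2, "afraid": -3}

def lexicon_valence(text: str) -> float:
    """Sum lexicon scores over the whitespace-separated tokens of one response."""
    return float(sum(LEXICON.get(token, 0) for token in text.lower().split()))

# Illustrative paired data: one open-ended answer and one self-rated score
# per participant. These are made-up examples, not study data.
responses = [
    "AI could be helpful but I am worried about privacy",
    "useful for reminders although it still feels risky",
    "I am afraid it will replace human contact",
]
human_ratings = [3.0, 4.0, 2.0]

machine_scores = [lexicon_valence(r) for r in responses]

# Compare machine-derived valence with self-rated valence; a paired t-test
# is used here purely for illustration.
t_stat, p_value = stats.ttest_rel(machine_scores, human_ratings)
print(f"machine mean = {sum(machine_scores) / len(machine_scores):.2f}")
print(f"human mean   = {sum(human_ratings) / len(human_ratings):.2f}")
print(f"paired t-test p-value = {p_value:.4f}")
```

Even in this toy example the lexicon scores come out negative while the self-ratings are positive, illustrating the divergence the abstract reports: respondents may voice risks and concerns in free text while still rating AI use cases favorably on a closed-ended scale.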