Psychology, 1930-2024
Permanent URI for this collection: https://theses-dissertations.princeton.edu/handle/88435/dsp01cz30ps722
Browsing Psychology, 1930-2024 by Author "Coman, Alin I."
The Effects of Hope and Fear Framing on ‘Social Activist’- Aligned Behavior
(2025-05-05) Elliott, Jenna C.; Coman, Alin I.
Much of the previous research on social activism has focused on large-scale collective action and the traits of individual actors. Yet emotions, including hope and fear, are understood to serve as important motivating forces for behavior. In this investigation, I expand upon our understanding of hope and fear motivation to ask how the emotional qualities of the language used to present a social issue might affect an individual's desire to support that cause. Participants were exposed to hope-inspiring language, fear-inducing language, or a neutral control, and changes in perceived message strength as well as several behavioral measures (including hypothetical action, time allocation, and token donation tasks) were assessed as indicators of social activist behavior. No statistically significant differences were found between treatment groups to indicate that any particular emotional frame (or the absence of emotional framing) was more effective in increasing perceived issue importance or solution-oriented behavior. Given that participants were able to detect the relevant emotional frame in a manipulation check, I hypothesize that this null result reflects the short-term exposure and stimulus format, not necessarily a failure of emotional framing to influence persuasion or solution-oriented behavior as a whole.
Theory of Mind and the Persuasive Power of AI: Evaluating the Role of AI Chatbots in Reducing Polarization on Topics of US Political Discourse
(2025-04-21) Mulshine, Colin B.; Coman, Alin I.
In recent years, the climate for political discourse in the United States has become increasingly heated, with the 2024 presidential election serving as an apparent inflection point. Concurrently, developments in machine learning have enabled large language models (LLMs) to become progressively more human-like, allowing them to take on the role of a social companion. Previous research has suggested that AI chatbots can be successful persuaders in political contexts because of their wealth of factual knowledge and their objective nature. The present study aims to 1) further evaluate AI chatbots as effective agents in reducing polarized attitudes on political topics, 2) compare their effectiveness to that of other humans, and 3) explore an ideal political identity for the chatbot. College students from Princeton University (N = 42) participated in an 8-round debate with an AI chatbot (powered by GPT-4o) on two of their strongest political viewpoints; they were told they were chatting with either the AI or a neighboring human participant through an online message room. Chatbots were prompted to personify either extreme or moderate opposition to the participant's position. Results from the debates revealed that the chatbot was successful in depolarizing attitudes across all conditions, though differences between conditions were not statistically significant. As a follow-up, this paper also includes a textual analysis of the dialogues produced, building on prior research suggesting that the linguistic styles of humans and AI tend to converge over the course of their conversations.