TL;DR
This is a working paper for my PhD social media research course. The study explores how self-monitoring shapes creators' decisions to disclose AI use, working through perceived authenticity threats and anticipated audience disapproval. Do creators treat AI transparency as a deliberate impression management tactic that reflects their personality, or do they react to anticipated social feedback? The study will also consider how AI disclosure affects a creator's brand, reputation, and income.
Abstract
As AI tools transform content creation workflows, creators navigate tensions between operational efficiency and audience expectations of authenticity. This study examines how self-monitoring, a trait capturing individuals' responsiveness to social cues, shapes creators' willingness to disclose AI assistance in content distribution. Building on self-monitoring theory and impression management literature, the research tests a sequential mediation model in which self-monitoring heightens perceived authenticity threat, which in turn increases anticipated audience disapproval and decreases disclosure likelihood. A cross-sectional survey of 300-500 independent digital creators who use AI tools at least monthly will employ validated Likert-scale measures for each construct. Exploratory factor analysis will establish construct validity, while Hayes' PROCESS macro (Model 6) will examine sequential mediation through bootstrapped path analysis. Expected findings indicate that high self-monitors perceive greater reputational risk from AI disclosure, leading to reduced transparency. The study conceptualizes AI disclosure as a strategically calibrated decision and positions transparency as an impression management tactic influenced by personality traits and anticipated social feedback, rather than a procedural requirement. These insights extend communication theory by incorporating individual differences and platform dynamics into frameworks of digital creative labor and strategic self-presentation.
Keywords: AI disclosure, self-monitoring, perceived authenticity, audience disapproval, impression management, digital labor, creator economy, creator intention, platform governance, digital content distribution
1. Introduction
1.1 Background
Independent creators operate within platform ecosystems that simultaneously demand algorithmic optimization and authentic self-presentation. Creators increasingly incorporate generative AI tools across ideation, production, and distribution phases while carefully managing audience perceptions of these technologies (Walsh et al., 2024). Despite AI tools' creative benefits, transparency about their use threatens to compromise creators' perceived originality and trustworthiness, triggering authenticity concerns that shape disclosure strategies (Duffy & Hund, 2015; Rae, 2024).
This authenticity threat intensifies through documented audience skepticism toward AI-generated content. Research demonstrates that viewers interpret AI involvement as signaling reduced effort or compromised human creativity, independent of actual content quality (Rae, 2024). These negative perceptions create anticipated disapproval that influences creators' strategic self-presentation choices. Creators thus navigate complex impression management decisions that balance technological capabilities with maintaining humanized brand identities (Goffman, 1959; Marwick & boyd, 2011). Disclosure choices transcend technical acknowledgments to become calculated responses to perceived reputational risks and anticipated audience reactions.
Self-monitoring theory illuminates these strategic calculations by explaining individual differences in social responsiveness. High self-monitors adapt their communication to align with situational expectations and anticipated feedback, while low self-monitors maintain consistent messaging regardless of context (Snyder, 1974; Gangestad & Snyder, 2000). Within digital environments where algorithmic visibility intersects with authenticity imperatives, self-monitoring shapes how creators perceive authenticity threats from AI use, anticipate audience disapproval, and ultimately decide whether to disclose their technological practices (Abidin, 2016).
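Stated formally, this chain follows the standard serial-mediation (PROCESS Model 6) regression form. The notation below is a conventional rendering of the abstract's analysis plan, not a finalized specification, with X = self-monitoring, M1 = perceived authenticity threat, M2 = anticipated audience disapproval, and Y = disclosure likelihood:

$$
\begin{aligned}
M_1 &= i_1 + a_1 X + e_1 \\
M_2 &= i_2 + a_2 X + d_{21} M_1 + e_2 \\
Y   &= i_3 + c' X + b_1 M_1 + b_2 M_2 + e_3
\end{aligned}
$$

The serial indirect effect of interest is the product $a_1 d_{21} b_2$; the expectation is that the paths through the mediators are positive while the effect of anticipated disapproval on disclosure ($b_2$) is negative, yielding a negative indirect effect on disclosure likelihood.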
1.2 Definition of Key Concepts
This research proposal is structured around three core concepts: AI tools, independent creators, and self-monitoring. First, for the purposes of this study, AI tools refer exclusively to large language models (LLMs) such as OpenAI's GPT, Anthropic's Claude, and DeepSeek's R1. These systems can support or replace human effort across content creation tasks, including caption composition, visual content generation, publication scheduling, and distribution optimization (Kaplan & Haenlein, 2019; Walsh et al., 2024). The study categorizes creators as AI users if they use these technologies at least monthly during content ideation, production, or distribution.
Regarding the second concept, independent creators are autonomous content producers who operate outside established media institutions. They hold direct authority over creative outputs, brand identity, and audience relationships while depending on platform algorithms for discoverability and income streams (Nieborg & Poell, 2018; Duffy & Meisner, 2023). These creators confront amplified reputational stakes and must continuously manage perceptions of their authenticity in order to sustain audience trust.
The third concept, self-monitoring, is a psychological construct that captures behavioral responsiveness to situational social signals. The study will examine whether high self-monitors match their public personas to anticipated audience preferences. In contrast, based on prior research, this study assumes that low self-monitors exhibit consistent self-expression regardless of context (Snyder, 1974; Gangestad & Snyder, 2000). This personality variable anchors the proposed model by clarifying how individual traits can shape disclosure strategies in socially monitored environments.
1.3 Problem Statement
With the rapid rise of AI, one problem regarding its creative use stands out: audiences remain skeptical of AI tools' benefits, yet creators continue to adopt them at an accelerating pace. Independent creators fold AI tools into their content workflows even though audiences may judge those creators' effort, qualifications, and the satisfaction their content provides more negatively as a result. This negative perception emerges especially when the use of AI becomes clearly known, even if perceptions of content quality remain unchanged (Rae, 2024). In those cases, audiences question the genuineness of AI-generated content. For example, some studies show that viewers perceive AI news anchors as "fake" and emotionally flat: unable to provide the cultural sensitivity and human connection needed for news delivery (Ndlovu, 2024).
This gap between AI's benefits and audience perception triggers a series of psychological reactions that affect disclosure. Independent creators perceive AI use as a threat to their authenticity, which leads them to expect audience disapproval (Duffy & Meisner, 2023). These concerns have led creators to withhold AI involvement from the public and have pushed them toward a strategy of silence over transparency, especially when they anticipate negative reactions. These decisions reflect how creators weigh social risk as an important factor in their calculations, even when they consider aligning with platform compliance (Hogan, 2010; Litt et al., 2020).
Current research on AI adoption lacks models that explain how personality traits influence disclosure through authenticity and social evaluation pathways. Existing frameworks do not account for why certain creators perceive greater threats, and thus anticipate stronger disapproval, when considering AI transparency. This study addresses that gap by examining how self-monitoring shapes disclosure decisions through its effects on perceived authenticity threat and anticipated audience reactions.
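As a preview of the analysis plan named in the abstract, the sketch below shows how the serial indirect effect (the PROCESS Model 6 pathway) could be estimated with plain OLS regressions and a percentile bootstrap. The column names are placeholders for the planned survey composites, not final measures, and the code is a minimal illustration rather than the study's actual analysis script.

```python
# Minimal sketch of the planned serial mediation test (analogous to PROCESS
# Model 6), assuming survey composites in a pandas DataFrame with placeholder
# columns: self_monitoring (X), authenticity_threat (M1),
# audience_disapproval (M2), and disclosure (Y).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def serial_indirect(df: pd.DataFrame) -> float:
    """Estimate a1 * d21 * b2 for the X -> M1 -> M2 -> Y pathway."""
    m1 = smf.ols("authenticity_threat ~ self_monitoring", data=df).fit()
    m2 = smf.ols("audience_disapproval ~ self_monitoring + authenticity_threat",
                 data=df).fit()
    y = smf.ols("disclosure ~ self_monitoring + authenticity_threat"
                " + audience_disapproval", data=df).fit()
    return (m1.params["self_monitoring"]         # a1: X -> M1
            * m2.params["authenticity_threat"]   # d21: M1 -> M2
            * y.params["audience_disapproval"])  # b2: M2 -> Y

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, seed: int = 1) -> np.ndarray:
    """Percentile bootstrap 95% CI for the serial indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(df)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        resampled = df.iloc[rng.integers(0, n, size=n)]  # resample rows with replacement
        estimates[i] = serial_indirect(resampled)
    return np.percentile(estimates, [2.5, 97.5])
```

If the bootstrapped interval excludes zero, the result would be consistent with the hypothesized sequence from self-monitoring through authenticity threat and anticipated disapproval to reduced disclosure.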
Literature Review Section coming soon.
Subscribe to stay informed on the future of this research. I'll be publishing the literature review, research methodologies and conclusion soon.
References
Abidin, C. (2016). Visibility labour: Engaging with influencers’ fashion brands and #OOTD advertorial campaigns on Instagram. Media International Australia, 161(1), 86–100. https://doi.org/10.1177/1329878X16665177
Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Guilford Press.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Routledge. https://doi.org/10.4324/9780203774441
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Duffy, B. E., & Hund, E. (2015). “Having it all” on social media: Entrepreneurial femininity and self-branding among fashion bloggers. Social Media + Society, 1(2). https://doi.org/10.1177/2056305115604337
Duffy, B. E., & Meisner, C. (2023). Platform governance at the margins: Social media creators’ experiences with algorithmic (in)visibility. Media, Culture & Society, 45(2), 285–304. https://doi.org/10.1177/01634437221111923
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Fritz, M. S., & MacKinnon, D. P. (2007). Required sample size to detect the mediated effect. Psychological Science, 18(3), 233–239. https://doi.org/10.1111/j.1467-9280.2007.01882.x
Gangestad, S. W., & Snyder, M. (2000). Self-monitoring: Appraisal and reappraisal. Psychological Bulletin, 126(4), 530–555. https://doi.org/10.1037/0033-2909.126.4.530
Goffman, E. (1959). The presentation of self in everyday life. Anchor Books.
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Press.
Hogan, B. (2010). The presentation of self in the age of social media: Distinguishing performances and exhibitions online. Bulletin of Science, Technology & Society, 30(6), 377–386. https://doi.org/10.1177/0270467610385893
Kaplan, A. M., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004
Litt, E., Zhao, S., Kraut, R., & Burke, M. (2020). What are meaningful social interactions in today’s media landscape? A cross-cultural survey. Social Media + Society, 6(3), 1–17. https://doi.org/10.1177/2056305120942888
Marwick, A. E., & boyd, d. (2011). To see and be seen: Celebrity practice on Twitter. Convergence, 17(2), 139–158. https://doi.org/10.1177/1354856510394539
Ndlovu, K. (2024). Audience perceptions of AI-driven news presenters: A case of 'Alice'. Journal of Broadcasting & Electronic Media, 68(2), 123–138. https://doi.org/10.1177/01634437241270982
Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: Theorizing the contingent cultural commodity. New Media & Society, 20(11), 4275–4292. https://doi.org/10.1177/1461444818769694
Rae, I. (2024). The effects of perceived AI use on content perceptions. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), Article 207. https://doi.org/10.1145/3613904.3642076
Snyder, M. (1974). Self-monitoring of expressive behavior. Journal of Personality and Social Psychology, 30(4), 526–537. https://doi.org/10.1037/h0037039
Treem, J. W., & Leonardi, P. M. (2012). Social media use in organizations: Exploring the affordances of visibility, editability, persistence, and association. Communication Yearbook, 36, 143–189. https://doi.org/10.1080/23808985.2013.11679130
Walsh, D., Kliamenakis, A., Laroche, M., & Jabado, S. (2024). Authenticity in TikTok: How content creator popularity and brand size influence consumer engagement with sponsored user-generated content. Psychology & Marketing, 41(3), 2645–2656. https://doi.org/10.1002/mar.22075
Zhang, Y., & Gosline, R. (2023). Human favouritism, not AI aversion: People's perceptions (and bias) toward generative AI, human experts, and human–GAI collaboration in persuasive content generation. Judgment and Decision Making, 18, 1–16. https://doi.org/10.1017/jdm.2023.37