A study of the use of deepfake technology in advertising has found that public acceptance of synthetic media generated by artificial intelligence (AI) is closely tied to a person's familiarity with technology and to how such content is framed. The research, published in the International Journal of Artificial Intelligence Governance and Human Rights, raises questions for regulators and advertisers alike regarding transparency and trust.
Deepfakes are images, videos or audio recordings created or altered using AI to make someone appear to say or do things they never actually did, or to fabricate events entirely. To deepfake a person's speech, the technology typically uses neural networks, such as autoencoders, to alter facial features and to map expressions, voice and mouth movements onto words that may themselves have been generated by an AI trained on the person's voice. The technology is advancing rapidly and now outstrips conventional CGI, audio and image editing tools.
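To make the autoencoder idea concrete, here is a minimal sketch of the classic face-swap layout: a single shared encoder trained alongside one decoder per identity, so that a face from identity A can be encoded and then decoded through identity B's decoder. This is an illustrative toy in NumPy, not the paper's method; the random vectors stand in for face images, and a linear encoder/decoder replaces the deep networks used in real systems.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, latent = 8, 3  # toy "image" size and bottleneck size

# Hypothetical stand-ins for face images of two identities, A and B.
faces_a = rng.normal(loc=1.0, scale=0.2, size=(64, dim))
faces_b = rng.normal(loc=-1.0, scale=0.2, size=(64, dim))

# Shared encoder E, one decoder per identity (Da, Db) -- the
# arrangement commonly used in deepfake face-swapping.
E = rng.normal(scale=0.1, size=(latent, dim))
Da = rng.normal(scale=0.1, size=(dim, latent))
Db = rng.normal(scale=0.1, size=(dim, latent))

def step(X, E, D, lr=0.01):
    """One gradient step on the mean squared reconstruction error."""
    Z = X @ E.T              # encode each row into the latent space
    Xh = Z @ D.T             # decode back to "image" space
    err = Xh - X
    n = len(X)
    gD = 2 * err.T @ Z / n   # gradient w.r.t. the decoder
    gE = 2 * (err @ D).T @ X / n  # gradient w.r.t. the shared encoder
    return E - lr * gE, D - lr * gD, float(np.mean(err ** 2))

# Train both decoders against the same encoder.
for epoch in range(500):
    E, Da, loss_a = step(faces_a, E, Da)
    E, Db, loss_b = step(faces_b, E, Db)

# The "swap": encode an A face, then decode it with B's decoder.
swapped = (faces_a[:1] @ E.T) @ Db.T
```

Because the encoder is shared, it learns identity-independent structure (expression, pose in real systems), while each decoder learns to render one specific identity, which is what makes the swap possible.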
In the age of scrollable social media and split-second soundbites, near-perfect deepfakes have the potential to distort reality and alter public opinion in ways that old-school propaganda and smear campaigns never could.
The research highlights both commercial potential and ethical risks. In advertising, synthetic media could enable personalised campaigns, virtual brand ambassadors, and faster content production. But researchers warn that the same capabilities challenge assumptions that video and audio content reflect reality. In fast-moving online environments, such material can be widely shared before its authenticity is questioned, increasing the risk of deception and reputational harm.
The survey results discussed in the paper suggest that younger respondents and those with greater technical savvy were more open to deepfake advertising, although most still expressed ethical concerns. Men were generally more receptive than women, but concerns over manipulation and consent were seen across demographics.
One key finding was the effect of language. Participants responded more positively to the term “artificial media” than “deepfake”, suggesting that terminology can shape perceived legitimacy and ethical acceptability even when the underlying technology is identical.
Verma, S., Mourya, P. and Rastogi, P. (2026) ‘Navigating ethical dilemmas: the role of deepfake technology in modern advertising campaigns’, Int. J. Artificial Intelligence Governance and Human Rights, Vol. 1, No. 1, pp.92–108.