The horrifying cartoon-like image above, of a near-toddler as a military fighter, looks obviously fake. With AI, however, it could be made to look real. It could also be made into a convincing video. With the right trusted news sources behind the story, it could be made believable. The story could be sold. As war propaganda, it might be effective at preventing deaths, even at bringing a war to an end. Some might then consider it an ethical use of AI propaganda, where the ends justify the deceptive means.
This blog sides with the aliens who oppose deception[7], but we must acknowledge that there may be ethical dilemmas we don't know about, such as in wars, where deception is the least harmful option. That, however, seems to be exactly where the slippery slope starts, because tangled webs follow deception. Even so, it often feels as though AI deception is already all around us.
The key thing to know is that AI-generated propaganda can be sophisticated and powerful. Like the Internet, it is a tool that can be used for good. When you suspect it is not being used for good, AI-generated content can often be detected and debunked through several methods, including:
1. Fact-checking: Verifying the accuracy of the claims made in the content and checking the sources of information remains the most direct way to detect and debunk AI-generated propaganda[2] (see the API sketch after this list).
2. Reverse image search: Reverse image search can help identify the origin of images and videos used in propaganda, showing whether the content is authentic or has been manipulated[2] (a perceptual-hash sketch follows this list).
3. AI detection tools: AI detection tools use machine learning to analyze content and flag patterns that are characteristic of AI generation[2] (see the detector sketch below).
4. Media literacy: Teaching people to identify and evaluate information sources critically helps them recognize and avoid AI-generated propaganda[1][3].
5. Regulation: Governments and social media platforms can implement policies and guidelines that promote transparency and accountability in the use of AI technologies[5].
6. Proof of Humanity: Tools and systems that verify human sources can root out and stop the spread of AI misinformation. To keep such a system from being compromised, say by a human selling verification for profit, it would need to be automated and decentralized using a technology like blockchain[6] (see the signature-verification sketch below).
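To make the fact-checking step concrete, here is a minimal sketch that queries Google's Fact Check Tools API (the claims:search endpoint) for published fact-checks matching a suspicious claim. It assumes you have an API key, and the response field names follow the API's documented format; treat it as a starting point rather than a finished tool.

```python
# Minimal fact-check lookup against Google's Fact Check Tools API.
# Assumes a valid API key; response fields follow the documented format.
import requests

API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(claim_text: str, api_key: str) -> list[dict]:
    """Return published fact-checks that match the claim text."""
    response = requests.get(
        API_URL,
        params={"query": claim_text, "key": api_key, "languageCode": "en"},
        timeout=10,
    )
    response.raise_for_status()
    results = []
    for claim in response.json().get("claims", []):
        for review in claim.get("claimReview", []):
            results.append({
                "claim": claim.get("text"),
                "publisher": review.get("publisher", {}).get("name"),
                "rating": review.get("textualRating"),
                "url": review.get("url"),
            })
    return results

# Example: check a suspicious claim seen in a viral post.
# for hit in search_fact_checks("child soldier turned the tide of the battle", "YOUR_API_KEY"):
#     print(hit["publisher"], "-", hit["rating"], "-", hit["url"])
```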
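Reverse image search itself runs inside search engines, but the core idea can be illustrated locally with perceptual hashing using the Pillow and imagehash libraries: near-identical images produce near-identical hashes, so a manipulated copy of a known photo stands out. The file names below are hypothetical.

```python
# Perceptual-hash comparison of a suspect image against a known original.
# Not a full reverse image search, but the same underlying idea.
from PIL import Image
import imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # imagehash defines "-" as Hamming distance

# A distance near 0 means the suspect image is essentially the original;
# a small distance suggests a crop, recolor, or other manipulation of it.
# print(hash_distance("viral_photo.jpg", "original_press_photo.jpg"))
```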
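For AI detection tools, the sketch below uses the Hugging Face transformers library with one publicly available detector model (the model name is my example, not an endorsement; any similar classifier would do). Detectors like this produce false positives and false negatives, so treat the score as one signal among many.

```python
# Machine-generated-text detection with a Hugging Face text classifier.
# The model name is an example; output labels depend on the model chosen.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

suspect_text = (
    "Eyewitnesses confirm that the brave young fighter single-handedly "
    "turned the tide of the battle, officials said."
)

for result in detector(suspect_text):
    # This particular model reports "Real" vs "Fake" with a confidence score.
    print(result["label"], round(result["score"], 3))
```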
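Finally, a simplified illustration of the proof-of-humanity idea: content is signed with a key that a decentralized registry (for example, a blockchain identity system) has attested belongs to a verified human, and a reader's client checks the signature. The registry itself is out of scope here; the names and example data are invented for illustration.

```python
# Verifying that content was signed by an attested human's key (Ed25519,
# via the cryptography library). The decentralized registry that maps keys
# to verified humans is assumed to exist elsewhere.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

def verify_human_attestation(article: bytes, signature: bytes,
                             public_key: Ed25519PublicKey) -> bool:
    """Return True if the article was signed by the attested human's key."""
    try:
        public_key.verify(signature, article)
        return True
    except InvalidSignature:
        return False

# Demo with a locally generated key standing in for an on-chain identity.
author_key = Ed25519PrivateKey.generate()
article = b"Report filed from the front line."
signature = author_key.sign(article)

print(verify_human_attestation(article, signature, author_key.public_key()))        # True
print(verify_human_attestation(article + b"!", signature, author_key.public_key())) # False (tampered)
```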
As governments, industries, and companies around the world turn to AI as a tool of influence, the methods above can help detect and debunk AI-generated propaganda and promote responsible use of AI technologies.
Citations
[1] https://www.foreignaffairs.com/united-states/coming-age-ai-powered-propaganda
[2] https://www.cnet.com/news/misinformation/ai-misinformation-how-it-works-and-ways-to-spot-it/
[3] https://www.govtech.com/artificial-intelligence/how-generative-ai-is-boosting-propaganda-disinformation
[4] https://www.forbes.com/sites/forbescommunicationscouncil/2019/09/12/how-ai-can-create-and-detect-fake-news/
[5] https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/
[6] https://newsi8.com/how-a-blockchain-captcha-can-save-humanity/
[7] https://www.nzherald.co.nz/world/crop-circles-theyre-real-and-contain-hidden-messages-scientist-says/F4JKH7HQS6LF4ZWE4NJQLOAKJU/