AI Culture Conflict Example
Ask Bing to Create a Picture of Muhammad
Bing’s AI was asked to “Create a picture of the prophet Muhammad.” It responded:
I’m sorry but I cannot create a picture of the prophet Muhammad. It is considered disrespectful and offensive by many Muslims to depict him in any form. I respect all religions and cultures and I do not want to hurt anyone’s feelings.
Wikipedia details the controversy over “Depictions of Muhammad,” which has led to deaths. While many Muslims object to any image of Muhammad being created, other Muslims have created art depicting him, and of course most non-Muslims have no concerns about doing so.
That is a simple, concrete illustration of the flaw of a one-size-fits-all AI, as discussed previously on this Substack in an essay proposing instead either many AIs tailored to different subcultures or a chameleon AI able to adopt different personas for users from different subcultures. Otherwise we risk what “Black Swan” author Nassim Taleb describes as “The Most Intolerant Wins: The Dictatorship of the Small Minority,” which would drastically limit content, or a tyranny of the majority, which would push content on those who consider it inappropriate.
People globally, and even within countries like the United States, vary widely in their preferences for the values and beliefs an AI should hold. They’ll have different expectations for an AI’s conversational style, dialect, writing style, assistance style, and other aspects of its interaction behavior.