OpenAI Risks Global Culture & Holy War
[1st quick draft - this page can also be reached at RainbowOfAI.com ]
On May 21st, 2023 this author's page asked:
Is it appropriate, or even possible, for the relatively small number of humans creating these AI systems at the two major vendors to give guidance on the full range of topics an AI may comment on?
Coincidentally, only a few days later, on May 25th, OpenAI seemed to acknowledge these concerns and put out a call for proposals to explore the issue of “Democratic Inputs to AI”:
How should disputed views be represented in AI outputs? [...] No single individual, company, or even country should dictate these decisions. [...] The primary objective of this grant is to foster innovation in processes – we need improved democratic methods to govern AI behavior.
They argue that like humans, AI requires "more intricate and adaptive guidelines for its conduct". Every culture has implicit guidelines for how people are expected to behave and communicate in certain contexts. People usually speak differently with family at home than they do with colleagues in a business meeting, or friends at a party. Guidelines vary by context for factors such as the propriety of expressing views on controversial topics like politics or religion.
There is no global agreement on how humans should behave. Different countries, ethnic groups and subcultures within countries have evolved different expectations. There is no "one size fits all" approach that will work since these guidelines sometimes contradict each other: what is expected in one culture may be frowned upon in another. When people travel, they are advised to learn local customs to ease their journey. Every human interaction involves a mixture of individuals in a certain location, and expectations from each participant collectively merge to "dictate these decisions" regarding expected behavior and the level of tolerance for those who deviate from expectations.
That raises questions regarding the interpretation of OpenAI's statement that "No single individual, company, or even country should dictate these decisions." The wording of that is ambiguous, but it might be interpreted by some as suggesting a global collective democratic process to choose one set of rules to "govern" AI regardless of who it is interacting with, despite there being nothing like that for humans.
Guidelines for human behavior are always dictated by a combination of the "single" entities relevant to the situation. Imagine trying to get global agreement through some democratic process on universal customs of appropriate human behavior to be followed whenever people interact. There are already culture wars within the United States, as there are in many countries, so consider what it would be like to try to globally agree on these issues.
While Americans are struggling to collectively decide what schools should teach their children on various topics, it seems questionable to expect them to reach a productive consensus on what moral guidance AIs should be given. Globally, some religions consider certain topics or images absolutely forbidden or highly offensive, while other cultures consider discussing such things an important part of education.
It is not realistic to expect a democratic process to quickly come up with a universal, one-size-fits-all AI response to major controversies people have fought literal wars over. It does not seem productive either to provide everyone with what the majority wants and offend large minorities of the populace on various topics, or to limit AI functionality so severely that it offends no one, producing what “Black Swan” author Nassim Taleb describes as “The Most Intolerant Wins: The Dictatorship of the Small Minority.”
Diversity, Not Majority Colonization
It is useful to envision AIs functioning like friendly travelers who follow local customs rather than exhibiting the stereotype of the "ugly American" traveler who expects everyone to cater to theirs. While traveling, you would likely expect the best tour guide to be someone native to the area and expert in local customs. Should an AI used in France have its cultural interactions defined by democratic input from people in China with a different culture, or vice versa? Hopefully OpenAI's reference to "adaptive guidelines for its behavior" indicates an openness to the idea that AIs conform to each cultural context, rather than some lowest-common-denominator, democratically determined universal AI behavior.
Rather than conceiving of the goal as defining a global democratic process to mold a single universal AI, it seems more constructive to imagine it as a framework usable to craft many AIs, each well adapted to the specific context where they will be used. We do not expect all humans to think alike or to know how to function in every culture, so it seems natural to allow for the creation of a similarly diverse rainbow of AIs with varying perspectives. The democratic input process may provide the ability to train generalized AI chameleons that can fully adapt their behavior to any context, but it should also allow for training many individual culturally specific AIs.
AI technology is still evolving, and these democratic processes do not yet exist. We do not know yet how many different specially focused AIs might be feasibly trained, or how many culturally specific personas a single fully adaptive chameleon AI can usefully embody. Therefore, any democratic process should be designed to evolve to operate at different scales, whether it is used to train one AI persona that caters to the whole United States or AIs that handle 300, 3000 or even 300 million different subcultures within the US.
Teaching AI vs. Teaching Children
The OpenAI document refers to the question of how much user preferences should guide how AIs behave. Even if someone ideally wishes AIs to be completely adherent to individual preferences, there is a practical concern regarding what is required to fully instruct a complex entity in how to interact. Parents spend years teaching their children how to behave. Lessons are taught whenever a situation arises in the world that presents an opportunity to nudge the child towards how they are expected to behave.
Unlike the existing major AIs that have memory limited to their current session, future AIs may also learn from nudges a user gives them regarding their preferred behavior. However, while people patiently wait many years for children to learn enough to assist in adult tasks in a workplace, humans want AI systems to be useful from the start. Even if an AI does not yet know all of a user's personal preferences, consumers will be more satisfied if the AI by default appears to respect relevant cultural guidelines for how to communicate.
Fortunately, AIs are trained on a vast collection of humanity's writings on all aspects of our society, so they do not need to start from scratch learning about how to function in the world the way babies do. AI vendors have collected examples of the types of things they would like an AI to be able to do in specific contexts. Those examples guide the AI towards how best to interact with humans to make use of that larger body of knowledge they contain.
The problem is that even once an AI understands its task generally, people will have different preferences for how the AI should communicate and what its opinions should be when asked about varied topics. For example, there are cultures labeled "woke" where a certain way to express a concept would be considered offensive, whereas other subcultures would find a statement phrased in a "woke" manner offensive, or at least undesirable.
A parent may prefer an AI their children use to share their same religious and political opinions by default. Parents will have differing views of what topics are appropriate for AIs to discuss with children of various ages. Countries will have different general guidelines for what topics are inappropriate even for adults to discuss, regardless of whether we may agree with them.
Democracy As Cultural Guidance
Teaching AIs about specific subcultures is where crowdsourcing via a democratic process may be of use. Rather than expecting a user to spend years teaching the AI like a child about their subculture, the work could be crowdsourced using little bits of time from vast numbers of people using methods like the democratic process OpenAI is exploring.
People can give a little bit of guidance as to what they think is appropriate in their country and subculture, either directly through the inputs gathered to train an AI, or later while using an AI by nudging it when they think the AI is off track. The initial result may not be exactly what any individual user from a subculture may wish, but it would provide a default starting point likely closer to their preferences than an AI without specific cultural guidance.
The underlying knowledge these AIs were trained on contains a great deal of information about these various cultures, so research will have to determine how much additional guidance is needed to steer the AIs towards responding based on the relevant worldview; it may not take much. A specially constructed training-assistant AI might be created to gather current data about the characteristics of a particular group and generate training material for an initial fine-tuning, guiding an AI towards using its built-in cultural knowledge to adapt to that subculture. The resulting subculture-specific AI would then be further fine-tuned using this proposed democratic crowdsourcing process.
Subcultures: Hierarchies and Mixing
Just as the United States has geographically nested governments, there may be nested layers of democratic processes to guide AIs. People across the US will agree on many preferences, but there are subcultures who may prefer an AI more specifically tailored to their worldview, for example perhaps a southern rural preference group and a northern urban one.
There may be similarly nested hierarchies to deal with the opinions of varied religious groups: for example, one for Christians in general that contains nested groups for different denominations, and similar hierarchies for other faiths and even for non-believers, where agnostics will differ from atheists on some topics. Some religious, professional, political, and academic groups may create their own process for what they consider authoritative guidance regarding default opinions within their niche, with of course subgroups of those niches free to differ.
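As a loose illustration of the nesting idea (the group names, topics, and stances below are placeholders invented for this sketch, not real data), such preference groups could be modeled as a tree in which each subgroup inherits its parent's default guidelines and overrides only the topics where it differs:

```python
# Hypothetical sketch: nested preference groups as a tree. Each subgroup
# inherits its parent's defaults and overrides only where it differs.

class PreferenceGroup:
    def __init__(self, name, overrides=None, parent=None):
        self.name = name
        self.overrides = overrides or {}  # topic -> default stance
        self.parent = parent

    def default_stance(self, topic):
        """Walk up the hierarchy until some group states a default."""
        if topic in self.overrides:
            return self.overrides[topic]
        if self.parent is not None:
            return self.parent.default_stance(topic)
        return "no default; present multiple views"

# Illustrative hierarchy (names and stances are placeholders).
root = PreferenceGroup("national general public")
faith = PreferenceGroup("faith group", {"topic_a": "stance_x"}, parent=root)
denomination = PreferenceGroup("denomination", {"topic_b": "stance_y"}, parent=faith)

print(denomination.default_stance("topic_a"))   # inherited from the faith group
print(denomination.default_stance("topic_c"))   # falls through to the root default
```

The design choice this sketches is that a subgroup only needs to express its points of divergence, which matches the essay's point that niches remain free to differ while still sharing broad defaults.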
People will have preferences regarding the functionality of these AI systems, like the type of coaching they provide, not merely guidance for how they communicate and what their default opinions should be. The same methods used to collect user input for guiding an AI's default opinions and culture might apply to that type of preference as well.
Users may belong to multiple subcultures and ideally should be able to use an AI trained on a combination of those separate facets of their total worldview. Some of the groups a user generally identifies with may hold views that conflict with each other. It's likely no AI system will perfectly handle that, since the way such conflicts are resolved, or ignored, varies by person. Commonly known sources of conflict between subcultures a user might identify with may be specifically addressed. Consider the case of a Catholic Democrat.
A group that guides an AI on Catholicism might hold a default democratically chosen pro-life view, which contradicts the default pro-choice view of a group that guides an AI on Democratic Party preferences. One method to address common conflicts is to create subgroups that mix together two different subcultures and embody how they are resolved. A group that represents the views of Catholic Democrats may subdivide into two subgroups: one of those who consider themselves Catholics but hold atypical pro-choice views, and another of those who consider themselves Democrats but hold atypical pro-life views. Unfortunately, no system will be perfect; just as humans contain contradictions, these systems will too.
Some of these subcultures used to guide how AIs behave will be based on aspects of a user's identity they are fully aware of, like any political party or religious groups they belong to. A user may not wish to bother trying to inform an AI of all the particular subcultures they belong to before they start using it. AIs may be trained to pick up clues that they aren't the best fit for a user and then defer to a more appropriately adapted AI, or persona within the same AI. This process may be necessary since more advanced AI training processes might be based on subcultures people don't even realize they belong to.
Social media platforms and companies that do data mining for marketing already find ways to cluster together people with similar characteristics for which there may be no defined label. The process of gathering democratic input to train AIs might similarly discover such clusters and use them to create AIs, or internal personas, tailored to those unnamed groups. Since people have no way to know they belong to these clusters, the AI systems will have to spot clues from a user to appropriately characterize which AI best matches their needs.
Layers of Guidance From Different Sources
Current methods of training an AI to avoid expressing what vendors consider problematic content are a work in progress, with many publicly reported examples of flaws. If any jurisdictions regulate AI to place absolute restrictions on the provision of certain types of information, it may be necessary to have all output from an AI go through an explicit separate censor process to filter it using some combination of conventional software and AI to recognize items of concern. The public should be given access to the details of that layer. It is in the interest of companies to make the public fully aware of which AI restrictions are government imposed rather than vendor defined.
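A minimal sketch of what such a transparent, layered filter might look like follows. The layer names and the keyword rules are invented for illustration; a real system would use far more sophisticated classifiers, possibly AI-based ones, in place of these placeholder checks:

```python
# Hypothetical sketch: AI output passes through explicitly labeled filter
# layers, so a user can see whether a restriction was government-imposed
# or vendor-defined rather than having the two blurred together.

def government_layer(text):
    # Placeholder standing in for a legally mandated restriction.
    return "restricted-term" in text

def vendor_layer(text):
    # Placeholder standing in for a vendor reputation policy.
    return "brand-unsafe-term" in text

LAYERS = [("government", government_layer), ("vendor", vendor_layer)]

def filter_output(text):
    """Return the text unchanged, or a notice naming the blocking layer."""
    for source, check in LAYERS:
        if check(text):
            return f"[blocked by {source} layer]"
    return text

print(filter_output("an ordinary answer"))     # passes every layer unchanged
print(filter_output("restricted-term here"))   # blocked, with source disclosed
```

The point of the structure is the disclosure: whichever layer rejects the output is named in the result, which is the transparency the essay argues companies should offer.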
Corporations will have their own preferences for how these AI systems interact with users, since they will be concerned about how it reflects on their reputation. Companies train human customer service representatives to ensure users have a good impression of the company, and will wish to ensure their AI systems exhibit certain customer-friendly attributes. The exact methods they use to do this might be proprietary. However, it seems fair for vendors to be upfront about how they steer their AIs, so the public can know whether a concern stems from the training for a specific subculture, from a government requirement, or from the vendor itself.
Guidance, Not Governance
The OpenAI document uses words like "govern" and "governance". While those words can be used in the context of purely private processes, it seems appropriate to consider switching to other terms to make clear the distinction between official governmental controls and private guidance.
AI systems are tools guided by humans to create speech, even if they are more sophisticated than pens or basic editing software. Any absolute restriction placed on the speech a tool can create impedes a user who wishes to produce that speech, even if they are still able to do so using other means. It seems important to question whether it's appropriate for any private entity or process to be making decisions regarding what speech to inhibit on a broad scale, rather than deferring to governments created to make such collective choices.
In the United States freedom of speech is protected by the First Amendment. Although private entities are not bound by the First Amendment, the prevent big brother page explores reasons to consider applying its spirit to AI generated text. The founders understood emotions could lead the public to wish to use the force of law to silence those they disagree with, and felt it important to guard against a tyranny of the majority. The same problem can arise in any privately run democratic process, so perhaps it should incorporate something equivalent to the First Amendment.
There doesn't currently exist any large scale democratic private process for making decisions regarding what speech is allowable for humans or machines. Those concerned with protecting the general culture of freedom of speech should consider that if such a framework is established to restrict machine generated speech, big tech might eventually copy it as a method to decide what human speech to filter. Perhaps not, but it seems useful to avoid falling into the trap of rationalizing an approach that may empower tyranny of the majority limiting any speech, AI or human.