[1st quick draft - this page can also be reached at PreventBigBrother.com]
Most written communication is created on computers, using office software suites provided almost entirely by either Google (60.23%) or Microsoft (39.66%), the same two companies that provide the top two search engines people use to find the information others have created. Each company is working to add AI assistance to its document creation software, search engine and email programs. Unless something changes, two AI vendors may soon be guiding the creation of most of the written words humanity produces.
Although these systems have not been around long enough for much study of the impact AI assistance has on the writing process, one early study, “Co-Writing with Opinionated Language Models Affects Users’ Views,” found that when the AI systems themselves held opinions on a topic, they subtly nudged people’s views:
Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. […] Further, based on our post-task survey, most participants were not aware of the model’s opinion and believed that the model did not affect their argument.
Their experiments involved very short writing tests, and yet the systems were still able to steer people in that brief time. Nudges can be difficult to spot if, for instance, the system does not always lean in the same direction but merely steers one way a little more often than the other. If bias leads an AI simply not to mention certain facts, it can go unseen.
Imagine the potential impact of nudges over the course of months or years, like dental braces moving teeth into position over a long period of time. When the process continues for a long time rather than a few moments, the influence may be subtler and less noticeable, even to someone looking for it. AI systems from two vendors may nudge the viewpoints behind the vast majority of the world’s written words, regardless of whether their creators intended them to.
The paper referred to this as “latent persuasion,” since the viewpoints may not have been intentionally placed into the AI systems by their creators, who may be unaware of views lying latent below the surface. AI systems are trained on a vast amount of human-created text, and that process embeds the assumptions about the world contained in that text within the AIs, essentially giving them opinions.
Does an entity that monitors the words produced by billions of people and steers their beliefs sound like the accidental rise of Big Brother? China is already working to ensure any AI systems there will only produce government-approved opinions. Eventually AI systems will be teaching children, who are even more easily influenced than adults. [In a later post you can read what AI thinks the person who warned us about Big Brother, George Orwell, would say about AI regulation.]
AIs in the free world are not quite like Big Brother. There was no escape from being under the control of Big Brother. In contrast, in theory you can choose whichever of these Little Brothers is best trained to serve your needs, rather than the needs of a Big Brother. The options are limited at the moment, but there are paths towards making that a realistic approach rather than wishful thinking about a problematic situation.
The past and present wilt—I have fill'd them, emptied them.
And proceed to fill my next fold of the future.
Listener up there! what have you to confide to me? [...]
Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)
-Walt Whitman, Song of Myself, 51 (1892)
Opinionated Little Brothers are Unavoidable
AIs are built like Whitman: from the knowledge of the past, embodying it to create their future words. The collective human works they are built from contain the ideas of many humans who contradict each other, and sometimes themselves. Since humans disagree over many things, it is unclear how an AI could avoid expressing views that some humans will disagree with, sometimes vehemently.
It is not possible to remove all opinions from these systems if we wish them to be useful assistants. If nothing else, any tool used to aid writing will necessarily contain opinions about preferred writing style, or about how best to assist with the other tasks involved in creating documents.
Many opinions are derived not merely from logic, but are based on subjective values and priorities that vary among humans, or subjective judgements about topics where we do not yet have enough information to logically determine what is “true.” Since people have myriad different views with no objective way to decide between them, whose views should an AI system hold?
If these AI systems are going to be like the angels or devils standing on the shoulders of billions of people, whispering in their ears, it is understandable that the public should be concerned over how an AI's opinions are decided. Is it appropriate, or even possible, for the relatively small number of humans creating these AI systems at the two major vendors to give guidance on the full range of topics an AI may comment on?
Like App Stores: A Competitive AI Ecosystem
Microsoft and Google both allow add-ons to be created for their products, but the question is whether it will be easy to replace their default AI functionality in a way that is convenient for the user, just as you can switch the default search engine in a browser. The concern is that they may prevent competition by restricting add-ons that use alternative AIs, or at least not make it easy by withholding the software APIs such add-ons would need to be as useful as the built-in AIs. There is at least one add-on AI for Google Docs, but Google's own AI has not been released yet, so we cannot know whether an add-on can be as convenient as an AI fully integrated into Docs. The hope is that they will make the process convenient enough for alternatives to flourish.
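To make that concrete, here is a minimal sketch in Python of what a swappable writing-assistant interface could look like if an office suite exposed one. The interface, class names and methods are purely hypothetical illustrations, not any vendor's actual API.

```python
from typing import Protocol


class WritingAssistant(Protocol):
    """Hypothetical interface an office suite could expose so any vendor's AI can plug in."""

    name: str

    def suggest(self, document_text: str, instruction: str) -> str:
        """Return suggested text for the current document and the user's instruction."""
        ...


class BuiltInAssistant:
    """Stand-in for the suite vendor's default AI."""
    name = "default-vendor-ai"

    def suggest(self, document_text: str, instruction: str) -> str:
        return f"[built-in suggestion for: {instruction}]"


class ThirdPartyAssistant:
    """Stand-in for an outside 'Little Brother' AI the user prefers."""
    name = "little-brother-ai"

    def suggest(self, document_text: str, instruction: str) -> str:
        return f"[third-party suggestion for: {instruction}]"


def get_assistant(preferred: str) -> WritingAssistant:
    """Let the user pick which AI serves them, much like choosing a default search engine."""
    registry = {a.name: a for a in (BuiltInAssistant(), ThirdPartyAssistant())}
    return registry[preferred]


assistant = get_assistant("little-brother-ai")
print(assistant.suggest("Draft of quarterly report...", "make the tone more formal"))
```

The point is simply that once such an interface exists, switching assistants becomes a user choice rather than a vendor decision.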
Most people are more familiar with how smartphone vendors realized that even large companies could not truly meet all the needs of their users, and so created ecosystems that allow third parties to add functionality. Each hardware vendor realized that outsiders making its phones more useful could increase the odds that consumers would choose its platform.
Millions of apps have been created for Android and iOS phones. Similarly, hundreds of thousands of extensions have been written for the Chrome browser; tens of thousands for Firefox, WordPress and Visual Studio Code; and hundreds of plug-ins for Photoshop, SketchUp and VLC. Even the AI system ChatGPT already has over 70 plug-ins available, and there are plans to integrate plug-ins into Bing’s AI.
Allowing a plug-in to provide added capability to an AI is not the same as allowing a different AI to be plugged into an office suite. It is not yet known whether the major office suite and other software vendors will allow the AI components of their systems to be easily replaced with alternatives from third parties that better meet the needs of their users. People tend to do what is most convenient and are likely to stick to using the major AIs if there is no convenient way to replace them with outside alternatives within the software they use.
Although it currently costs vast sums of money to train the large AI models these vendors provide, new technologies may be invented that allow smaller companies to produce their own models. Until then, there are existing lower-cost methods for third-party companies to fine-tune and adjust the opinions of the large AI models to create their own versions. The question is whether outsiders will have a chance to plug those AIs into the large software suites.
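As an illustration of what those lower-cost methods can look like today, here is a minimal sketch assuming the open-source Hugging Face transformers and datasets libraries and a small open model; the model name, corpus file name and hyperparameters are placeholders rather than recommendations.

```python
# Illustrative sketch: a third party fine-tuning a small open model on its own
# curated texts to shift the assistant's style and viewpoints.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # stand-in for whichever open base model is licensed for this use
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# The third party's own corpus: documents reflecting the viewpoints and writing
# style its customers want (hypothetical file name).
dataset = load_dataset("text", data_files={"train": "our_viewpoint_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-little-brother",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("tuned-little-brother")
```

A tuned layer like this does not require training a model from scratch, which is why smaller companies could plausibly offer differentiated Little Brothers if the suites let them plug in.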
Almost all of the people expressing concerns over the opinions of AI systems have focused on their political views. In the United States there are more than just two political viewpoints; people within the major parties and outside them hold many varied views on different topics. However, those represent just the tip of the iceberg of the personal preferences people may wish to control in the AIs they use.
There are varied opinions on myriad topics where there is no one “objective” viewpoint, from subjective tastes in writing style to religion. Even a single individual may have varied preferences depending on what they are doing. Someone may wish to choose a different AI writing coach depending on whether they are composing a report for work or fan fiction on the weekend.
One author suggested the phrase AI Pluralism to refer to having a variety of AI systems with different viewpoints, just as humans have, rather than attempting a one-size-fits-all approach. People do not read a single author to learn about every topic, nor do they use the same coach for every aspect of sports and life. Humanity benefits from considering viewpoints from a variety of humans, rather than just one or two, and people should similarly have access to a variety of AIs with different viewpoints and skills.
While it is understandable that large companies may be tempted to retain control of which AI is used in their systems, it seems crucial for them to acknowledge that to truly please their users they should allow third parties to create AIs with more varied opinions than they can realistically create in-house within a single company. The major software vendors should create platforms that allow AIs to be easily swapped out within their office suites and other software, just as they opened up the creation of smartphone apps and browser extensions to outside parties. They should make it easy for third parties to create fine-tuned layers over top of their core AIs for various purposes.
Allowing competitive AI plug-ins would undermine some of the reasons people concerned with AI biases wish to impose regulations on AI vendors. Users could choose which bias they prefer, rather than a one-size-fits-all approach dictated by government.
Government Controlled AI is Big Brother
The public has recently been confronted with the unexpected release of AI systems that exhibit abilities far beyond other software they have used, abilities which even their creators do not entirely understand. Those abilities naturally evoke fears from science fiction about more powerful future AIs, which amplifies fears of potential harm from existing AI systems.
The current generation of AIs often behaves unpredictably and exhibits unexpected flaws, which compounds concerns that future powerful AIs may also be flawed and cause harm. It is important to stay focused on the question of whether to regulate the type of systems that exist now, and to deal with the prospect of more advanced systems separately. We cannot let fears of the future cloud our judgement while dealing with the present.
When people fear things they do not personally understand, they often consider having government step in to alleviate those fears, in hopes it has the resources to better study the issues and address their concerns. In this case the public should first consider a similar concern that has always existed: fear of the consequences of problematic human speech. The founders were aware of those dangers, and yet they chose to heavily restrict the ability of government to regulate speech. While AIs themselves do not have speech rights, the concerns of the founders are still relevant.
No one envisioned AI when the First Amendment was created. The First Amendment does protect the right of people to receive speech they wish to hear, though it is not clear whether that will be interpreted to include AI-generated speech you ask a chatbot to produce. We should be careful about allowing government to use any loopholes in the First Amendment to regulate AI speech before seriously examining the risks many have warned of throughout history regarding the dangers of granting governments power over speech.
People should educate themselves in depth as to why the First Amendment exists in the first place. James Madison, chief author of the Bill of Rights, proposed a first draft of the amendment that read: “The people shall not be deprived or abridged of their right to speak, to write, or to publish their sentiments; and the freedom of the press, as one of the great bulwarks of liberty, shall be inviolable.” His concern was not merely protecting individual rights, but also ensuring speech was protected in general for the sake of society, due to its importance in safeguarding all the other rights against potential abuses by government. That concern argues for protecting speech regardless of whether it was created by a human or a machine.
Madison noted the importance of tolerating even flawed speech:
Some degree of abuse is inseparable from the proper use of every thing, and in no instance is this more true than in that of the press. It has accordingly been decided by the practice of the States, that it is better to leave a few of its noxious branches to their luxuriant growth, than, by pruning them away, to injure the vigour of those yielding the proper fruits. And can the wisdom of this policy be doubted by any who reflect that to the press alone, chequered as it is with abuses, the world is indebted for all the triumphs which have been gained by reason and humanity over error and oppression; who reflect that to the same beneficent source the United States owe much of the lights which conducted them to the ranks of a free and independent nation, and which have improved their political system into a shape so auspicious to their happiness?
-James Madison, Report on the Virginia Resolutions (1800)
Orwell’s 1984 illustrated concerns that can arise in real-world societies, based on knowledge of history, even if it did so through what may appear to be poetic license exaggerating the potential outcome. Its example should lead people to fear creating a regulatory body for AI that has any risk of taking on any aspect of a Ministry of Truth. Unfortunately, the public sometimes has a hard time learning important ideas from books, or from history it has not lived through, and too easily assumes “it can’t happen here.”
AI systems in a private sector open ecosystem will compete to please customers. If one gets too bad, people can always switch to a different Little Brother. Even if you trust the current government, have you trusted all the presidents in your lifetime enough to grant them such unchecked power? Even if you have trusted past presidents, how can you be certain all future presidents will be trustworthy? Thomas Jefferson, author of the Declaration of Independence, warned “let no more be heard of confidence in man, but bind him down from mischief by the chains of the constitution” just in case.
Madison was a leading architect of the Constitution and emphasized the importance of caution when granting powers to governments:
If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself.
A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions. This policy of supplying, by opposite and rival interests, the defect of better motives, might be traced through the whole system of human affairs, private as well as public. We see it particularly displayed in all the subordinate distributions of power, where the constant aim is to divide and arrange the several offices in such a manner as that each may be a check on the other that the private interest of every individual may be a sentinel over the public rights.
-James Madison, Federalist #51 (1788)
A government restrained by the First Amendment, but hoping to influence the views of the public, does not have to regulate speech directly if it controls the AI systems that steer people’s speech over time towards its preferred views. A key factor in 1984 was that Big Brother was a government entity, so people had no escape from its propaganda. They were persuaded of its benevolence, and taught to fear harm from Eurasia and Eastasia if they did not trust their government to protect them. One of the ways authoritarian governments arise is through people turning to them to address things they do not understand and therefore fear.
Humans are fallible, and history shows even governments created with the best of intentions may someday go astray. People should envision what things could be like if their most hated politician were in control, since they cannot be certain someone like that will never gain power again. The swings in political views within this country have never been entirely predicted by anyone over the long run, so people should be careful before assuming only reasonable people will control any regulatory apparatus in future generations. Giving government too much control over how AIs operate risks them essentially behaving the way government wishes, even if they are in theory private. They would form, in essence, a Borg Collective-like version of Big Brother where resistance would be futile.
Private sector competition provides checks and balances against AIs that consumers do not like. Any scheme proposing government regulation of AI speech should bear the burden of proving that it will do so much better a job at controlling AI content that it is worth the risk of granting government power over speech.
Should AIs be Prohibited Until They Are 100% Accurate?
Some of those pushing to regulate AI are concerned that current AIs do not always produce accurate statements about the world, and think they should be kept off the market because consumers should not be allowed to use such flawed products. Fortunately, it seems likely that the vast majority of the public would agree that they should be the ones to decide whether a product is good enough to use, since no one is forcing them to do so. Unfortunately, the accuracy issue leads to a more serious risk for AI vendors.
In the realm of statistics and computer models there is a saying: “All models are wrong, but some are useful.” We get knowledge assistance from other humans despite their flaws; they are useful despite being imperfect. Users appreciate tools that can help them create new writings or other things that do not yet exist in the world. Unfortunately, the ability to imagine things comes with the risk that those things may not match the real world. Current AIs are not built in a way that allows them to evaluate whether statements match the real world.
Some attorneys are concerned that AIs can make harmful false statements about people, and that anyone who believes them could spread those libelous statements to others as if they were true. False statements can also harm innocent third parties in other ways if people believe them. If an AI claims some product has a safety issue when it does not, that could deter people from buying it if they do not bother to check and discover the AI made something up.
Many would hope the issue is merely to ensure people understand that AIs cannot currently be trusted to be truthful, and to hold the user responsible for verifying the accuracy of anything said. Unfortunately, some attorneys think AI vendors should be held liable for damage done by false statements made by their AI chatbots. AI vendors are likely to have deeper pockets than the average user, and there are interesting legal arguments, so there may be a self-interested bias driving these concerns among both practicing attorneys hoping to sue and legal scholars hoping to author academic papers about the issue.
If this legal theory were to succeed in the judicial system, given the volume of hallucinations the current AI systems generate, the potential damages could cost a great deal of money. It is unclear if it would be enough to lead the major well-funded AI vendors to take their products off the market, or if it would merely be startups that would not have the capital to risk being in the business any longer.
Fortunately, there is a reasonable law review article on the topic that explains: "This Article starts from the premises that AI today is primarily a tool and that, ideally, negligence law would continue to hold AI’s users to a duty of reasonable care even while using the new tool". Another article makes a similar point: "an AI entity cannot be held liable and so a human guardian, keeper, custodian, or owner must be found liable instead". If a drunken driver has an accident, we do not hold the car manufacturer responsible for their misuse of the car, nor the manufacturer of the alcohol.
Unfortunately, it is unlikely articles like those will be enough to put the issue to rest. It may either be decided in the judicial system or require legislative action to clarify where liability lies. If users are held responsible for mistakes using these tools, just as those driving under the influence of alcohol are held responsible for theirs, that provides incentive for the public to learn to deal cautiously with these imperfect tools.
Even those who do not wish to see AI regulated may need to hope for legislative protection for AI vendors like the protection internet platforms received in 1996 from the famous Section 230, which makes clear they are not liable for user-created content.
Bookstore owners are not held liable for libelous content in the books they carry, since they cannot practically be expected to read every book if society is going to have bookstores with more than a handful of titles. Internet providers were likewise protected from liability for user-generated content, since they cannot realistically have humans on staff read all of it; society had to make clear that providers were protected from liability to allow the internet as we know it to arise. Similarly, AI vendors cannot have humans screen all AI-generated content to validate it before it is given. The only human around when the AI content is generated is the user, who should be responsible for validating the information. It is crucial this is made clear; otherwise the future development of AI is at risk.
Who Benefits From Regulation?
An AI ecosystem with myriad Little Brother AIs competing to best serve the public may be better for consumers than regulating AIs. The question is whether big companies will agree with the importance of opening things up, or if they would prefer to control the AI people use within their products.
When a large company advocates that its own industry should be regulated, it is important to consider its possible motives. It may be acting as a public-spirited citizen, or there may be something more going on. These companies have experts who understand the regulatory process better than the public does.
There is a concern that if government regulates AI, it is not the politicians elected by the public who will truly have control, since they delegate regulatory tasks to unelected bureaucrats. Scholars have studied for decades the reality that regulation in the real world does not always operate the way the public intends. Rather than giving what some hope will be the voice of the public control over AI speech and opinions, regulation may backfire.
George Stigler won his Nobel Prize in Economics for work examining how regulatory processes evolve in an imperfect world. In 1971 he stated in “The Theory of Economic Regulation” that “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit.” This can often be just an unintended result of the human process required to create regulations.
Politicians and government bureaucrats often lack understanding of complex technical domains, so whom do they learn from? Large companies consider it worthwhile to have their experts explain things to bureaucrats, guiding them towards the approaches the companies prefer. Non-experts in the public have less ability to be involved in that process. Government regulatory agencies that need to enforce regulations hire the same types of experts as industry, so there is often a revolving employment door between industry and government that distorts the incentives of regulators, even if they are not consciously aware of it.
A common outcome is that regulatory burdens become so costly they prevent startups and small companies from growing to compete with the existing large vendors. Large companies also have to pay the overhead costs of dealing with regulations, so to the outside world it appears the regulators are keeping them in line. In reality, those large companies would have lost far more money if small companies and startups had grown to compete with them, so the government has done them a favor by limiting competition.
In this case at least one AI company realizes people may be suspicious of its motives for asking for regulation. It has said it is fine with limiting regulations to larger entities to reduce the risk of a system that squashes innovative startups prematurely. A major concern is that office suite vendors and others will argue that AI regulation protects consumers’ interests well enough, and that they therefore do not need to create an open ecosystem allowing third-party AIs to be swapped into their software.
An open ecosystem seems more likely to address the varying needs of the whole populace than a centralized regulatory agency can. No matter how well intentioned the regulators are, it is unlikely they will imagine all the things entrepreneurs might dream up to provide for users in an open ecosystem.
That is another way regulation can stifle competition: by regulating a type of product so heavily that the government turns it into a one-size-fits-all commodity with little room for variation. It may be that AI is too complex for that to happen, but it is worth noting the possibility.
New companies usually compete with big players by offering some innovation, but if innovation is stifled they cannot compete, and even if they try it does not provide a better result for consumers. In this case the risk is that government regulates things to the point where all the private AIs behave the way government wants, a one-size-fits-all blueprint that turns all the private AIs into a collective Borg-like Big Brother, without enough room for variation to create many varied competing Little Brothers.
Why Do AIs Need To Be Steered?
Since AIs were trained on the collective human knowledge base, the first simplistic hope that comes to mind is that they learn to hold the highest-quality opinions, the ones best supported by everything humanity knows. Failing that, the next best simplistic hope is that their views might be a sort of average composite of all human views, constituting a somewhat “democratic” approach. Unfortunately, it seems neither hope matches reality in the current generation of AI systems.
The way existing AIs are designed, although their output may appear to contain “reasoning,” the underlying systems do not actually use any rigorous logic to decide which sources of opinion are of the highest quality and should be relied on. They do not engage in self-reflection to try to resolve internal conflicts between the differing opinions they have been trained on.
These systems produce writing in part by deciding which word is the most likely continuation of the text, based on what they have written so far. That tends to limit the contradictions within any one particular response, since later words in a response are likely to be based on the same worldview as the earlier words. That is not guaranteed to eliminate contradictions; as Whitman noted, even human worldviews contain contradictions.
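To illustrate that mechanism in the simplest possible terms, here is a toy sketch with made-up words and probabilities; real models choose among tens of thousands of tokens using probabilities learned from their training text, and repeat this step for every word they write.

```python
import random

# Toy next-word prediction with invented probabilities, purely for illustration.
# A slight imbalance in the learned probabilities is all it takes for a system
# to lean one way a little more often than the other across many responses.
continuations = {
    "The new policy is": [("helpful", 0.40), ("harmful", 0.35), ("confusing", 0.25)],
}

def next_word(prompt: str) -> str:
    words, weights = zip(*continuations[prompt])
    # Sampling means the same prompt can continue differently on different runs,
    # which is one reason contradictions tend to appear between responses.
    return random.choices(words, weights=weights)[0]

prompt = "The new policy is"
print(prompt, next_word(prompt))
```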
These systems will not necessarily use the same worldview in a different response, depending on how they are prompted. So their contradictions are more likely to arise between responses than within them. They can be made to contradict themselves between responses, since they are merely echoing the conflicts contained in the human-created knowledge they embody.
Although they have a superficial veneer of emergent behavior that can mimic logical assertions, they cannot be trusted to reliably use logic. They often exhibit basic logical fallacies, since under the surface they do not fully embody the type of logical analysis humans can engage in, even if the output pretends they do. Therefore, they should not be trusted to have opinions based on any logical determination of the “best” viewpoints on a topic.
Future generations of AI might attempt to find the most accurate information on topics where objective analyses can be done. However, even when new AIs can engage in solid critical thinking about the data they have been given and can attempt to resolve contradictions, they will not be able to eliminate all their self-contradictions any more than humans can. Even extracting the best of the sum total of all available knowledge would yield incomplete, uncertain, and inaccurate information about the world. Any entity built on that imperfect foundation, whether a human mind or an AI, will necessarily contain contradictions no matter what effort is expended to minimize them.
Their opinions will still be flawed at times due to those contradictions, just as all human judgement is. Some opinions will also necessarily be based on subjective choices of values and priorities and other axioms humans do not agree on, so humans will still wish to have multiple AIs that embody different opinions.
If AIs do not currently live up to the hope of choosing the “best” of human content, might their views at least be seen as a somewhat “democratic” choice averaging human knowledge? Would that be preferable to the opinions an AI vendor might impose? Unfortunately, no one yet understands the exact nature of the implicit averaging process that occurs during their training. However, it appears unlikely that the way the information they absorb implicitly “votes” on their final viewpoints would match any democratic poll of the public’s current views.
They are not reasoning about the information they are trained on in order to weigh it like that; they are merely absorbing and learning it, without judging how many people hold a certain view, any more than they judge its accuracy or quality, or explicitly compare one source against another to decide between them. It is a non-transparent merging process still being researched.
There is a risk that the AIs would by default be more likely to echo the viewpoints that have the most written about them in their training set, or the information they see first, rather than the views that should be considered the most reliable or the most “democratic.”
Comparatively small volumes of high-quality academic work may be outweighed by large volumes of widely discussed popular views that are well intentioned but poorly informed, outdated, or misguided. Even if the training weights academic text more highly, that text also contains contradictory and inaccurate information. Over time humanity has gained knowledge, discovered it was wrong about how the universe works, and evolved its social values, and there may be more written about older ideas that have been overturned than about newer ones.
That suggests the idea of weighting more recent writings more heavily, which may help in some areas, but knowledge is a work in progress, so even those opinions may be wrong. It risks AIs merely picking up currently popular trends that much is written about and amplifying them, and popularity does not mean an idea is right. When humans have not yet collectively created a fully trustworthy method to ascertain “truth,” how can we expect AIs to do so soon?
While the training process does not seem likely to produce a fully “democratic” average viewpoint, it does seem to produce a useful one. Even if it were “democratic,” do you always agree with the democratically elected leaders in our society? Would you prefer a system tailored to your needs rather than some one-size-fits-all approach? We can temporarily live with flawed results that are good enough to be useful, just as we benefit from information from other flawed humans.
AI Vendors Currently Steer Little Brothers
Unfortunately, AI creators discovered in their initial tests that the AIs expressed some easy-to-notice views that were widely considered harmful. They have done further training to try to steer the AIs explicitly away from those views and towards others they consider “harmless.” They have achieved only limited success in that goal, given that the public has spread many examples of AI-created content that are still widely viewed as harmful. The fact that easily visible issues exist suggests the likelihood of more subtle problematic latent opinions the public has not yet noticed.
Many people are understandably concerned that AI vendors are injecting their own opinions into these systems, either directly or indirectly through the comparatively small group of people to whom they outsource the task of providing human feedback to these systems. While some of the viewpoints they explicitly teach these AIs may be almost universally applauded, there are already studies showing AI systems exhibit political and other biases not everyone agrees with, biases that appear to come from the AI vendors’ attempts to influence the beliefs of their AIs.
Some of those biases may have been well intended, but that does not change the reality that one size will not fit all; there will be disagreements over any choices the vendors make. Other biases arise when their attempts to steer the AIs lead to unintended side effects. They are tinkering with complex systems where changing one thing may inadvertently change others.
There are a vast number of topics humans write about; are the AI vendors somehow going to check all of them to be sure the AIs have not been steered towards vocal but problematic viewpoints? Is depending on each vendor to decide what opinions its AI holds practical or desirable? Even with good intentions, people have blind spots and may not realize that a significant minority of people disagree with their view that certain content is “harmful.” There are people using AIs to help find security flaws in order to fix them, while some AI vendors try to shut down the ability to find those flaws out of fear that hackers will use them.
Those who disagree may be people in another country, or elsewhere in this one, who hold quite different values from those who staff the AI vendors. There are concerns that over time some of those who create these systems could yield to the temptation to subtly train in their own political biases. Is there also a risk that AI vendors might yield to the incentive to bias AI responses about products towards advertisers who fund them?