AI and the Silo Issue
Tragedy of the Uncommons
[Quick rough draft of speculative potential application of AI to a narrow range of problems. This is less important than other essays on this Substack due to being more abstract and less widely relevant.]
One problem encountered in data science, which impacts the search for training data for AI, is that many types of data are held privately within companies and are not available to the public. For instance, AI systems for chemical discovery and synthesis suffer from a lack of data, since most of it resides in proprietary repositories. While companies view the data they hold as a competitive advantage, they also waste effort testing things others have already tried. The same is true of other types of problem solving, where companies internally explore engineering problems whose solutions they do not share with others.
Even within large companies, there are projects working on similar problems that are unaware of each other. AI systems will eventually be loaded with corporate knowledge and able to better mediate communication between various parts of a company. Eventually, companies might explore giving trusted AI systems from neutral parties access to the corporate AIs of multiple companies, to explore whether there are internal technologies they might effectively license to each other, or projects where they might productively collaborate if they are working on related but distinct aspects of a problem.
Although this risks being considered the sort of collusive activity that regulators are concerned about, it would benefit the public if companies found ways to collaborate efficiently and avoid duplicated effort that slows down progress. Sometimes competition is beneficial, since companies may approach a problem in diverse ways, but at other times society may benefit if they can more quickly find at least one solution by collaborating.
Some problem search spaces, like the search for new pharmaceuticals and useful chemicals, are larger than all the companies in the industry can collectively explore. There may be wasted effort exploring the same parts of a problem space rather than carving out different sections of it for each company. This potential idea will not apply to all such cases, since there may be reasons two companies wish to search the same part of the problem space due to other goals or existing technical capabilities within the companies.
However, there may be some cases where companies would collectively do better if they licensed some of their data to others and carved out which part of the problem space each was exploring so they did not duplicate effort. The problem space of "find a chemical that does X" might be viewed as a resource they are all trying to exploit.
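As a minimal sketch of what "carving out" a problem space might look like (all company and candidate names here are hypothetical), each candidate in a shared search space could be deterministically assigned to exactly one participant, for example by hashing its identifier:

```python
import hashlib

def assign(candidate: str, companies: list[str]) -> str:
    """Deterministically assign a candidate to one company by hashing its identifier.

    Every party computes the same assignment independently, with no
    central coordinator needed once the company list is agreed upon.
    """
    digest = int(hashlib.sha256(candidate.encode()).hexdigest(), 16)
    return companies[digest % len(companies)]

# Hypothetical participants and candidate chemicals
companies = ["AcmePharma", "BetaChem", "GammaLabs"]
for c in ["CHEM-001", "CHEM-002", "CHEM-003"]:
    print(c, "->", assign(c, companies))
```

A hash-based split is just the simplest possible partition; a real agreement would more likely divide the space along lines matching each company's existing expertise, which is exactly the caveat above about firms having reasons to search the same region.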
Many people are familiar with the concept from economics of the tragedy of the commons. According to the concept, if numerous independent individuals enjoy unfettered access to a finite, valuable resource, e.g. a pasture, they will tend to over-use it, and may end up destroying its value altogether.
Wikipedia explains what many are not aware of:
A possible alternative to the tragedy of the commons (shared needs) was described in Elinor Ostrom's book "Governing the Commons: The Evolution of Institutions for Collective Action". Based on her fieldwork, the book demonstrates that there are practical algorithms for the collective use of a limited common resource, which solve the many issues with both government/regulation driven solutions and market-based ones.
[...] It was long unanimously held among economists that natural resources that were collectively used by their users would be over-exploited and destroyed in the long-term. Elinor Ostrom disproved this idea by conducting field studies on how people in small, local communities manage shared natural resources, such as pastures, fishing waters and forests. She showed that when natural resources are jointly used by their users, in time, rules are established for how these are to be cared for and they become used in a way that is both economically and ecologically sustainable.
She won a Nobel prize largely for that work, with her prize lecture titled: "Beyond Markets and States: Polycentric Governance of Complex Economic Systems".
This private coordination process can lead people to make better use of shared resources in a way that benefits everyone. There may be resources now, like problem spaces, that are being privately exploited by many companies without coordination because they are “uncommons” that are not viewed as if they were a “commons”. In some cases that may be the most profitable use of those resources, but in others it may not be. When multiple companies secretly explore a shared problem space, they may be wasting society’s collective resources by exploring that “uncommons” inefficiently due to overlap.
In theory, humans could already find ways to turn those private resources into “commons” coordinated in ways that benefit all parties compared to the current silo approach, using some of the methods people have developed for coordinating common-pool resources.
However, trust issues likely prevent companies from allowing any one person or group to see what is going on inside many companies at once, which would be needed to spot the resources they might pull out and share more efficiently through some other coordination mechanism. There may also be too large a volume of information for humans to search through to find potential resources to share. AI systems might be created that are entrusted with knowledge from multiple companies, able to negotiate between them without giving away details of secrets until absolutely necessary and approved by all parties. Such AIs could erase secret data, so they might be trusted more than humans, who cannot erase their memories.
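To make the "negotiate without giving away details" idea concrete, here is a minimal toy sketch (all names hypothetical) of how a neutral mediator could measure how much two companies' explored candidate sets overlap without either side handing over its raw list: each party submits keyed hashes of its candidates, and only matching hashes are compared.

```python
import hmac
import hashlib

def commitments(candidates: set[str], shared_key: bytes) -> set[str]:
    """Keyed (HMAC-SHA256) hashes of candidate identifiers.

    Without the shared key, the hashes reveal nothing about the
    underlying candidates; with it, equal candidates hash equally,
    so only the *overlap* becomes visible.
    """
    return {hmac.new(shared_key, c.encode(), hashlib.sha256).hexdigest()
            for c in candidates}

# Hypothetical explored-candidate lists (e.g. chemical identifiers)
company_a = {"CHEM-001", "CHEM-002", "CHEM-007"}
company_b = {"CHEM-002", "CHEM-007", "CHEM-042"}

key = b"key agreed by both parties"  # in practice, negotiated securely
overlap = commitments(company_a, key) & commitments(company_b, key)
print(len(overlap))  # count of duplicated candidates: 2
```

This toy version only hides candidates from outsiders and leaks the overlap to anyone holding the key; a real deployment of this idea would use proper private set intersection protocols and would also have to guard against dictionary attacks over small, enumerable candidate spaces. The point is only that cryptographic tools in this family are what an AI mediator could build on.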