Two senators have expressed concerns about AI's interactions with kids.
Sen. Rick Scott, R-Fla., introduced the Artificial Intelligence Shield for Kids (ASK) Act, and told Fox News Digital in an interview that he's already winning support for the bill from Senate colleagues as well as American parents.
https://www.nextgov.com/emerging-tech/2023/03/alarming-content-ai-chatbots-raises-child-safety-concerns-senator-says/384251/
In a letter to the CEOs of five tech companies, Sen. Michael Bennet, D-Colo., criticized using kids and teenagers in the “social experiment” of generative AI testing.
The issue of tailoring AI to subcultures relates to the evolved cultural norms humans adopt that aren't laws, as discussed on this podcast:
https://www.econtalk.org/michael-munger-on-obedience-to-the-unenforceable/
Michael Munger on Obedience to the Unenforceable
Jun 19 2023
Civilization and the pleasantness of everyday life depend on unwritten rules. Early in the 20th century, an English mathematician and government official, Lord Moulton, described complying with these rules as "obedience to the unenforceable"--the area of personal choice that falls between illegal acts and complete freedom.
That refers to this article:
https://www.theatlantic.com/magazine/archive/1942/07/law-and-manners/654181/
Law and Manners
By Lord Moulton
A study showing that, unfortunately, people who hold different ideologies often fail to truly grasp the views of those with other worldviews:
The Ideological Turing Test: a behavioural measure of open-mindedness and perspective-taking
Abstract
Truly understanding the position of ideological opponents is challenging, yet crucial if our goals are to avoid escalation or further polarisation, identify areas of agreement, and ultimately reduce misunderstanding. We operationalise the idea of an ‘Ideological Turing Test’, as a behavioural measure of the extent to which people are able to accurately represent the position of their ideological opponents.
…On the whole, participants from both sides, across all topics, were equally “bad” at passing the relative criteria, however there was variation in the pass-rate between topics. Only around 54% could pass within the topic of Covid-19 vaccinations, whereas around 71% passed in the topic of veganism, with Brexit achieving around a 64% pass rate for both sides. When accounting for variation within and between arguers, raters and arguments, we found no evidence that either side was more likely to ‘pass’ the test within each topic
A study suggesting LLMs can aid communication between people who hold diverse viewpoints, with the risk that the LLM itself may be biased:
https://threadreaderapp.com/thread/1671950647393124352.html
We find evidence that LMs have promising potential to help human facilitators and moderators synthesize the outcomes of online digital town halls—a role that requires significant expertise in quantitative & qualitative data analysis, the topic of debate, and writing skills. […]
For example, when we prompt a model to vote on key issues, it tends to align with certain opinion groups more than others. As a result, model-based ideological biases (which human facilitators and moderators may also have) must be carefully measured and considered.
A substack post on:
Why Pluralism (Really) Matters
Not everyone's landscape is the same
…the point of my irritation above is the really obnoxious habit all humans have of assuming that everyone who does things differently than them does these things differently because they’re wrong or misunderstand something fundamental.
...What people like Teuter and all “men of systems,” to use Adam Smith’s phrase, get wrong about human beings is that there exists real and meaningful and legitimate variation precisely because humans occupy different parts of the moral and political landscape at different times.
Many are concerned about unacknowledged indoctrination in schools and will eventually raise similar concerns about AI: