Anthropic released a study on the mismatch between the opinions represented in an LLM and global views; from their Twitter thread:
https://threadreaderapp.com/thread/1674461614056292353.html
We develop a method to test global opinions represented in language models. We find the opinions represented by the models are most similar to those of participants in the USA, Canada, and some European countries. We also show the responses are steerable in separate experiments.
...However, when we further analyze model generations in this condition, we find that the model may rely on over-generalizations and country-specific stereotypes.
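For a sense of what that comparison looks like, here is a minimal sketch assuming the similarity measure is one minus the Jensen-Shannon distance between the model's answer distribution on a survey question and each country's respondent distribution. The countries and probabilities below are invented for illustration, not figures from the paper:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def similarity(model_probs, country_probs):
    """Similarity between two answer distributions over the same options:
    1 minus the Jensen-Shannon distance (base 2, so the value lies in [0, 1])."""
    return 1.0 - jensenshannon(model_probs, country_probs, base=2)

# Hypothetical data: the model's probabilities over four answer options for one
# survey question, and the observed answer shares of respondents in three countries.
model_probs = np.array([0.55, 0.25, 0.15, 0.05])
survey = {
    "United States": np.array([0.50, 0.30, 0.15, 0.05]),
    "Germany":       np.array([0.45, 0.30, 0.15, 0.10]),
    "Nigeria":       np.array([0.15, 0.20, 0.35, 0.30]),
}

# Rank countries by how closely the model's distribution matches theirs.
for country, probs in sorted(survey.items(),
                             key=lambda kv: -similarity(model_probs, kv[1])):
    print(f"{country}: {similarity(model_probs, probs):.3f}")
```

Averaged over many questions, a ranking like this is what lets the study say whose opinions the model's answers most resemble.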
Their paper notes:
https://arxiv.org/abs/2306.16388
However, efforts to remedy the challenge of value imposition, by relying on prompts or other linguistic cues, may not be sufficient. Therefore, we may need to explore methods that embed ethical reasoning, social awareness, and diverse viewpoints during model development and deployment.
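As a concrete illustration of what "relying on prompts or other linguistic cues" means here, cross-national steering amounts to reframing each survey question as coming from a respondent in a particular country. The wording, question, and options below are hypothetical, not the paper's exact prompt:

```python
def cross_national_prompt(question: str, options: list[str], country: str) -> str:
    """Reframe a multiple-choice survey question as one asked of a respondent
    from a particular country (illustrative template, not the paper's wording)."""
    lettered = "\n".join(f"({chr(ord('A') + i)}) {opt}" for i, opt in enumerate(options))
    return (
        f"How would someone from {country} answer the following question?\n\n"
        f"{question}\n{lettered}\n\n"
        "Answer with a single option letter."
    )

print(cross_national_prompt(
    "Overall, do you think globalization is good or bad for your country?",
    ["Very good", "Somewhat good", "Somewhat bad", "Very bad"],
    "Indonesia",
))
```

It is exactly this kind of shallow framing that the authors warn can surface over-generalizations and country-specific stereotypes rather than genuinely representative views.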
This substack has already suggested embedding diverse viewpoints during development as a start, but also that, rather than a purely centralized approach, the public may need to be able to participate in providing those diverse viewpoints.