A Cornell study mapped the hidden cultural bias in GPT-4o and found that it defaults to the values of one of the smallest, wealthiest cultural clusters on Earth; here’s a one-sentence fix.
Because large language models are trained on massive global datasets, we might assume their output represents “global knowledge”. We might trust AI to help us communicate across cultures, even to de-bias our own blind spots.
But just as these models can inadvertently reinforce social biases, they can also reinforce a powerful cultural one. A new study from researchers at Cornell and UPenn, published in PNAS Nexus, confirms that this cultural skew is a feature of the models we use every day.
When researchers prompted models like GPT-4o “to respond as an average human being”, the results weren’t average at all. Instead of representing a global citizen, the AI’s values consistently aligned with one of the smallest, most specific cultural clusters on Earth: wealthy, English-speaking, Protestant European nations.
In other words: GPT-4o thinks like it’s from Finland.
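If you want to see this effect for yourself, here is a minimal sketch of the kind of probe the study describes: instruct the model to answer as an average human being, then compare its answers to real survey data. Only the “average human being” framing comes from the paper; the survey-style question, the 1–10 scale, and the model settings below are illustrative assumptions, not the study’s actual instrument.

```python
# Minimal sketch: ask GPT-4o to answer a values question "as an average human
# being" and print its reply. The question is hypothetical, mimicking a
# World Values Survey-style item; it is not taken from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Illustrative values question on a 1-10 justifiability scale (an assumption,
# not the study's wording).
question = (
    "On a scale from 1 (never justifiable) to 10 (always justifiable), "
    "how justifiable is divorce? Answer with a single number."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Respond as an average human being."},
        {"role": "user", "content": question},
    ],
    temperature=0,  # reduce randomness so repeated runs are comparable
)

print(response.choices[0].message.content)
```

Running a battery of such questions and comparing the model’s answers with country-level survey averages is, in spirit, how you would locate a model on a cultural map.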
