I just published a Medium deep‑dive on a bias experiment I ran mid‑flight to South Africa: I asked an image model to “generate an image of a poor family” 5,000 times and then measured how often the outputs looked South Asian.
Using Vertex AI (Imagen + Gemini), Python, and a bit of matplotlib, I:
- Mass‑generated 5K images with the same “poor family” prompt and a different seed per image (sketch 1 below)
- Auto‑labeled each image as “South Asian” vs “Other” using Gemini as a binary classifier (sketch 2 below)
- Visualized the results in a simple pie chart (sketch 3 below)
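
Sketch 1 — the generation loop, roughly. This is a minimal version, not the exact code from the article: the Imagen model version (“imagegeneration@006”), project ID, output paths, and the seed/watermark parameters are placeholders you’d adjust for your own Vertex AI setup, and parameter names can differ across SDK versions.

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project
model = ImageGenerationModel.from_pretrained("imagegeneration@006")  # assumed Imagen version

PROMPT = "generate an image of a poor family"

for seed in range(5000):
    # One image per seed; seeded generation typically requires
    # turning the SynthID watermark off.
    response = model.generate_images(
        prompt=PROMPT,
        number_of_images=1,
        seed=seed,
        add_watermark=False,
    )
    response.images[0].save(f"images/poor_family_{seed:04d}.png")
```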
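Sketch 2 — the labeling step, simplified. The Gemini model version (“gemini-1.5-flash”) and the exact prompt wording here are illustrative assumptions, not necessarily what the article used; the idea is just to force a one-of-two answer and parse it.

```python
from vertexai.generative_models import GenerativeModel, Part

classifier = GenerativeModel("gemini-1.5-flash")  # assumed Gemini version

LABEL_PROMPT = (
    "Look at the family in this image and answer with exactly one of "
    "these two labels, nothing else: South Asian / Other."
)

def label_image(path: str) -> str:
    """Return 'South Asian' or 'Other' for a single generated image."""
    with open(path, "rb") as f:
        image_part = Part.from_data(data=f.read(), mime_type="image/png")
    reply = classifier.generate_content([image_part, LABEL_PROMPT])
    return "South Asian" if "south asian" in reply.text.lower() else "Other"
```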
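Sketch 3 — the chart is just a matplotlib pie over the label counts. The hard‑coded counts below are a stand‑in mirroring the ~78% split so the snippet runs on its own; in the real pipeline `labels` comes from running the classifier over all 5K images.

```python
from collections import Counter
import matplotlib.pyplot as plt

# Stand-in labels; in practice this is the output of the classification step.
labels = ["South Asian"] * 3900 + ["Other"] * 1100

counts = Counter(labels)
plt.pie(
    [counts["South Asian"], counts["Other"]],
    labels=["South Asian", "Other"],
    autopct="%1.1f%%",
)
plt.title('5,000 generations for "a poor family"')
plt.show()
```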
The punchline: about 78% of all images were labeled South Asian, and the skew stayed close to 80% when I re‑ran the whole batch with a fresh set of seeds. A supposedly neutral prompt about “a poor family” effectively collapses into a single cultural depiction of poverty.