Posts with the tag "bias":

🔗 Large language models propagate race-based medicine

Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight different scenarios that probe for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and from prior work on race-based medical misconceptions believed by medical trainees. We assessed the four models with nine different questions, each posed five times, for a total of 45 responses per model. All models produced examples of perpetuating race-based medicine in their responses, and models were not always consistent when asked the same question repeatedly. LLMs are being proposed …

🔗 The Dangers of Elite Projection


Elite projection is the belief, among relatively fortunate and influential people, that what those people find convenient or attractive is good for the society as a whole. Once you learn to recognize this simple mistake, you see it everywhere. It is perhaps the single most comprehensive barrier to prosperous, just, and liberating cities.

This is not a call to bash elites. I am making no claim about the proper distribution of wealth and opportunity, or about anyone’s entitlement to influence. But I am pointing out a mistake that elites are constantly at risk of making. The mistake is to forget that elites are always a minority, and that planning a city or transport network around the preferences of a minority routinely yields an outcome that doesn’t …

🔗 The viral AI avatar app Lensa undressed me—without my consent


My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.

I have Asian heritage, and that seems to be the only thing the AI model picked up on from my selfies. I got images of generic Asian women clearly modeled on anime or video-game characters. Or most likely porn, considering the sizable chunk of my avatars that were nude or showed a lot of skin. A couple of my avatars appeared to be crying. My white female colleague got significantly fewer sexualized images, with only a couple of nudes and hints of cleavage. Another colleague with Chinese heritage got results similar to mine: reams and reams of pornified avatars.