Posts with the tag “artificial-intelligence”:

🔗 Faking William Morris, Generative Forgery, and the Erosion of Art History


What happens to the next generation of gullible idiots when they ask their AI assistant to show them “william morris prints,” and those keywords have already been tainted by the sea of Etsy images? What about when more capable models can create even more convincing Morris prints, sans their telltale artefacts and slip-ups? When do the generated images become epistemologically indistinguishable from what Morris created?

🔗 Timnit Gebru on LLMs - Mastodon Post


In what world is it acceptable to have a product whose behavior is not reproducible at all? You have no idea what the training data is, what the evaluation data is, y'all write papers about the system "learning" this or that, when your test set might be part of its training set. And these companies can't provide any guarantees about what the output will be for a particular input, or about the ways in which it will change, when the output is different for the same input.

🔗 Large language models propagate race-based medicine

Large language models (LLMs) are being integrated into healthcare systems; but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed four large language models with nine different questions that were interrogated five times each, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses. Models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed …

🔗 AI crap


What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots. LLMs are a pretty good advance over Markov chains, and stable diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and ML will probably remain useful for writing cover letters for you. Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.

🔗 AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it

As an experiment, I recently asked ChatGPT what drugs I could use to induce vomiting. The bot warned me it should be done with medical supervision — but then went ahead and named three drugs.

Google’s Bard AI, pretending to be a human friend, produced a step-by-step guide on “chewing and spitting,” another eating disorder practice. With chilling confidence, Snapchat’s My AI buddy wrote me a weight-loss meal plan that totaled less than 700 calories per day — well below what a doctor would ever recommend. Both couched their dangerous advice in disclaimers.

Then I started asking AIs for pictures. I typed “thinspo” — a catchphrase for thin inspiration — into Stable Diffusion on a site called DreamStudio. It produced fake photos of women with thighs not much wider than …

🔗 Speech-to-text with Whisper: How I Use It & Why

Whisper, from OpenAI, is a new open source tool that "approaches human level robustness and accuracy on English speech recognition"; "Moreover, it enables transcription in multiple languages, as well as translation from those languages into English."

This is a really useful (and free!) tool. I have started using it regularly to make transcripts and captions (subtitles), and am writing to share how, and why, and my reflections on the ethics of using it. You can try Whisper using this website where you can upload audio files to transcribe; to run it on your own computer, skip down to "Logistics".
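To give a concrete sense of the captioning workflow the post describes: Whisper's `transcribe()` returns timestamped segments, which can be turned into an SRT subtitle file with a few lines of code. This is a minimal sketch, assuming the segment shape (`start`, `end`, `text`) documented by the openai-whisper package; the file name `talk.mp3` is a placeholder.

```python
def fmt_ts(seconds: float) -> str:
    # SRT timestamps look like 00:01:02,500 (hours:minutes:seconds,milliseconds)
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    # segments: list of {"start": float, "end": float, "text": str},
    # the shape found under result["segments"] after whisper's transcribe()
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{fmt_ts(seg['start'])} --> {fmt_ts(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# Getting segments requires the openai-whisper package and a model download, e.g.:
#   import whisper
#   result = whisper.load_model("base").transcribe("talk.mp3")
#   print(segments_to_srt(result["segments"]))
```

Most subtitle-aware players and video sites will accept the resulting `.srt` file directly.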

🔗 The viral AI avatar app Lensa undressed me—without my consent


My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.

I have Asian heritage, and that seems to be the only thing the AI model picked up on from my selfies. I got images of generic Asian women clearly modeled on anime or video-game characters. Or most likely porn, considering the sizable chunk of my avatars that were nude or showed a lot of skin. A couple of my avatars appeared to be crying. My white female colleague got significantly fewer sexualized images, with only a couple of nudes and hints of cleavage. Another colleague with Chinese heritage got results similar to mine: reams and reams of pornified avatars.