detfalskested

Raw material for AIs is collected under highly objectionable conditions

In consumer electronics, one of the big problems most of us choose to ignore is how some of the rare minerals and metals needed to manufacture our gadgets are dug out of mines under wretched, near-slavery conditions for the workers.

At the moment, one new model after another in artificial intelligence (AI) is on a triumphant march, each more impressive than the last within its field. But despite the impressive results, these AIs also come with plenty of problems: it turns out they are racist and homophobic, or simply have no idea what they are talking about, if you dig a little deeper into the value and truthfulness of their output.

In the attempt to solve those problems, a new problem has been created, at least as serious, and reminiscent of the one in electronics production: the raw material the AIs are built on is procured through underpaid labor, where workers, under pressured conditions, are exposed to deeply traumatizing text and images.

Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work.

...

Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI.

...

Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery), and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document.

...

But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”

Read the whole disturbing story at Time.