AI and harm

This content originally appeared on Brad Frost and was authored by Brad Frost

Every time I read about how AI is being applied in the world, this graph immediately pops up in my head:

[Figure: a graph with the x-axis labeled “potential for harm” and the y-axis labeled “caution, safeguards, restraint”. A 45-degree arrow line denotes a proportional increase in caution as the potential for harm increases.]

As the potential to do harm increases, there must be a corresponding increase in the level of caution, skepticism, and restraint. More controls, redundancies, guardrails, and regulations need to be put in place as the stakes get higher.

What’s alarming, yet sadly not at all surprising, is that this doesn’t appear to be playing out in reality. Reality looks more like this:

[Figure: the same graph, with a red arrow line hugging the x-axis, denoting that little caution is being exercised even as the potential for harm increases.]

In case after case, the fervor and urgency to adopt AI seem to stomp all over the need to exercise caution and responsibility and to establish critical safeguards that curtail harm.

I haven’t been able to shake the extraordinarily disturbing news that Israel’s Lavender AI system was (is?) used to determine bombing targets, with little more than a “rubber stamp” from human intelligence officers, resulting in many civilians being killed. We’re not talking about potential harm here; this is explicitly AI determining which human beings to kill. This level of recklessness is appalling, but unfortunately it’s not an isolated story.

AI is swiftly being woven into many facets of society that have an immense impact on people: education, healthcare, criminal justice, hiring, and more. For each and every AI application, we must ask “what harm can this do?”, answer that question honestly and thoroughly, and then exercise a level of caution and establish safeguards proportional to the amount of harm that can be done.



