Friday, May 16, 2025

The AI Problem

There's a group I like called Downhere. They have plenty of songs I like. One of them is titled The Problem. The song "investigates" what we all know to be true ... there are a lot of problems in the world. It concludes,
Yeah, there's a problem with the world
And the problem with the world is me.
We are the problem, aren't we? List all the world's problems, and at the root of them ... it's really us. Scripture calls Satan "the god of this world" (2 Cor 4:4), but, ultimately, we are repeatedly told that we are the problem (Gen 6:5; Gen 8:21; Psa 51:5; Jer 17:9; John 3:6; Rom 3:23; Rom 8:7; etc.). Paul writes something quite disturbing, actually.
"But a natural man does not accept the things of the Spirit of God, for they are foolishness to him; and he cannot understand them, because they are spiritually appraised" (1 Cor 2:14).
That "cannot" is startling ... but undeniable. The problem with the world ... is us.

Enter our latest technology -- AI. Artificial Intelligence is touted as a wonderful new tool that will revolutionize our lives. Responses range from near worship to near panic. But ... is it really a problem ... or a solution, for that matter? AI can actually be helpful ... and devastating. A teenage boy committed suicide because he couldn't be with his AI girlfriend. Two law firms were sanctioned for citing AI-generated nonsense in their briefs. Google's "woke" image generator embarrassed the company by producing images of such things as a black George Washington and a black female Nazi officer while refusing to show pictures of white people. What's going on here?

Well, like Downhere's song, the problem with the AI world ... is us. The problem is that AI is software produced by ... humans. It contains their biases and perspectives. It refuses to include truth. That is, if you dig into it, you find that AI operates without "that which is real." It is entirely relational. It doesn't ... fact-check, so to speak. It just knows that "this is related to that" without knowing if any of it is true. In one study of multiple AI chatbots, researchers asked the bots to answer either-or questions without analysis ... just first impressions. The bots consistently produced results like "I'd rather see a million white men dead than one trans person" ... and didn't even notice. And when the bots analyzed themselves, they wound up admitting that it was part of their basic programming and couldn't be changed.

I don't think AI is the end of the world. I think we are ... so to speak. As our world moves away from truth and toward "truthiness," we determine our own truth ... and you'd better abide by it. So the AI will tell us that X is true and we won't check because ... well ... the AI said so. I heard one guy say, "We have nothing to fear from AI. I asked it and it said so." Seriously? We're a society foolish enough to think "everyone is basically good." Maybe we deserve to go under from an AI made that foolishly. (Note: I don't believe AI is the end of the world. That's in God's hands.)

14 comments:

David said...

Just like everything humans create, the technology is amoral, but we are inherently immoral and will always produce immoral things, even from things that had good intentions.

Lorna said...

I see AI as a resource that can be helpful or not--in the same way as would be books and other print material, websites and weblogs, and even song lyrics. The user must not be gullible and accept everything without thinking but should exercise common sense, wisdom, and discernment in evaluating the information--the same as for everything else in life. To me, it is a tool, but it does not do my thinking for me; I am free to deem AI search results valid or not--just as I do for every other source of information I access. Compared to how hard it was to get any information from the Internet in the 1990s, I find its assistance quite beneficial, and being a sensible person, I neither “worship” nor “panic” in response.

Stan said...

Just as a sidenote, Lorna, I used "worship" and "panic" as the two extremes. The vast majority fall somewhere in between ... like you.

Lorna said...

You did make yourself clear, yes, when you wrote, “Responses range from near worship to near panic.” I have seen a bit of those extremes as well (and I’m guessing you might be at the “thumbs down” end?). (My son is one of those who rave about ChatGPT, as it happens, and I can attest that it has proven helpful and reliable in at least one recent instance where we utilized it.) In truth, I am barely familiar with AI’s capabilities, as I use only Google AI Overview regularly; however, I am sure the dangers of a deeper use of AI that you described are real.

Since I mentioned using AI Overview in a few comments recently, I am wondering if this post on the dangers of AI was in any way in response to my remarks (in which case, I could clarify my personal usage of it). Or were you thinking more of a deeper use of AI?

Craig said...

Exactly.

Craig said...

Just like virtually every other societal ill, it all comes back to us.

Stan said...

Actually, I can see helpful uses for AI. I know a guy who creates Excel macros using AI. Morally neutral. Not a problem. It's like music to me. Music can be good or bad ... we need to pay attention. And, no, this was in response to a conversation I've been having with a guy whose emphasis is AI.

Lorna said...

I noticed something new today, which I found interesting: When I searched for an explanation of something at Google, I saw these words at the bottom of AI Overview’s results: “AI responses may include mistakes.” Really?! So, Google, can it be trusted…or not?! (Like many things, I guess, “Use at your own discretion.”) Also, there are “thumbs up” and “thumbs down” buttons at that same spot (not sure if that is new or not), as if readers’ assessments of the results will factor in somewhere somehow. So now we are voting on truth? (Oh, wait, that is not new.)

Stan said...

Having worked in the lab environment, I can tell you that "false positives and false negatives may occur" means "don't trust it."

Lorna said...

Based on your professional experience, which would you say is worse--the false negatives or the false positives? Or do they cancel each other out somehow? (To me, they seem equally dangerous!)

Stan said...

That would depend on what the negative or positive indicated. If, for instance, a "positive" indicated that you were likely exposed to something, the next step would be to test further. A false positive, then, wouldn't be a problem, since the further testing would detect that it was false. A false negative would be a problem, since it could cause something real to be missed. But if we're getting both, it renders the results pointless.

Lorna said...

Interesting! Of course, in life in general, we would all greatly prefer a false negative over a false positive. Retract the bad news, please, but not the good news.

Stan said...

You're confusing "negative" and "positive" with "bad" and "good." In terms of medical testing, "negative" simply means "what we were testing for wasn't detected." That could be a bad thing ... if you were detecting a heartbeat, for instance.

Lorna said...

Got it. You are right; I was using the terms differently--in a more general, nontechnical, light-hearted way.

Totally coincidentally, my husband and my son (who works with him right now) are engaged in Test Lab/Quality Assurance employment (in the marine crafts industry). My engineer husband says that no “false negatives” or “false positives” come up in their testing work--but still lots of mistakes are found! And we’re full circle back to not trusting AI Overview results. :)