AI, artificial intelligence, has been in the news a lot lately. Some are delighted. Some are disgusted. Some are disturbed. Having sat through hours of fictional movies about AI and how it will kill us all, some are concerned that ... AI will kill us all. At the very least it will take our jobs. Or make our lives better. We can't decide.
Already in the news we've had university students admitting and recommending the use of AI to write their papers, completely missing the point of writing papers. Already we can say with certainty that a proficient AI will cost jobs. (That's because it already is ... and it's not yet all that proficient.) Clearly low-income and low-skilled workers will take the brunt of this. That shouldn't be a problem, should it? ("Yes, it will" is the correct answer.) Safety is a huge concern. Take, for instance, any AI with a weapon. Terrifying. Another concern is reliability. When we hand off a task to an AI, how confident can we be that it will be accomplished and not, say, fudged? My biggest concern, however, is ethics.
We humans already face an ethics question. Whose ethics will we use? We complain when Christians want to urge people not to kill people (little people) and we complain when people whose ethical parameters include the murder of people go and murder people. We are, ethically speaking, deeply confused. We like God's "Thou shalt not kill" for the most part; just not God. We embrace "Thou shalt not steal" unless, of course, it's the government, or you really, really want it. So we muddle about with half-baked right and wrong, always, it seems, generally relative. We each carry our own versions and try to align to our own standards (and, as it turns out, generally fail at that, too). Now we are aiming at letting loose an artificial intelligence to help us. It will be far less regulated and far more autonomous than any tool we've ever used. Given the right options, it could do us immense harm. But that's okay. We'll include a load of ethics in our AI to keep it from doing that. Except we don't have a good source for ethics and we don't share the same values, so we can't really give our AI a common sense of right and wrong. But that's okay. We can use the help, even if it kills us.
My point at the end, then, is not actually about AI. My point is about humans. We think we're okay. We think we have a handle on all this. We know right and wrong. But we do so without, obviously, thinking. We gather our ethical values and assume common ground and find that some people think that shooting a bunch of school kids is good and some people think it's bad and we figure that, by controlling the tools (think "guns") those people have available to exercise their ethical system, we can make them more ethical. What we do on a much more common basis is reject an actual, functional, God-given ethical system -- Scripture. That one, put in place by God and grounded on His absolute truth, will not be our guide to right and wrong. We're much smarter than that -- we and our offspring, AI. That's my AI dilemma.
4 comments:
Even the people creating AI are recognizing the problem. I heard that several of them are warning that we need government oversight for the use and creation of AI even though they will continue to strive to create it. If we can't even agree on the rightness and wrongness of something as simple as theft, how are we ever going to trust the authors of AI and the ethics they program into it? Whose ethics and morals will control it? We can already see the biases of the authors of AI like ChatGPT. How much worse will it get with more complex AI? Of course, this is all reliant on whether or not AI is even possible. If this truly is simply a mechanistic universe, then yes, but we Christians know it's not.
My first thought is that this just underscores what I learned about computers back in the '80s. GIGO (garbage in, garbage out) really is the most important thing.
My second thought, as I've been reading a fictionalized account of a possible scenario where the CCP uses an advanced AI to wage war on the countries bordering the Pacific, is that trying to instill an ethical framework in a computer seems impossible.
I understand that AI might have its uses, but it definitely seems like the potential negatives far outweigh the positives at this point.
David,
It's somewhat good that they realize this problem, but putting government in charge seems like a guarantee of disaster.
Definitely agreed about government failing to be the answer. Just interesting that even those designing AI are worried about the implications of the ethics of AI.