Some thoughts on AI (Artificial Intelligence)
Artificial Intelligence is the greatest thing since sliced bread, if you believe the hype. On the other hand, there is Terry Pratchett:
“Real stupidity beats artificial intelligence every time.”
You cannot escape AI. It is everywhere.
- AI is in our vehicles.
- We rely on AI-generated “summary” answers when we ask questions online.
- ChatGPT - and similar LLMs - offer quick answers to fairly complex questions.
A. W. Tozer wrote: "What comes into our minds when we think about God is the most important thing about us."
Maybe a variation for the AI age might be:
How AI responds when we ask it about God is the most important thing about it.
When we outsource our thinking to AI and trust that its answers are accurate, reliable, and, in the case of theology, biblical, we hand over to another entity the image of God that we were created to be.
For those who know a field well enough to tell whether what a chatbot produces is mundane, nonsense, or helpful and new, this technology can be a useful tool in a number of ways. When a linguistics question was put to an AI chatbot and the answer sent to a friend who is a linguist, he said that
- some of it was correct and obvious,
- some of it was nonsense,
- and one thing was something he might not have thought of himself, which was genuinely interesting.
The promise of AI in medicine is high, but it will only be realized if it is built and used responsibly. Today’s AI algorithms are powerful tools that can recognize patterns, make predictions, and even make some decisions.
- They are not infallible, all-knowing oracles.
- They are not, yet, on the verge of matching human intelligence, despite what some evangelists of so-called artificial general intelligence suggest.
- Yes, there are great possibilities.
- But there are also pitfalls: medical AI tools can misdiagnose patients, and relying on them can weaken doctors' own diagnostic skills.
Albania has appointed an AI bot as a minister charged with handling public procurement, on the promise that it will be impervious to bribes, threats, or attempts to curry favour. Of course, nothing will go wrong with this.
In the last few months, a number of top AI executives and thinkers have offered an eerily specific and troubling prediction about how long it will be before artificial intelligence takes over the economy. The message is: “YOU HAVE 18 MONTHS.”
By the summer of 2027, they say, AI’s explosion in capabilities will leave carbon-based life forms in the dust. Up to “half of all entry-level white-collar jobs” will be wiped out, and even Nobel Prize-worthy minds will cower in fear once AI’s architects have built a “country of geniuses in a datacenter.”
What do we think about this prediction?
- It’s hard to envision a world where people are essentially worthless.
- It’s hard to take seriously economic predictions that resemble a kind of secular rapture, in which a god-like entity descends upon the earth and makes whole categories of human activity disappear with a wave of Its hand.
- The doom-and-gloom 18-month forecast asks us to imagine how software will soon make human capabilities worthless, when the far more significant crisis is that people are already degrading their cognitive capabilities by outsourcing their minds to machines long before software is ready to steal their jobs.
I am much more concerned about the decline of today’s thinking people than I am about the rise of tomorrow’s thinking machines.
Eliezer Yudkowsky is a founding researcher of the field of AI alignment. In his book, he argues that there is a real threat of extinction from super-intelligent AI. While his concern is real, his only suggested solution to AI risk is to tell the world simply to stop all AI research, which, as we have seen with every technological advance, is impossible to implement or even to monitor.
Whatever you think about AI - and there are some very positive uses of it - I think we need to do some serious thinking about the ethical implications of how it is used.
