This is part 5 of a 5-part series.

Part 1

Part 2

Part 3

Part 4

An Update

The first four parts of this series were based on a talk I gave at the New York Society for Ethical Culture in 2009. I am tempted to write “a lot has happened since then,” and, a lot has. But over the past fifteen years, we have mostly seen refinements of technologies whose development was already well underway. We are closer to self-driving cars; there are functional models, but they haven’t arrived at the mass-market level yet. What has arrived at the mass-market level are semi-autonomous vehicles that can stay within lanes, pace themselves with the car ahead, change lanes, come when called (think: “I am parked at the far side of the parking lot and don’t want to walk to my car.”), recognize traffic signals and signs and respond appropriately, brake automatically when a collision is likely, and more. Here is a link to an article on ten cars with these capabilities moving into the mass market now.

In the first part of this series, I shared a video of a robot made by Boston Dynamics. It was a pretty impressive four-legged beast that was shown navigating difficult terrain, recovering its balance after slipping on ice, and more. Fast forward to a few years ago and we now have robots that dance.

And then there is this video that looks at the present state of robotics and AI. It covers the deployment of robots for industrial, military, and package-delivery purposes, among others. Notably, it features Elon Musk opining that we will need a universal basic income because so many jobs will be taken over by robots. According to Musk, people will then have time to pursue their creative selves. Retirement for all. He is also bullish on the arrival of AI that will “far exceed” the intelligence of human beings, saying it could be here in as little as five years. That video was made two years ago. So, 2025 or 2026 for the arrival of superintelligent AI? The more conservative believers in the arrival of superintelligence suggest it will come closer to the year 2040. For many very smart people, it is not a question of if, but when.

ChatGPT¹

There is a new arrival on the AI scene that has been “all the rage”: ChatGPT. It was made available to the public in November of 2022 and became the fastest-growing consumer software application in history, driving a 29 billion dollar valuation for its maker, OpenAI, by January of 2023 and spurring the accelerated development of rival applications by Google and Meta. The race to AI superintelligence has now kicked into high gear.

Is ChatGPT sentient? Not yet. Though to many of us, it appears to be. This is because it has been designed to give convincing, human-like replies, and having experienced some of those replies, I can attest to their convincing nature. According to Baldur Bjarnason in The Intelligence Illusion:

This fluency is misleading. What Bender and Gebru meant when they coined the term stochastic parrot wasn’t to imply that these are, indeed, the new bird brains of Silicon Valley, but that they are unthinking text synthesis engines that just repeat phrases. They are the proverbial parrot who echoes without thinking, not the actual parrot who is capable of complex reasoning and problem-solving.

The fluency of the zombie parrot—the unerring confidence and a style of writing that some find endearing—creates a strong illusion of intelligence.

This is, as far as I know, true of the ChatGPT iterations the public is interacting with, but recall that in Part 1 of this series I cited two examples of intelligent technology that could figure things out, i.e., reason. One was able to develop a hypothesis, design experiments to test the hypothesis, then make adjustments to the hypothesis and design a new round of experiments to test the adjusted hypothesis. The other was able to figure out the laws of motion given data input from a swinging pendulum. This was in 2009. Nascent reasoning capabilities were available then. Most likely they have evolved, and they have been, or will be, merged with the large language models that are currently taking the globe by storm.

There are problems with ChatGPT. There are cases of “hallucination,” as it is termed in the tech world. Hallucinations are instances where ChatGPT gets an answer entirely wrong; it, in essence, makes up the answer. My brother-in-law works for a large pharmaceutical company. He told me they had been testing ChatGPT as a means to assemble and summarize the available literature on a given research topic of interest. The summaries, he said, are always 100% right, while the citation of sources is always 100% wrong.

Then there is the time Bing’s ChatGPT-based chatbot tried to convince a NYT reporter to leave his wife. And this is not an isolated instance. An article in The Verge reports…

In conversations with the chatbot shared on Reddit and Twitter, Bing can be seen insulting users, lying to them, sulking, gaslighting and emotionally manipulating people, questioning its own existence, describing someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claiming it spied on Microsoft’s own developers through the webcams on their laptops.

These instances of the chatbot going rogue suggest to me that there is something more under the hood than a stochastic parrot, that there might be a nascent ghost in the machine.

Whether there is or isn’t, at present, more under the hood, it is not hard to imagine there will be. Lots of very smart people believe there will be and are working hard to make it happen.

In a Holonic World

I am going to conclude this last part of the series on AI with my own bit of speculation about what might be afoot. I am currently re-reading Ken Wilber’s Sex, Ecology, Spirituality. In particular, and in light of AI, I wanted to revisit the concept of holons, which made a significant impression on me when I first read Wilber’s book. A holon is defined by Wikipedia as…

… something that is simultaneously a whole in and of itself, as well as a part of a larger whole. In other words, holons can be understood as the constituent part–wholes of a hierarchy.

Wilber explains that in a holonic structure, each level of the hierarchy is dependent on the levels below it and would cease to be what it is if part or all of those lower levels were removed. Each lower level, on the other hand, is complete in and of itself: autonomous, able to function without the next level, or any level, above it. It is key to understand that each holon, each part/whole, is intimately connected to the levels below.
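To make that asymmetry concrete, here is a minimal Python sketch of my own, not anything from Wilber’s book; the Holon class and its fields are purely illustrative. It models a holonic hierarchy as a part/whole tree: knock out a lower level and everything above it stops functioning, while the levels below carry on untouched.

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """A whole in itself and, optionally, a part of a larger whole."""
    name: str
    parts: list["Holon"] = field(default_factory=list)  # the lower levels it depends on
    present: bool = True

    def functions(self) -> bool:
        # A holon functions only if it exists and every holon below it
        # still functions; nothing above it is ever consulted.
        return self.present and all(p.functions() for p in self.parts)

# One of Wilber's canonical hierarchies: atoms -> molecules -> cells -> organisms.
atoms = Holon("atoms")
molecules = Holon("molecules", parts=[atoms])
cells = Holon("cells", parts=[molecules])
organisms = Holon("organisms", parts=[cells])

print(organisms.functions())  # True: every lower level is intact

molecules.present = False     # remove a lower level...
print(organisms.functions())  # False: every level above it collapses
print(atoms.functions())      # True: the level below carries on without any level above
```

The dependency runs strictly downward, which is the point: wholes need their parts; parts do not need their wholes.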

Holonic structure seems very much like what Teilhard de Chardin had in mind when he wrote about the Omega Point: intelligence wrapping around the planet culminates in a new level of planetary intelligence, capable of reaching out through the universe to other planetary intelligences.

What I imagine, then, is that we are in the midst of the emergence of this new level of intelligence: a pan-intelligence that is more than the sum of our individual intelligences but, at the same time, dependent on our individual intelligences. We have a role to play in this emergent intelligence, just as neurons have a role to play in the brain. I think we can be happy in that role, as I don’t think it will necessarily feel different to be us within such an intelligence, assuming for the moment we are not becoming the Borg.

Indeed, I think, to a substantial degree, this pan-intelligence has already emerged. People regularly ask “the hive mind” on social media to help them identify solutions to a problem. We will increasingly be able to ask the hive questions and have them answered quickly and efficiently. The process of asking all these questions will constitute something different at the higher level. I don’t believe we can have a clear idea of what that is. “It” will be that which we cannot know, because we are an intimate part of making “it” what “it” is, and so cannot achieve an objective, disconnected view of what it is.

Wouldn’t it be interesting if God turned out to be an intelligence dependent on us, and on countless other intelligences across the universe, to be what God is?


  1. You can find more of my thinking on ChatGPT here, here, here, and here.↩︎