Recent discussions with friends have provoked me to think harder about artificial intelligence (AI). Watching 60 Minutes and CNBC, the world seems to be going crazy about it. We’re flooded with terms like large language models, artificial general intelligence (AGI), and generative AI.
Inside my gut, I have never bought into the singularity, machine over man, and all that. But I had not thought too deeply about why I felt that way until now. I also noticed good friends on both sides of the debate, as well as near the center. I wondered whether the differences in our views were caused by factors other than facts and scientific “evidence”—something about us, or about me: my worldview, my understanding of human behavior and how the world works.
What is Artificial Intelligence?
The idea of computer-based intelligence has been around for at least 80 years. John McCarthy, then at Dartmouth, coined the term “artificial intelligence” in 1955 in his proposal for the 1956 Dartmouth conference. At the conference the term was debated, and alternatives including “complex information processing” were suggested, but AI stuck because it was sexier. McCarthy saw great potential, but 50 years later he admitted that he had been too optimistic about how fast the technology would proceed. He said, “We humans are not very good at identifying the heuristics we ourselves use.” That is, we do not fully understand our own minds.
Enthusiasm for the concept has waxed and waned over the intervening decades. There are many components and types of AI. There is “strong” AI, which comes closest to “being human,” and “weak” AI, which is what we see every day in Amazon’s recommendations and other practical applications. The ultimate grail, the strongest form of AI, is AGI—computers that seem to be “human.”
Yet no one knows precisely what AGI means or how to know it when we have it. Numerous tests have been proposed, the most famous being the Turing test, devised by the late British computer scientist Alan Turing. It proposes that a person hold “conversations” with both a computer and a real human; if the person cannot tell which is which, the computer passes. The test requires that the person be fooled “a significant share” (undefined) of the time. Others have proposed the IKEA test, in which a program commands a robot to assemble furniture. Apple co-founder Steve Wozniak has proposed the Coffee test, in which “a machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.”
As far as I can tell, no system has yet decisively passed these tests. We are still waiting. Still, the general idea is pretty clear: a machine or program that can do almost anything a human can do. Yet that raises the question: what do we mean by intelligence?
Standard definitions of AGI include the power to reason, use strategy, solve puzzles, make judgments under uncertainty, plan, learn, communicate in human language, and integrate those skills to accomplish a goal. Those elements roughly match the most common definitions of intelligence, whether artificial or real. Standard IQ tests reflect these types of skills. The concept of “goal-seeking” is pervasive in the literature on AGI. AGI appears to require intent, a purpose, supplied by a human.
Nevertheless, there is plenty of debate about those definitions of intelligence. In 1995, the American Psychological Association wrote, “Concepts of ‘intelligence’ are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.”
I must ask myself, what do I think human intelligence is all about? Is it just “intelligence” as defined above? Is it always seeking goals? How definable is human intelligence?
A Broader View of “Human Intelligence”
I am all for reason and logic, but do they represent the totality of our minds? Nowhere in the definitions of intelligence do we see the ability to invent a joke, produce a work of art, or write a symphony. Missing are words like empathy, greed, courage, dreams, nightmares, lust, imagination, anger, and passion. Where are serendipity, joy, free will, adventure, fantasy, and love?
Nowhere do we find prayer or play. Yet we know that prayer and meditation have proven valuable to us for thousands of years. Play is also important in our lives, even among the most intelligent. Einstein said, “Combinatory play seems to be the essential feature in productive thought.”
In the ultimate guide to human thought and language, Roget’s International Thesaurus Seventh Edition, over half of the 800 pages are under the four headings: Feelings, Behavior and Will, Values and Ideals, and The Mind and Ideas. Such concepts are at the core of our lives.
Recent decades have brought to the fore concepts like emotional intelligence and social intelligence, but those are not included in the preceding definitions. These dimensions are even less well-defined and measurable, but they matter to people’s lives, successes, and failures.
While there have been attempts to write programs that are “creative” and others that have social graces, the results so far are less than compelling. We can teach a computer to make a fake painting that looks like a Van Gogh or an Impressionist, but that’s not very deep creativity; it’s closer to plagiarism.
The focus on traditional definitions of intelligence is perhaps to be expected from the “scientific” community. A friend recently suggested that AGI will ultimately be “a trillion times” as intelligent as humans. What does that really mean? An IQ of a hundred trillion?
But IQ has its limits. We all know people of high IQ who have accomplished little in life, while others of average IQ have shaped the world for the better. There are plenty of people with high IQs but little imagination. I’ve known several Nobel Prize winners as friends and teachers, some of the “smartest” people I’ve met. Yet as we travel up the IQ scale past 150, we see more people who have difficulty relating to the world around them: Rain Man, A Beautiful Mind, or the troubled Alan Turing, who committed suicide at the age of 41.
AI theoretically will not make the mistakes that we silly humans make. But will AI and its robots oversleep and miss a fatal flight? Will AI know serendipity? Will it chain-smoke, make off-color jokes, or turn in front of oncoming traffic? And thus sometimes be in the right place to make a discovery? Would AI have noticed burrs in its socks and invented Velcro? Would it have accidentally noted a melting candy bar and invented the microwave oven? If you can’t make mistakes, can you be truly human? Can you be creative?
Humanity is also about connectedness and relationships. Just watch a great, well-acted play or movie—the nuances, the smirks and grins, the odd reactions. Can a program or machine really become Macbeth? Or Curly, Moe, and Larry?
In a nutshell, we have made much progress on understanding the human brain, but how far have we come in understanding the human mind—our inner thoughts, conscious and subconscious? If we don’t understand it, we are unlikely to develop mathematical models and computer systems that can mimic or equal that mind.
There is one other element machines and software do not have: life. In my first biology class (in about 1962), our teacher Ron Cole talked about the difference between a person one second before they die and one second afterward. The organs are the same, the weight is the same, the chemical makeup is the same, but clearly something is missing—something called life, something that we don’t understand. Look up “life” on Wikipedia 61 years later, and you’ll find, “The definition of life has long been a challenge for scientists and philosophers. Philosophical definitions of life have also been put forward, with similar difficulties on how to distinguish living things from the non-living. As many as 123 definitions of life have been compiled.”
To be sure, those steeped in high tech, often science fiction lovers, have answers and technical specs for how you’d do all these things—how to build mathematical models to mimic human behavior and take it to “the next level.”
But being “intelligent” is not the same as being human. And therein lies all the difference. Therein lies the magic of our human potential.
Artificial Intelligence in the Bigger Technology Context
We’ve been told that AI is the most important invention in human history. That’s a pretty tough standard when you are competing with the likes of the wheel, the boat, money, debt, insurance, the corporation, the steam engine, the plow, railroads, automobiles, airplanes, photography, vaccines, telephony, television, and the internet.
To me, AI is just another step in the development and understanding of electricity and what it can do for us, a process that began long, long ago. The first use of the word “electricity” was in 1646. Electricity took a leap forward with the use of semiconductors such as silicon, first isolated in the 1820s, and germanium, discovered in the 1880s. Two hundred years later, we are still early in learning how best to produce, store, transmit, and use electricity.
Within the world of silicon and electrons, AI is part of what we call “automation.” This, too, has been around awhile and continues to progress and evolve. AI is “just another” step forward. It’s so early in its evolution that we know little about it and even less about how it will best be used.
Science, and thus technology, evolves—often in unexpected ways. The people who founded the radio industry did it for ships at sea; they never thought it might be used for music and news. We find new uses for old drugs even after years of use. Did the developers of the camcorder think it would change the gathering and dissemination of news, thus affecting global events? Did social media pioneers expect to play a role in the Arab Spring?
Our use of technology ebbs and flows. Sixty years ago, few CEOs could type; that’s what secretaries were for. Today, CEOs bang away on QWERTY keyboards every day. Electric cars were in vogue 110 years ago, went away, and came back. So did wristwatches with dials and vinyl records. While we think of wireless as being a step forward from wired, we went from a wireless national television distribution system (rabbit ears and rooftop antennas) to a wired one (cable and fiber) in the late 20th century. The popularity of home fax machines and CB radios fell as fast as they had risen.
When I look at the current state of AI and many of its applications today, I am far from overwhelmed. I’m a booklover and book collector and have witnessed the continuous decline in Amazon’s ability to recommend books; the search capability on their site is also a disaster. Google search has become cluttered with distracting advertising—the good stuff is often several pages deep. Perhaps 30 times a day I receive spam emails asking, “Tired of producing time-consuming videos and text for your website? We have the magic AI answer!” That’s an insult to anyone who loves writing or loves to produce videos. If you don’t love your work, why are you spending your time doing it?
In working with technology and technologists, we sometimes get the sense that it is magic, or so far over our heads that we’ll never understand it. It’s in the interests of the holders of “the wisdom” to keep their secrets locked and to scare us mortals away. When I first learned to program computers in the 1970s, only those who knew COBOL or FORTRAN were in the fraternity. Microsoft, Apple, Adobe, Dell, Intel, and others changed all that; they pulled back the curtain.
While AI certainly includes some very advanced notions, much of it is about doing the same simple things over and over incredibly quickly. Key to large language models is understanding which words we use together, something found in any of the collocation dictionaries that I have in my office. Core ideas in data mining and machine learning like feedback loops, data clustering, and linear regression are not magic, and in the right teachers’ hands can be understood by high school students.
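To make that concrete, here is a minimal sketch in Python (my own illustration, with made-up sample text and numbers, not anything drawn from a real AI system). It counts which words appear next to each other, a crude seed of what language models learn, and fits a straight line by least squares. Nothing in it goes beyond high school arithmetic.

```python
# A toy illustration (hypothetical data): two "AI" building blocks
# that require only counting and basic arithmetic.

from collections import Counter

def bigram_counts(text):
    """Count adjacent word pairs -- which words do we use together?"""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, using only means and sums."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept, slope

text = "the cat sat on the mat and the cat slept"
print(bigram_counts(text).most_common(2))  # ('the', 'cat') appears twice

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
print(fit_line(xs, ys))  # roughly intercept 0, slope 2
```

Real systems layer enormous scale and many refinements on top of ideas like these, but the ideas themselves are teachable.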
One of the leaders of AI recently referred to the internet as containing the totality of human knowledge. As a book collector, I had about 70,000 books in my personal library at its peak—all non-fiction and reference books. Having delved into all of them, my estimate is that at least 70 percent of their content is nowhere to be found on the internet. So much is under copyright, as new writings will be. Writers are already beginning to sue OpenAI, the maker of ChatGPT, for plagiarism, and such suits will come with increasing frequency.
And, as in any information system, “Garbage In, Garbage Out.”
I was once in the bookstore business. At the time, the largest company was B. Dalton Bookseller, a fine company. They built their powerful position by carrying more titles than the competition, primarily Waldenbooks. Customers were not good at differentiating the two companies, both of which had stores in every major mall. One was kind of brown and one was kind of green. But if they went to Waldenbooks first, they often did not find what they wanted and moved on to Dalton. If they went to Dalton first, they were more likely to find the book they were looking for.
This led to the Dalton stores generating higher revenues and profits per store than Walden. Yet when Dalton used the most advanced data and systems available, and asked customers if they knew Dalton carried more titles, the customers said, “No.” Because management did not fully understand their customers and their behavior, they decided to slash inventories, thinking, “The customer doesn’t know or care.” The Dalton chain was soon enough sold, written down from $300 million to zero, and closed.
I know all of this because I talked to hundreds of bookstore customers, everyone I met, and probed and listened. I watched customer behavior in bookstores. I observed their eyes and noted their emotions. I knew they loved those huge selections, even if they did not realize or articulate it. My friends and I founded the first chain of giant book superstores, using our knowledge of reality, not just raw data. Information is only valuable when in the hands of truly and broadly intelligent, living human beings.
I am not saying AI is not useful or worth pursuing, or that it does not hold tremendous potential to benefit mankind, but we are a long way from the most extreme predictions. We should also keep in mind that all machines—all software and robots—must be created by humans to serve human needs.
AI needs humans. Humans to create it, to find it useful and pay for it, and to repair and improve it. When ‘acts of God’—earthquakes and hurricanes—threaten robots, it will likely be humans that rescue them.
AI, The Economy, and Jobs
Similarly, AI is unlikely to destroy our economy. A friend recently wrote me that AI is expected to eliminate 90 percent of all jobs by 2060. And, as a corollary, “It’s not reasonable to assume traditional capitalism will apply.” Of all the things I’ve heard about AI, these seem to be the weakest predictions.
Yes, AI might eliminate many current jobs, just as the plow, the reaper, fertilizer, and Norman Borlaug’s agricultural innovations eliminated 90 percent of the jobs Americans were doing in 1800 (farming). But to say we will all be out of work is to say that humans will no longer find anything of value to do for their fellow humans.
Humans have proven extremely creative in finding ways to serve our fellow humans. We find unmet, even unknown needs and desires. We each need a purpose, an occupation in the broadest sense. If we don’t have one, we’ll invent one and find others willing to exchange resources (pay) for it.
While there will be robot and automation options for many of these tasks, some humans will prefer “the human touch” and be willing to pay for it. Workers may also use AI to make their jobs easier, but they will still have jobs.
Maybe we will eventually all be artists, entertainers, philosophers, professors, scientists, inventors, programmers, politicians, athletes, managers, gossip columnists, non-profit workers, or part-time or home business operators. So what?
We are already in a long-term trend of working fewer hours and will have more leisure time (in which to spend our money). The late Nobel economist Robert Fogel addressed this topic in his book, The Fourth Great Awakening and the Future of Egalitarianism.
Of equal importance, someone will need to dream up, develop, test, build, operate, and maintain the AI machines, software, and robots. As has been true throughout history, technology ends up creating at least as many jobs as it destroys. Television repairmen disappeared; then cable guys rose up.
Most importantly, we need jobs. As Kahlil Gibran wrote, “Work is love made visible.”
On the broader score of capitalism, it may morph into new forms as it has always done, but the basic elements of our economy will not disappear. The law of supply and demand will always be with us.
Economics is best defined as the study of the allocation of scarce resources. AI will do little if anything to eliminate the scarcity of natural resources, of energy, of oceanfront property, or of managerial talent. Even in AI itself, the challenge of resource allocation remains: do we spend a billion dollars on this medical AI project, or do we spend a billion dollars on the AI project sending us to Mars?
So, while our economic system will continue to evolve as it has done in the modern era, the key functions and components of the global economy will still be with us.
Conclusion
I do not doubt that some will try to use AI for evil. Some worry about the rise of “deepfakes” and false attributions. Yet some of the most valuable applications of AI will be in cybersecurity, following in the footsteps of McAfee and the like. AI that can recognize text, images, videos, and music created by AI might become the most valuable of all.
I do not doubt that AI applied well will have tremendous benefits. But if I am going to lie awake worrying about anything, AI is not going to be on my list. Maybe the difference between me and the doomsayers comes down to one thing: faith. I have too much faith in my fellow humans. And when I lay my head down, I prefer dreaming to worrying; it has a far higher return on my investment.