ChatGPT: It’s Not the End of the World as We Know It
The following article is an opinion piece written by Michael S Kinch. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official position of Technology Networks.
Last night, I experienced an odd coincidence that cannot readily be replicated by artificial means (even deploying the most sophisticated artificial intelligence (AI) platforms). While I was enjoying a rare quiet period, the telephone rang. The caller was a trusted colleague, far more technologically savvy than I, who was nonetheless seeking advice on how to handle the challenges posed by ChatGPT. His concern was that machine-derived wordsmithing might wreak havoc on the analysis of research data, raising questions about threats to the credibility of both scientific and mainstream sources of trusted information. As we began discussing how to convene a meeting of scientists on the topic, my phone vibrated. A message had just arrived from another dean at my university, titled “ChatGPT.” Violating the fundamental trust of the verbal connection with my colleague about our inability to distinguish people from machines, I proceeded to read my email. This communication conveyed the exact same concerns about the AI breakthrough, but as it pertained to student instruction and examination.
The rapid-fire receipt of these two communications, separated by only a few seconds, led me to believe that some new bulletin must have been posted about the dangers of AI. I presumed the news sites must be replete with dire warnings of some new danger that had just emanated from ChatGPT. Checking the headlines, I was quickly disabused of this notion.
I realized instead that the coincidence of these two messages arose from a far more primordial fear: that human society is being overtaken by technology. Limiting the subject to just our ability to distinguish the uniqueness of humanity, consider that popular culture has long been obsessed with this theme, including but not limited to Battlestar Galactica (the beautiful Cylon people/machines of the reimagined series of the 2000s), the 1980s Terminator franchise starring a future Governor of California, and Invasion of the Body Snatchers (going back to the 1950s original and the 1970s remake). Such fears date much further back, to Fritz Lang’s 1927 film, Metropolis, and more than a century before that, to Mary Wollstonecraft Shelley’s penning of Frankenstein in 1818.
Strengthening the muscles of constructive skepticism
As an inveterate optimist, I do not believe that ChatGPT foretells an impending apocalypse. It seems more likely that, with the wisdom of hindsight, we will look back at this advance as a parallel to the 1970s, when pocket calculators became affordable and ubiquitous. Being able to recollect that period, I remember the fears conveyed by many parents (and all math teachers) that this dreaded new device would render future generations numerically illiterate, unable to perform even the simplest calculations without relying upon the infernal new device. Such harangues presumed that the pocket calculator would bring about the ruination of society by allowing students to cheat on their exams. Rather than embracing the efficiencies conveyed by machines that could easily perform complex calculations (recall the challenge of manually determining square roots), calculators were at first banned from the classroom. In many ways, the reactions to pocket calculators in the 1970s echo today’s concerns about ChatGPT.
From an optimistic standpoint, I believe that the creation of ChatGPT might carry some unintended consequences of a more positive nature. Long before our awareness of ChatGPT, society had been victimized by malicious bots and algorithms that invaded our social media feeds on a daily basis. Even before then (and still today, as evidenced by my own Congressman, who represents the 3rd District of New York), society has had to deal with inveterate liars, charlatans and hucksters. The vast majority of these malefactors have been made of skin and bones, not microprocessors.
My optimism arises from the possibility, albeit slight, that we as individuals and as a society might begin to question the claims and “truths” that we encounter daily. We need to strengthen the muscles of constructive skepticism, which can otherwise languish and atrophy. We need to subject our daily inputs to objective reasoning, questioning the validity of their foundations and the motivations of those propagating the data. We are far too susceptible, even eager, to succumb to clickbait, both in our online browsing and in everything else we see and hear. Even in our interpersonal interactions, intellectual laziness can cause us to fall prey to abject falsehoods.
As a scientist, I have been trained to be objective and to question everything in my research, remaining wary of facile solutions and unintended bias. Yet the use of these skills is too often limited to my professional life. Given the volume and variety of information to which we are exposed, we often forget to apply these same skills to our everyday evaluation of the data that envelop us.
The recognition of the challenges imparted by the introduction of ChatGPT offers us a chance to rethink our thinking. There clearly are popular conveyors of false information utterly unrelated to ChatGPT. Most of these malefactors are not, and never will be, the creation of machine-based algorithms. Rather, false data target the weaker sides of human nature and are propagated by carbon-based, not silicon-based, creatures who prey upon our vulnerabilities. The awareness of the challenges posed by ChatGPT provides an opportunity to strengthen our use of objective reasoning and to question the sources of information and their motivations.
No, ChatGPT is not a signal of the imminent doom of modern society.