We've updated our Privacy Policy to make it clearer how we use your personal data. We use cookies to provide you with a better experience. You can read our Cookie Policy here.


Generative AI’s Business Impact Will Be Slow, Then Slow, Then Big

[Image: Graphic representing AI and ChatGPT. Credit: Tumisu, Pixabay]


The following article is an opinion piece written by Vikram Savkar. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official position of Technology Networks.


In the months since ChatGPT first broke into the global consciousness, extreme hype about its transformational possibilities has been met with equally extreme doomsaying about its potential risks. Which is correct?

 

As always at such points of disruption, the answer is neither. ChatGPT represents a powerful new application of advanced technology, but its impact on the larger business world appears likely to be gradual. ChatGPT is not as much of a sui generis innovation as it is sometimes made to seem, and it won’t instantly reinvent a hundred occupations. True, the fluidity and naturalness of the text that ChatGPT creates are superior to those of previous generations of generative AI, but the difference is one of degree rather than kind. OpenAI regards ChatGPT as simply one in a series of iterative releases and was taken aback by its astonishing cultural reception; indeed, it recently launched GPT-4 as a new and improved successor. That should tell us something important. The events of the past few months are a waystation on a long-term journey toward the integration of AI into everyday life. We may have arrived at a significant moment in terms of public interest in AI, but generative AI technology itself has much farther to go before it becomes fundamentally disruptive to industries like finance, media and education.

 

That said, there are industry niches where ChatGPT will start to have an immediate impact. Professionals have long used AI technology for spellchecking and grammar review of their emails. It’s easy to envision ChatGPT quickly becoming a tool for drafting entire emails from a few pieces of background information. Grammarly’s recent product launch in this vein is likely just the first of many such offerings in the coming months.

 

Behind the scenes, it’s likely that ChatGPT will be integrated into workflow tools, such as the electronic health record (EHR) systems that hospitals use. When diagnosing a patient, clinicians have to assimilate a massive amount of information from the EHR, including medical history, comorbidities, genetics and more, and as patient volumes grow, the need to do this quickly becomes ever more pressing. In the same way that AI-based voice-to-text tools like Nuance helped transform how clinicians write up patient notes, ChatGPT will likely transform how they take in information from EHRs.

There are also places in the media landscape where ChatGPT will probably start to make an immediate (and possibly hidden) impact. With the volume of articles published on the internet always rising and the number of professional writers and journalists employed by newspapers, magazines and other publications always shrinking, it doesn’t take much imagination to picture how some media organizations will look to square the circle.

 

But the deeper ramifications of ChatGPT will play out over a period of years, not months, as mature industries turn ideas into experiments, experiments into products, and products into markets. Healthcare software companies, like my own, will find ways to significantly accelerate their impact on patient outcomes and clinician education through generative AI, but they will do so methodically and carefully, because they understand that the information and solutions they provide address matters of quality of life, and even survival, for patients around the world. As this dynamic plays out – as the potential of generative AI is filtered through the structural rigor and deep customer focus of companies with decades of expertise and reputation in various professional markets – I’m confident we’ll find that the net impact is quite positive.

 

It’s not that the doomsayers don’t have legitimate concerns. Of course, if students use AI to write their essays, they won’t learn. Of course, if medical textbooks are written by AI, they will be riddled with life-threatening errors. If news articles are written only by AI, the scale of disinformation in the world will expand exponentially. We don’t want any of those things.

 

But I think the structures that underpin most industries are robust enough to put the necessary guardrails in place. Already, higher education institutions are experimenting with in-class rather than take-home essays to eliminate the role of ChatGPT, or are considering how to redesign curricula to place less emphasis on essays and more on dialogue and argumentation. Medical publishers are working with established technologies to develop tools that accurately identify when text has been generated by AI rather than written by a person. Disinformation is, sadly, a harder problem to solve, but the traditional companies through which most people get their news will put rigorous editorial standards around their work – because their brands depend on it – and one can hope that social media companies will, under pressure from society and government, turn the corner on emphasizing trust over scale. Overall, the doomsday scenarios are overblown.

 

More worth thinking about are the surprising “butterfly effect” ways in which generative AI could spread its benefits across industries. One example, close to home for me, is medical research. Today, clinicians from every country in the world submit potentially significant research papers on topics ranging from obstetrics to oncology to prestigious journals, yet most published papers still come from the U.S., Europe and China. Why are so few papers published from outside those regions? There are structural causes, of course, but a surprisingly large contributor is the quality of the English in many papers from outside the traditional research powerhouse countries. The inside baseball is that at some journals 50–70% of submitted research papers are rejected by editors out of hand, regardless of the quality of the research itself, because of the papers’ poor English writing style (English being the standard international language of medical communication).

 

Could generative AI help researchers from low- and middle-income countries turn viable research into professional-quality papers that stand a better chance of being accepted by prestigious journals? I think the answer is clearly yes – a recent study indicates that 20% of researchers already use the previous generation of AI-enabled language improvement tools to improve the clarity of their research output. The result would be a material improvement not just in the global inclusiveness of medical research exchange but also in patient outcomes around the world, because groundbreaking clinical insights are not confined to a handful of countries. This is one potentially exciting example from my own space; experts in other fields will, I’m sure, have comparable scenarios of their own.

 

When the smoke clears, I think we’ll find that ChatGPT is, counterintuitively, even more meaningful than its hype suggests.

  

About the author


Vikram Savkar is the senior vice president and general manager for the Medicine Segment at Wolters Kluwer. In his role, Vikram leads product innovation to advance the digital evolution of information and productivity solutions for medical researchers, clinicians, medical students and faculty to inform evidence-based decisions on care and outcomes. He has been with Wolters Kluwer for ten years, serving as general manager for several businesses in the Legal & Regulatory division before joining the Health division. Prior to joining Wolters Kluwer, he held senior positions at Nature Publishing Group and Pearson Education.