AI-Composed Communications Pose Concerns for Industry and FDA

Wayne L. Pines
May 4, 2023 at 12:23 PM EST

One of the most fascinating developments in the evolving artificial intelligence (AI) universe is ChatGPT, an app and website that provides instant answers to any question in narrative form. ChatGPT can produce an essay on any topic within seconds, and the essay will contain not just “facts” but also insights and interpretations.

According to the Internet, “ChatGPT is an AI chatbot that uses natural language processing to create humanlike conversational dialogue. The language model can respond to questions and compose various written content, including articles, social media posts, essays, code and emails.”

ChatGPT has been available for six months – since November 2022 – and already has struck a nerve with academics, who fear that students will submit essays written by ChatGPT. In addition, companies fear that employees asked to conduct research will turn to ChatGPT to do the work and produce a report.

There are a number of existential issues associated with the use of AI to do what humans used to do. One issue of special concern is accuracy and reliability.

I put my own name into ChatGPT on my iPhone and asked for my own bio. I received a narrative bio within seconds.

But every single piece of purportedly factual information was wrong: the birthdate, birthplace, educational background, profession, employment history and authorships. There aren’t that many people with my name, but ChatGPT clearly provided someone else’s bio, or a totally fictionalized one. I did it again, this time using my middle initial, and it produced an entirely different bio – again all incorrect. The same happened when I asked for my son’s bio.

ChatGPT and other AI platforms undoubtedly will be used to seek health information, including information about prescription drugs and devices. These platforms have the capacity not only to provide “facts,” but also to interpret information and put it into context – for example, to compare features of various products. Thus, they have the capacity to answer questions such as which medical treatment is better or safer than another, based on the vast amount of information, or misinformation, available on the Internet.

It's a bit frightening to think that consumers and even health care professionals are likely to use ChatGPT and other AI platforms to answer specific, individual medical and/or product questions – that is, to provide medical advice. But, inevitably, that will happen.

FDA Commissioner Robert Califf has said that curbing misinformation about health care and medical products is one of his strategic priorities, calling misinformation a significant public health issue. Dr. Califf’s statement of concern was prophetic. Now we have AI systems that can convincingly provide answers to medical and medical product questions – answers inevitably based on incomplete or inaccurate information – that consumers or health care professionals could use to make individual medical decisions.

The advent of more sophisticated uses of AI poses challenges for all of us. From the standpoint of the FDA, which has statutory responsibility to oversee the accuracy of information about prescription drugs and devices, AI poses especially thorny issues. Does FDA have authority to regulate answers to medical product questions when at least some of those answers will originate with product websites sponsored by FDA-regulated companies?

It could be argued that apps such as ChatGPT do not come under FDA’s jurisdiction because they are not sponsored or controlled by a regulated entity. On the other hand, it could be argued that FDA could seek to regulate AI products like ChatGPT as multiple function medical devices, just as it regulates other apps. In that case, the potential scope and use of ChatGPT make regulation a daunting challenge for the FDA.

The use of ChatGPT and other AI platforms poses issues for product manufacturers as well. If a ChatGPT response contains misinformation about a product, is there any obligation on the part of the manufacturer to seek to correct it, and if so, how can that be accomplished? What is the product liability risk for a manufacturer if and when ChatGPT continually presents misinformation about a product? What is the FDA compliance risk if a manufacturer knows that ChatGPT responses are minimizing the risks of a product or advocating an off-label use? Is there even an obligation on the part of manufacturers to monitor AI platforms to see what is being said about their products?

ChatGPT and other AI platforms almost certainly will become even more popular and widely used as information sources for all of us. Hopefully, they will be able to provide greater assurance about the accuracy and reliability of their information. In the meantime, as the technology evolves, medical product manufacturers will have to consider the legal and regulatory challenges posed by the responses that appear on these platforms, especially when the information is incomplete or just plain wrong.

Editor’s Note: Wayne L. Pines is the editor-in-chief of Thompson Information Services’ FDA Advertising and Promotion Manual. He is a senior director at APCO Worldwide and a former FDA associate commissioner.
