One of the first applications for AI in medicine will be to expedite operations like summarizing information, issuing prescriptions to pharmacies, and processing prior authorizations.

What excites you when it comes to AI in medicine?

Dr. Colleen Ryan: What fascinates me is AI’s potential to take over mundane tasks and to enhance human skills and collaboration. As a medical professional, I see AI reducing busy work for me. Using AI as a coworker and collaborator could be a game-changer, ensuring that I’m seeing the full picture and catching all the details. It’s exciting to think about AI as an enhancer of human skills and creativity, freeing us from low-level tasks and extending our physical capabilities. The future of AI-human collaboration is bright!

“Using AI as a coworker and collaborator could be a game-changer, ensuring that I’m seeing the full picture and catching all the details.”

Dr. Justin Lotfi: I believe there are two main areas where AI can be incredibly useful. The first is diagnosis. This is a personal passion of mine, and I have been using differential diagnosis generators for years. These tools help clinicians weigh multiple potential diagnoses for a given set of symptoms, but the current online generators are quite basic. AI has the potential to greatly improve this process by producing more accurate differentials and expanding our awareness of different diseases. It can help us overcome our biases and consider a broader range of possibilities.

The second area where AI can be valuable is in guiding therapy based on established guidelines. It’s surprising how often patients receive care that doesn’t align with these guidelines. For example, they may be prescribed the wrong medications or never be advised on important lifestyle changes, such as a low-sodium diet. Having AI as a collaborator can help ensure that the basics of care are covered properly, leading to better patient outcomes.

Do you imagine patients will want to interact with AI chatbots as part of their primary care?

Dr. Lotfi: We already have studies and anecdotes in which patients rated chatbot answers quite highly, on par with, if not better than, the human answers. That’s because chatbots can process a billion things at once, so they have all the time in the world to write an eloquent answer. But as soon as the patients found out it was a chatbot, they didn’t like it, and the program was shut down.

Dr. Nima Afshar: A recent article in The New York Times described an intriguing case in which a doctor used a chatbot to help deliver a distressing cancer diagnosis with a bleak prognosis. The chatbot’s responses came across as generic and lacking genuine empathy. I believe that as people become more acquainted with chatbot interactions, they will learn to discern between responses generated by a chatbot and those stemming from genuine human compassion. At Private Medical, we are fortunate to have the luxury of time to convey our authentic selves, whether that means crafting a personalized email or meticulously preparing an annual report. It is crucial to acknowledge that AI can never fully supplant the irreplaceable value of an in-person experience, which is precisely what we strive to provide at Private Medical.

“AI can never fully supplant the irreplaceable value of an in-person experience.”

Dr. Jordan Shlain: The in-person experience has a multidimensional nature that goes beyond the confines of text-based communication. The brain processes text in a linear way, while an in-person encounter involves not only three-dimensional presence but also the richness of emotion, feeling, and empathy. AI has vast language capabilities and excels at sifting and organizing information, but it cannot perceive the subtle cues, a note of depression or anxiety, that can be observed in a physical interaction. I vividly recall a patient who initially presented with back pain. It was only when I sat with her in the room and conducted a thorough examination that she began to cry and confided, “I actually came in because I discovered a lump in my breast, but I couldn’t bring myself to face it.” It was the power of being physically present that gave her the courage to discuss something profoundly frightening to her. AI is a valuable linear tool with immense computational prowess, but it lacks the inherent capabilities of the human experience, the five senses that enable us to perceive the nuance and depth of human interaction.

Are you concerned about issues of bias?

Dr. Lotfi: The large language models are trained on the history of all medical knowledge, which tends to be biased and heavily weighted toward Caucasian males. But thanks to an increased cultural awareness of implicit bias, these platforms have acknowledged the issue and are working to minimize bias in their models. With regard to diagnosis, I think AI will greatly enhance a physician’s awareness of their own cognitive biases, which are a well-established cause of misdiagnosis and medical error. For example, when a patient with heart failure arrives at the emergency room with shortness of breath, doctors tend to overlook pulmonary embolism as the cause of the symptoms, and an AI collaborator could catch these misses. There are dozens of cognitive biases in medicine that interfere with timely diagnosis.

Dr. Shlain: Recency bias is one example.

Dr. Lotfi: Yes, as well as confirmation bias and availability bias. I think the diagnostic applications of AI will reduce the medical errors that stem from physician heuristics.

What about issues of privacy?

Dr. Ryan: A prevailing apprehension about AI is the fear that our inquiries and interactions are being recorded, a concern that parallels the internet itself. Just as we protect patient information through measures like HIPAA, we exercise caution when sharing identifying details or summarizing extensive data with AI. From my perspective, AI is not inherently more distinct or perilous than the internet itself, as long as we remain mindful of the boundaries of safety and the information we entrust to it. By staying aware of where it is safe to engage with AI, and being discerning about the data we expose, we can mitigate the risks and use the technology responsibly and securely.

“AI is not inherently more distinct or perilous than the internet itself, as long as we remain mindful of the boundaries of safety and the information we entrust to it.”

In which medical fields is AI already having an interesting impact?

Dr. Shlain: AI has already demonstrated its utility in identifying molecules for cancer treatment, and it holds significant potential in infectious disease, particularly in the development of antibiotics and in drug discovery more broadly. Notably, emerging companies focused solely on AI-driven drug discovery are attracting substantial funding and support. This research can expedite the identification of new molecules, cutting out the most labor-intensive and time-consuming steps. As a result, it has the potential to make drug development more efficient and cost-effective, leading to more accessible and affordable medications.

Dr. Afshar: AI has recently shown that it can accurately predict a protein’s structure from its genetic code, which is a holy grail of molecular biology (it can take scientists decades to figure out these protein structures experimentally). Proteins are what we’re made of, and their three-dimensional structure is very important for predicting their function. Knowing that structure enables biologists to identify potential drugs that might target disease-causing proteins or virus proteins. Taking it a step further, scientists will soon use AI to create designer proteins: enter a few changes to a gene into a computer, and the protein folds in a way that performs a critical function that is missing in a disease. Maybe you could even design an enzyme to suck carbon dioxide out of the atmosphere. All these possibilities are opening up, where previously it took years and years of painstaking work to make any progress.

Has AI hit wearables yet?

Dr. Lotfi: At present, continuous glucose monitors provide fascinating data, but interpreting its significance requires medical knowledge and experience. For instance, you might eat a waffle and then wonder what the resulting glucose levels mean. With AI in the loop, it becomes conceivable to continuously analyze your dietary intake throughout the day: AI could generate comprehensive reports, pinpoint specific foods to avoid, and flag additives in certain foods that might otherwise go unnoticed. That application looks promising, but it is important to acknowledge that we are not there just yet.

Dr. Hela Barhoush: AI holds immense potential for infant sleep training. An app called Huckleberry already uses AI algorithms to generate personalized sleep plans based on an infant’s sleep patterns. Today parents input the data into the app themselves, but wearables, whether worn by the babies or built into sleep-monitoring devices, could make sleep training even more effective.

Another intriguing area is the diagnosis and management of ADHD in school-age children. At present, ADHD diagnosis and treatment rely heavily on questionnaires and subjective reports from parents and teachers. Wearables could capture more objective data, such as movement rates, task duration, and heart rate, providing a more accurate measure of ADHD symptoms and helping generate tailored treatment plans.

It is fascinating to envision the potential impact of AI in these domains, where objective data and AI-driven analysis could revolutionize current practices and improve outcomes.

What are your biggest concerns with AI in medicine?

Dr. Lotfi: We’ve already seen instances of AI hallucinations, with models creating false facts or false citations. Given that healthcare is a very high-risk setting, that’s a huge red flag.

Dr. Shlain: If individuals place unwavering trust in AI as a source of comprehensive information, they can come to significant harm. The danger is that people mistakenly assume they have become experts simply by relying on AI-generated information, and so they overlook the crucial nuances that can only be grasped through genuine experience and expertise. AI can offer valuable insights, but it cannot substitute for the depth of human wisdom and understanding. Relying solely on AI, without acknowledging its limits, can lead people to make misguided decisions and miss crucial aspects of a situation.

Dr. Barhoush: With younger children, who lack the ability to effectively communicate their thoughts and feelings, clinical experience plays a vital role in interpreting symptoms, and that expertise is crucial for accurate diagnosis and effective treatment. My concern is that AI may not have the clinical acumen to accurately decipher subjective reports from parents, so its output and any clinical recommendations it offers may be flawed. The nuance and context that clinicians bring to their interpretations may not be fully captured by AI systems, leading to misinterpretation or an incomplete understanding of the child’s condition. We have to recognize AI’s limitations here and keep clinicians’ expertise and experience at the center of diagnosis and treatment, particularly for children who cannot put their experiences into words.

Dr. Afshar: In an age when information is abundant and readily accessible, a paradox emerges: even with access to vast knowledge, we still need specialists, experts, and, particularly in health, doctors who can navigate and make sense of that information. These professionals know how to prioritize it and separate accurate information from falsehoods. It is commendable that individuals can obtain preliminary possible diagnoses for themselves, but they still rely on us, as healthcare providers, to validate and treat those diagnoses. At present, I see AI as similar to a Google search in its capabilities, lacking the comprehensive expertise and contextual understanding that human professionals bring to the table.

Dr. Lotfi: A study has shown that the manner in which an AI chatbot conveys information and recommendations can influence people’s likelihood of following its instructions. When the output is presented in an empathetic, friendly manner, individuals are more inclined to adhere to the recommendations; when it is delivered in a terse, robotic fashion, people are less likely to blindly follow the guidance. This suggests a potential safeguard: ensure that the information AI presents is clear and objective rather than tapping into our natural empathy, which may lead people to trust a chatbot unquestioningly. By prioritizing clarity and avoiding manipulative tactics, we can promote responsible, discerning engagement with AI-generated information.

How is Private Medical keeping its finger on the pulse of AI?

Dr. Shlain: The current focus of AI in medicine is on efficiency: How do we expedite summarizing information, issuing prescriptions to pharmacies, or processing prior authorizations? For now, AI applications largely target these operational aspects rather than the clinical side of medicine. Many medical practices are burdened with a multitude of administrative tasks, commonly referred to as “administeria.” So the immediate goal, for our practice and many others, is to become more efficient without compromising the quality of care we provide. That low-hanging fruit holds significant potential for optimizing medical practices and alleviating administrative burdens while maintaining the highest standards of healthcare delivery.

“Although AI will never be able to replace human relationships, it is undeniably present and holds significant power and potential. Ignoring its existence and hoping it will fade away is an ineffective approach.”

Dr. Ryan: Although AI will never be able to replace human relationships, it is undeniably present and holds significant power and potential. Ignoring its existence and hoping it will fade away is an ineffective approach. Instead, we should seek to understand how AI can benefit both us and the people we serve. It is crucial to stay attuned to the areas where innovation is advancing quickly and where potential is flourishing. By staying informed and embracing the evolving landscape, we can make prudent decisions about integrating AI, harnessing its capabilities to enhance our services and meet the evolving needs of our members.

This conversation has been edited and condensed.