Whether you call it artificial intelligence, AI, or augmented intelligence, I think that the day when some IT product with “machine learning” becomes your most significant practice partner is probably not that far away. You may counter by saying that you have heard or read in the Wall Street Journal that IBM’s Watson is a bust in cancer care, or that Atul Gawande has big doubts about whether AI will ever be much of a diagnostician. Stat, the online health letter from the Boston Globe, shocked many a few weeks ago and added a little ethical odor to the discussion by suggesting that IBM was marketing the oncology application even though it had internal documents that suggested problems with its most ballyhooed AI product:

The documents — slides that were part of two presentations in June and July 2017 — contain scathing assessments and show that company cancer specialists and customers identified “multiple examples of unsafe and incorrect treatment recommendations” that didn’t stick to national guidelines.

I have had my own doubts about AI, but my doubts are not about whether it will ever be useful or what to call it. I just wonder when IBM, Google, Microsoft, and Philips, the companies developing AI for medicine, will realize that we do not need a machine that is smarter than a world-renowned professor at a famous academic medical center. We need a machine that can aid us with the mundane, routine processes that take us away from our patients, the same processes that we occasionally fail to do properly, with the result being injury to our patients or, at the least, economic inefficiencies that waste resources. Even better would be a machine that will record what we say and do with patients and tap us on the shoulder when we drift out of the best diagnostic lane. My wife’s new Toyota tells her when she is following too closely or drifting out of her lane. It also dims the lights when another car approaches, and then turns the brights back on when the approaching car passes. She is not happy that the windshield wipers do not come on automatically when it rains. Her “old” Acura knew that trick.

I have followed the field of AI with some special interest and a little bit of a privileged perspective for some time. I was an advisor to Simpler, the Lean consultancy, when it was purchased by Truven, and was very excited when Truven became part of the IBM Watson spending spree that is mentioned in the WSJ article above. From the beginning of Watson’s acquisition of Simpler, my suspicion has been that the executives who drive strategy did not recognize what a great opportunity they had created for themselves. So far they have not realized that for medicine to use a new technology, it has to meet a perceived need. Successful Lean projects usually begin by asking, “What is the problem?” Things change slowly in healthcare. We have a knack for ignoring what might help for a long time. We all know the folklore, dating back to Fleming’s discovery of penicillin, that it takes us about seventeen years to recognize that we have an answer to a perceived need. I have reasoned that the delay is in part due to the necessity of figuring out how the new tool properly fits into our workflows. More accurately, it takes us a while to realize that we must change our workflows to optimally integrate a new tool into what we do. It seemed like a no-brainer to me. Lean could be used by IBM to develop the workflows that would allow us to optimally utilize their wonderful tool. That is, if we are willing to change.

Despite all the negative press, I continue to see AI as the greatest potential leap forward since fiber optics, MRIs, CT scans, and echo technology were introduced into our diagnostic armamentarium. I speak with some perspective because I was practicing before any of these everyday technologies were more than a technological possibility, and I saw each one of them get past some initial skepticism before becoming everyday tools that we could not imagine living without after we worked them into our pathways of care. I well remember doing sigmoidoscopies with a stiff metal tube. I also remember the day Lou Teichholz (remember the Teichholz formula for LV volume?) showed me some squiggles on a small screen that was sitting on top of a huge stack of electronic equipment that he had rolled to the bedside of my patient. He claimed that he was looking at my patient’s heart with the same technology that the navy used to look for submarines. If you had asked me on that day in 1971 what the future of ultrasound might be, I would have had to answer that I thought it would be a little while before much came of it. I would have been wrong.

Despite what Stat or Atul Gawande says, and almost simultaneously with the publication of the WSJ article, I was surprised this week when, at the Quality Committee meeting of one of the boards I serve on, the Chief Quality Officer began to discuss our progress on our 2018 quality objectives in comparison with other institutions, based on an analysis done by Watson. There are effective AI applications to improve the reading of diagnostic tests and pathology slides. You may say that these are small incremental advances and not worth the hype or the billions that have been invested so far, but I would disagree. I call them a beginning.

I have expressed my skepticism about the “online medical journals” that now flood my virtual “in basket” in exactly the same way their predecessor paper versions once littered the little wooden box on my desk, where the MA dumped them and they sat until I threw them away. That is why we called them “throwaways.” Nevertheless, Medical Economics, one of the premier “throwaways,” published a nice article last week on AI in primary care. The author, Mary Pratt, documents the progress toward my dream for AI. Pratt references an industry expert, Erik Louie, MD, chief medical officer at HealthBox, a healthcare innovation consulting and fund management services firm in Chicago.

Louie and others envision a variety of different uses. AI could screen patient data from physician notes, test results, and other data sources and combine that with information from protocols, clinical studies, and recommendations to identify a patient’s condition, which follow-up tests are needed, and which medicines work best. In other words, AI will offer significantly more sophisticated versions of today’s clinical decision tools.

It may not seem like a big deal, but AI can manage all that data from FitBits and other personal monitors that patients would love to send in to their PCPs. The part of the article that really resonated with me was the help that AI will provide with all the documentation and office “scut work” that keeps so many clinicians chained to their computers until late in the evening.

Physicians and their care teams, even those not normally involved in eye care, can use the technology to screen their patients for the condition during routine office visits.

In addition, physicians soon could have AI take their notes, analyze their discussions with patients, and enter required information directly into their EHR systems. Physicians might even use AI to analyze indicators that they can’t, such as patients’ voices to detect anxiety or depression.

IT forecasters predict that this emerging technology will also improve many front- and back-office functions. For example, AI will guide patients through intake forms that it tailors based on responses, and AI will analyze patient records to determine how to most accurately submit claims for timely reimbursements with less chance of denials.

I know it is coming. I do not know how long it will take to impact your practice. I do not practice any longer. I am on the “other side of the sheet” now, and I want my caregivers to have all the advantages that big data, AI, and optimized workflows can bring to their lives and mine. Perhaps it is good that IBM got off to a bad start with oncology, because what you need, more than a brainy professor to help you with the exotic cases, is a reliable plodder at your side that is committed to continuous learning and to the elimination of many of the tasks that sap your energy and steal time from your attention to patients.

The picture attached to this post deserves some explanation. My original thought was to go with a light bulb. You know, the rote comic-book presentation of a bright idea. In my opinion AI is a very bright idea, and despite the resistance that often confronts bright ideas and slows their implementation, AI will generate a whole generation of bright ideas that will transform healthcare. As I was thinking about where I might find a “raw light bulb” to snap with my iPhone, I remembered the lights in my garage. My next thought was an even “brighter idea.” When the house was built in the mid-eighties, the garage was lighted by a few incandescent light bulbs that were mounted in the ceiling. I think there are similarities between the history of the light bulb and the emergence of AI. Incandescent bulbs had gradually increased in efficiency since 1879, when Edison announced his great invention. Despite their inefficiency, they were cheap, which meant that they were still the dominant form of lighting even after the much more energy-efficient fluorescents were configured in a way (CFLs, compact fluorescent lights) that allowed them to be used in the sockets originally designed for incandescent bulbs. CFLs used a lot less energy, but few of us were willing to fork over $30 for a light bulb.

There were incandescent bulbs everywhere when we bought the house ten years ago. In time we upgraded the bulbs in the garage to CFLs, and in other parts of the house there are now LEDs that are even less of an environmental risk and more efficient than the CFLs. Progress and competition in lighting have lowered the price of CFLs to less than $2.00, compared to $30 in the late ’70s. An LED that is much more efficient than a CFL sells for about $10 and is reported to last twenty-five years. There is more to my metaphor and picture and story than just light bulbs. I have trouble with light bulbs. Ever since I was a little boy, I have forgotten to turn off lights. Probably the most frequent admonition from my father to me was, “Turn off the lights!” Dad grew up in the Depression, and a light left burning in an unoccupied space was money going down the drain, or up in smoke.

This is where the garage opener that shares the spotlight with the CFL brings it all together. We remodeled everything in 2015. The house was essentially gutted, but the only change in the garage was the installation of new doors, each equipped with an automatic garage door opener. The door openers are equipped with motion detectors. When I step into the garage, the lights come on. After I leave the garage, the lights go off after a few minutes without detected motion. I never turn on the CFL ceiling lights unless I need extra light. How cool is that? The lights also come on when we use the remote to open the door. I know that the mechanism in my garage would not qualify as an AI tool; it just reacts as it is programmed to do. But it is a machine that has changed my behavior, just as it is possible that AI will soon begin to improve the experience of practice for both you and your patients.