Over 20 years ago, I began my informatics career by building out an enterprise electronic health record (EHR) for my hospital-based healthcare system. We had a medical staff of 700 physicians, most of them independent, along with half a dozen urgent care centers and a long-term care facility – the typical gamut of a reasonably large suburban healthcare enterprise.
And I had absolutely no idea what I was doing.
Fortunately, neither did anyone else. At the time, the idea of the electronic record was still developing. Federal regulatory standards for EHRs were more than a decade away, which left vendors free to build whatever they thought would sell, on proprietary standards, and to sell it into what was then a best-of-breed world.
From growing pains to tech gains
Physicians were ferociously independent and not especially enamored with the notion that a third party could govern their behavior. For example, back then it was common practice for our doctors to end a dictation by saying, “read and approved by Dr. John Smith.” That way, the medical records department would not have to track down Dr. Smith for a signature, because there was an explicit statement that he had actually reviewed what the transcriptionist typed up.
But by 2005, we were named one of the Most Wired Hospitals and, much to my joy, also honored in a category we were responsible for creating: Most Wireless. This was especially satisfying considering how outspoken some were against adopting new tech (like cell phones) at the time.
This meant we were able to expand our adoption of new tech successfully. Still, there’s another side to this rosy little EHR story. Plenty of mistakes were made along the way – like when we implemented software that irritated providers instead of supporting them, along with some that didn’t work at all.
Since those early days when I was a corporate buyer of healthcare IT, I’ve spent an additional 15 years on the vendor side. I’ve heard from hundreds of clinicians in dozens upon dozens of organizations. Here’s what I’ve learned.
What clinicians want to see in new tech
I think the average working clinician is looking, above any other single factor, for technology that eases their pain points.
When evaluating new technology, they will ask themselves: Will this make it easier for me to get through my day? Does it fit into my workflow, or is it one more task I am supposed to complete? Does it remove or simplify a task I find onerous? Does it keep me from making mistakes without driving me crazy?
These factors may seem obvious, but I’m often surprised at the disparity between what working clinicians are looking for and what the decision-makers in their organizations are looking for.
In addition to high-quality care, healthcare leaders are chiefly focused on financial ROI, productivity, patient satisfaction, and regulatory compliance. Clinicians, on the other hand, are the ones in the trenches – and they want to avoid finishing their charting in their pajamas.
I am convinced that one of the reasons there is so much EHR-driven burnout among physicians – particularly in primary care – is that those at the executive level (and perhaps the major EHR vendors themselves) rank simplifying the EHR for clinicians as “nice to have, but not absolutely necessary.”
In short, we can do better. The best way to recruit and retain good clinicians is to help them love their jobs, and that is not going to happen unless the IT tools central to their work make their days easier instead of more tedious. So, what should decision-makers or software developers look for if they want clinicians to embrace IT developments?
The Rule of S’s
When I was responsible for evaluating components to buy, I created a Rule of S’s.
The first is simplicity. Most healthcare organizations have already purchased a large single-vendor solution and trained their clinicians on its intricacies. Any additional IT must look and feel familiar or be very intuitive to retain simplicity. It isn’t easy to drive the adoption of new software that requires training hundreds of clinicians on brand-new paradigms. In my experience, clinicians will choose the familiar over the unknown if the learning curve is too high. There is no guarantee that the efficiency reward will be worth the effort to learn a new way of doing something.
The second is speed. Waiting for a response from the computer will kill software adoption. Your clinicians will begin to count. (They. Will. Count. Every. Click. That. They. Have. To. Make.) If the response is instantaneous, no one calculates the motions required. You can see evidence of this when you observe someone using apps on their cell phone. Every swipe or touch creates an instantaneous response; the user never considers each motion an individual event. But let there be even 1/10th of a second between every gesture and response, and the cumulative delay will kill that app.
The third is satisfaction. To be clear: this is satisfaction measured by the clinical users, not decision-makers like myself. I learned the hard way that getting enterprise recognition for putting clinical data on smartphones didn’t mean our physicians would actually adopt the technology. They weren’t interested in me finding ways for them to work more during their time off. Clinical adoption of new technology is not nearly as driven by executive governance as it is driven by satisfied end-users whose day has been made easier.
The fourth is safety. Medicine has become increasingly effective, complex, and dangerous. The only way to navigate that complexity is with healthcare technology: a clinician needs supporting IT to assemble a clinical picture, evaluate all the decision points, and choose the best plan of action.