The goal of healthcare analytics software is to provide both a clear picture of hospital and physician performance and a foundation for data-based decision-making. Performance improvement, readmission or cost reduction, and mutually beneficial partnerships can all be uncovered with the proper data analytics tool. Every data vendor has its own unique selling points, but a few key differentiators affect the reliability of the data and the depth of insights you can obtain.
Andrew Johnson, PhD, director of data science at Quantros, recently answered nine questions on data analytics for Becker’s Hospital Review.
Question: As healthcare shifts from fee-for-service to value-based care, what, from a data analytics perspective, are the most important indicators hospitals should keep their finger on the pulse of?
Dr. Andrew Johnson: Every healthcare organization has its own selected set of “vital signs” it tracks, and the shift to value-based care will likely add a few new measures to its monitoring practices. These indicators should include provider compliance with evidence-based best practices; any outcome measures associated with value-based purchasing contracts; and patient experience metrics.
Q: Why can’t hospitals gain insights from their own systems, such as EMRs?
AJ: This is because EMRs’ primary design goals are the capture and retrieval of health information and the facilitation of billing operations, not analytics. EMR companies have been enhancing the data science, analytics and reporting capabilities in their products in recent years, though many of their solutions don’t allow for much local control of functionality. Dashboards are often created as one-size-fits-all, an approach that only works when the data-generating processes and architectures are identical across participating organizations. They also lack comparative data, so there is no visibility into how performance benchmarks against peers with similar patient populations. As the saying goes: “If you’ve seen one EMR, then you’ve seen one EMR.” Because of the relative uniformity of claims data versus EMR-sourced clinical data, claims-based analytics products are typically more mature in this respect.
Q: Why is risk adjustment important when looking at a hospital’s care performance data?
AJ: In statistical practice, the most interesting questions involve a comparison. For example, while it is useful to estimate an absolute performance or quality score for a single entity, it is even more powerful to make relative comparisons between multiple entities. To do this comparison fairly requires adjustment for patient characteristics. I like to say there is no such thing as a “showroom-new” patient: Patients presenting for care are an amalgamation of their lived experiences, environmental exposures, health behaviors, genetic predispositions, etc. If we are to account for the contribution to their health status made by their providers, then we have to adjust for where they started at the beginning of the clinical encounter. The key point is to appropriately attribute variation in observed patient outcomes to the proper inputs, which include both patient and provider-level factors. If we didn’t do this, we’d be rewarding providers who had less complex patient populations and penalizing those providers who were willing to treat more complex patients.
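The fair comparison Dr. Johnson describes is often operationalized as an observed-to-expected (O/E) ratio: each patient's expected risk comes from a model of patient characteristics, and a provider's observed outcome count is divided by the sum of those expectations. Here is a minimal sketch of that idea; the logistic coefficients, the single comorbidity-count predictor, and the patient data are all invented for illustration, not part of any Quantros model:

```python
import math

# Hypothetical risk model: probability of an adverse outcome as a
# logistic function of a patient's comorbidity count.
# The coefficients (-2.0 intercept, 0.5 slope) are illustrative only.
def expected_risk(comorbidities: int) -> float:
    logit = -2.0 + 0.5 * comorbidities
    return 1.0 / (1.0 + math.exp(-logit))

def observed_to_expected(patients) -> float:
    """patients: list of (comorbidity_count, had_adverse_outcome) tuples.

    O/E = 1.0 means outcomes match expectation for this case mix;
    above 1.0 means worse than expected, below 1.0 better.
    """
    observed = sum(1 for _, outcome in patients if outcome)
    expected = sum(expected_risk(c) for c, _ in patients)
    return observed / expected

# Provider A treats a sicker panel (more comorbidities) than Provider B.
provider_a = [(4, True), (5, True), (3, False), (4, False)]
provider_b = [(0, False), (1, True), (0, False), (1, False)]

# Raw rates alone would make A look worse (50% vs. 25% adverse outcomes),
# but after risk adjustment A performs exactly as expected (O/E = 1.0)
# while B's single event exceeds its panel's low expected risk.
print(round(observed_to_expected(provider_a), 2))  # 1.0
print(round(observed_to_expected(provider_b), 2))  # 1.66
```

The design point mirrors the interview: without the expected-risk denominator, the provider willing to treat complex patients would be penalized for case mix rather than care quality.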
Q: Why is it important for data models to be peer-reviewed?
AJ: All models are simplified assertions of how one thinks the world works. They take the tangled webs of causality and association and attempt to condense them into discrete, articulable pieces in a much-reduced framework. To this end, I’d quote the statistician George Box: “All models are wrong, but some are useful.” Given that all models are imperfect, our primary concern in model development is an economy of explanation: We want to build the most valid model while simultaneously resisting its over-complication. The practice of modern scientific reasoning has long accepted that peer review increases the validity and reliability of its discoveries, and we should similarly allow for introspection into the models that we hold forth to our customers. The primary conflict around the “white-box” inspection of models (at least in the marketplace) is a desire to retain proprietary information, to protect the model’s recipe. My own experience has taught me that the business value of a model is in its deployment, not its construction. I believe the majority of the value we provide to our users comes from the reliable execution of the analytic product, and not solely the underlying model’s ingredients or assembly method. Because of this, I’m more willing than most to share details of a model’s construction with interested users, as doing so increases their trust in the product. And if they don’t trust it, they won’t use it.
Q: How do you think hospitals should root their strategies in data?
AJ: The first step must begin with the commitment of senior leadership to data-driven practice and operations, and the investment in analytic governance, architecture, personnel and technologies. They have to “put their organizational capital where their mouth is” and create governance structures that can push analytics throughout the organization. This can (and usually should) involve the creation of a chief analytics or data officer position, and their subsequent alignment with the aims of the chief quality officer and CMO.
Q: What are the strengths of Quantros’ existing model, and of the future model now moving into production?
AJ: The existing model has the benefit of being widely accepted and proven as a valid scoring measure through its long and successful use by clients to achieve their provider profiling-related goals. The future model will allow users to explore provider quality across combined inpatient/outpatient episodic arcs by using industry-standard episode- and procedure-based groups as the units of analysis. I like to think of patients’ collections of encounters with healthcare providers as an iceberg: Inpatient encounters are the most visible components above the waterline, though the bulk of the iceberg is made of outpatient encounters, out of sight of most analytic products. So, you need to understand and safely navigate with respect to the whole collection of clinical encounters, and not just what you see above the surface in inpatient settings.
Q: Why is it important for hospitals to get a viewpoint of their performance across all care settings?
AJ: Hospitals are only one component of the larger healthcare provision infrastructure, and all those parts are interconnected in their shared influence on patient outcomes. Outpatient encounters tend to book-end inpatient encounters, so insight into outpatient care quality is critical to understanding both the inputs and outputs of inpatient-based care processes. The best-quality hospital encounters can be weighed down by their association with lower-quality outpatient care encounters, and vice versa. For hospital leadership to make this conceptual leap requires expanding the traditional definition of attribution. Where before it was primarily causal in nature, it should shift to more associational terms. Where we once thought, “This bad outcome was due to some other provider, so we’re fine,” now it’s more helpful to think, “We are a part of a larger system that collectively generated this bad outcome, whether we directly caused it or not, so we should work to fix it.” This also aligns with the risk-sharing mechanisms of value-based healthcare, where a provider is on the hook for outcomes regardless of the particular causal factors.
Q: What gets you excited about your role working for a data analytics leader?
AJ: I’m very excited to contribute to the further development of a product that already has such a stable technological base and years of proven performance. I’m aiming to infuse more modern statistical methods into our quality product to ensure their continued success in the future. A common theme I’ve observed in the practice of healthcare analytics is that there’s always more value one can squeeze out of a data asset, often by combining with other data, so…
Q: What is the next enhancement beyond incorporation of outpatient data that you would like to see?
AJ: We have plans to incorporate social determinants of health (SDoH) data into our claims-based models to give a fuller picture of the contributing factors to health outcomes, so stay tuned for that!
As Director of Data Science, Andrew Johnson, PhD, leads a team of data scientists to enhance the quality and cost risk-adjustment models that power Quantros’ analytic offerings.
Quantros delivers critical, actionable insights into healthcare providers’ performance. Our proprietary composite quality scoring assesses the reliability of hospitals and physicians in avoiding adverse events, in turn providing valuable performance improvement insights and provider profiling for employers, brokers, vendors and tech-enabled service providers.
Episodic patient quality and cost scoring for facilities and physicians, including hospital inpatient and outpatient data, will be available at the end of 2020. This innovation will allow Quantros to deliver a line of sight into provider performance across the continuum of care, a view that has never before been available in the industry.
Andrew’s healthcare data science work has been published and quoted widely, appearing in dozens of scientific journals, papers and presentations. Prior to joining Quantros, Andrew led HCA Healthcare’s National Group data science team, where he directed the design and construction of clinical and operational predictive models, and also led a team to win first place in AHRQ’s 2019 Bringing Predictive Analytics to Healthcare Challenge. In addition to his industry experience in healthcare data science, he has held faculty positions in Public Health Administration at the University of Kentucky, and currently holds a faculty appointment in Healthcare Informatics at the Medical University of South Carolina.