Grand Rounds Blog

This white paper was co-written by Sasha Small and Nate Freese.


In the first two installments of our blog series on measuring quality, we explored why members aren't getting matched with high-quality care and why measuring provider quality effectively matters.

In this final installment, we look at a fundamentally different approach to measuring provider quality, one that is differentiated and personalized to each member's needs. This is the model that Grand Rounds uses for its employer clients.

Discover more in our recent white paper, Grand Rounds’ Approach to Quality Measurement.


Four Key Factors Feed Into Our Approach to Measuring Provider Quality

  1. Novel quality measures to capture critical and often overlooked aspects of provider quality
  2. Dynamic quality scoring to match provider skill with member needs
  3. Machine learning to more accurately predict providers’ performance
  4. External validation to ensure that our standards are in line with those of objective industry experts

To assess physician performance, Grand Rounds developed a methodology that optimizes for outstanding, high-value clinicians who deliver cost-effective care. In addition to evaluating them against traditional metrics such as cervical cancer screening rates and adherence to oral diabetes medications, we rate physicians on proprietary quality metrics such as benzodiazepine-prescribing patterns and new-patient retention.


Matching a Member’s Clinical Needs to the Most Qualified Doctor 

In the last couple of years, we’ve gone a step further by developing a Match Engine that enables hyper-personalized matching between patient and provider. By employing dynamic provider scoring, our Match Engine makes it possible for every member who conducts a provider search to have their specific clinical needs matched to a physician based on that provider’s clinical skill and cost-effectiveness.

Contrast our dynamic Match Engine with traditional quality measurement models, which lack the depth and breadth of model diversity needed to build frameworks that accommodate a given member's personal profile and needs. Moreover, traditional models rely on a narrow slice of data, whereas our Match Engine incorporates a wide variety of data, increasing the predictive power of our quality models.

The accuracy of our models is made possible by machine learning, which creates a feedback loop that continuously improves them. Machine learning also allows us to scale our analytics, incorporating ever-growing amounts of data to rapidly build more predictive models that support members' search for the best, most appropriate care available to them.


Getting External Validation From the Wider Provider Community

We recognize the importance of partnering closely with the provider community as a necessary element in raising the standard of care in the U.S. With this in mind, we brought on Veracity Healthcare Analytics to review our physician quality metrics.

Led by Dr. Niteesh Choudhry, a professor at Harvard Medical School and a renowned researcher in the field of physician quality measurement, Veracity conducted a validation study to evaluate our approach. The results of the study were extremely positive:

  - 100% of metrics confirmed for clinical face validity
  - 92% of metrics supported by published studies
  - 95% of measure specifications deemed appropriate

We plan to continue our engagement with Veracity, as well as with other experts in the wider community, to validate our methodology as we add more inputs to our models. We'll be sure to share the results of our continued efforts to connect members with the highest-quality physicians.

Download our white paper to learn more about our rigorous, data-driven approach to provider quality measurement.
