Parsing Volume to Value, Proxy Measures, and the Streetlight Effect

Despite some concern that the migration from fee-for-service to value based payment (VBP) is being reversed, there remains strong momentum for VBP – nationally, in the form of the bipartisan 21st Century Cures Act that was passed and signed into law last December, and in many state and commercial initiatives, including the one I’m personally involved with: New York’s Delivery System Reform Incentive Payment (DSRIP) program.  Defining value, of course, is not easy.  I’ve often returned to Michael Porter’s short essay on this topic when I feel my definition meandering.  Go read it now.  Please.

Ok, you’re back?  Cool.  That was good, eh?  I love the last paragraph:

The failure to prioritize value improvement in health care delivery and to measure value has slowed innovation, led to ill-advised cost containment, and encouraged micromanagement of physicians’ practices, which imposes substantial costs of its own. Measuring value will also permit reform of the reimbursement system so that it rewards value by providing bundled payments covering the full care cycle or, for chronic conditions, covering periods of a year or more. Aligning reimbursement with value in this way rewards providers for efficiency in achieving good outcomes while creating accountability for substandard care.

As CEO of Alliance for Better Health, I’ve been working with care delivery organizations in our community to navigate the path forward.  They clearly have their feet in two canoes.  The majority of their reimbursement continues to come from traditional sources with a traditional structure: more patients seen = more money.  And then, from the edges, they have people like me telling them that the future is something different.  It’s new, it’s going to pay them to do something that they would like to do – but they’re not quite sure how to do it, and, yes, some fear accountability.

Walk before we run, or jump right into the deep end?  How do we traverse this gap between where we are and where we would like to be?  One framework says that there is no traverse at all: we need to leapfrog to tomorrow and start from scratch.  Iora Health is one such model.  Care providers are focused on personalized, proactive care.  The practice is led by health coaches, nurses, physicians, and administrators working together as teams to maximize health for the communities they serve.  Reduced cost is a byproduct of great care, not a target in itself.  The office workflow is different from a traditional practice, the architecture is different, the hiring practices are different, the EHR is different.  This model steps out of the old canoe and into the new one.  For those with the guts, it’s a great model.  For the rest, a slower path may work better.

Of the slower paths, there are a handful of options, and many of them are complementary rather than mutually exclusive.  Accountable Care Organizations represent a compelling alternative to the Iora-style leapfrog.  By offering a migration path – with increasing levels of shared risk – an ACO can coalesce a community of providers, collaborate with the federal government or commercial payers to standardize care for the better, and improve health outcomes.  There are many models of ACOs, but I would argue that a common thread for the successful ones is that they have maintained laser focus on two guiding principles:  a) success will attract the right partners;  b) great primary care is the keystone of an ACO.

Let’s parse this for a moment: why do I say that success will attract the right partners?  There is a misconception that one should start with the creation of a large ACO.  Growing the number of care delivery organizations will grow the number of “accountable lives” (people) and therefore, if one follows the “bigger is better” hypothesis, one can take advantage of the scale to reduce overall risk and create a more powerful negotiating lever with the payers.  While seductive, this hypothesis is flawed.  A big network is hard to manage, and an ACO will be forever “herding cats” if it starts too big.  It won’t see shared savings, and it won’t be able to meaningfully accept risk, because it can’t be confident that it will perform well.

An alternative model, and one that has been followed by all successful ACOs (which, of course, includes my friends at Aledade), is to start small.  For the first turn of the ACO wheel in a community, focus on a small group of providers who are “all-in.”  They are fully engaged and dedicated to the success of the program.  When successful, this attracts others – like moths to a light bulb – to the program.  The ACO can then attract great partners (great primary care providers) rather than working hard to corral everyone and then re-educate them to the new ways.  The difference, of course, is “pull” vs. “push.”  “Pull” usually works – and if it doesn’t, it wasn’t meant to.  “Push” never does.  We call this Motivational Leadership™.  (More on this in another blog post.)

DSRIP Performance.  Many states have DSRIP programs, and it’s beyond the scope of my essay today to explain what DSRIP is, or what exactly New York’s variant represents.  Today, our focus is on DSRIP performance.  Click on the image over there for a snapshot of what I mean.  Each line is a measure, and our performance against each measure will determine a payment from the New York Department of Health.  The program (more than) pays for itself: with improved health of a population, unnecessary acute care services are prevented.  Healthier people, better care experience, lower cost.  In that order.  One challenge that we have is that the dependent variable here is our community’s performance, yet we won’t know what that is for 6–12 months … which gets us to the heart of our story today: proxy measures and why we need them.

  1. Problem to solve: we want to pay our community for performance against DSRIP goals.  Most of these goals of course are measures.  We call them outcome measures – but internally we know that most of them are process measures.  That’s ok. It’s all a continuum. We’re not going to measure life expectancy (we don’t have 50 years) – so we’ll have to draw the line somewhere – and “preventable ED visits” (and the 38 other measures you can see by clicking on that thumbnail above) may be just fine.
  2. Hurdle to leap:  DSRIP funds have too long a payment lag.  Telling a CBO or small practice or a hospital CFO that “I’ll pay you Tuesday for a hamburger today” (I’ll pay you in 2019 for preventing ED visits now) just won’t work. It’s too far.  I can’t train my dog to sit by rewarding him in an hour.  I need to tie the positive reinforcement to the act that I’m reinforcing.
  3. Opportunity: we’ve created an incentive program in which we have committed to distribute funds (which we have in the bank) in advance of performance.  Up to 30% of the funds that could be earned this year will be distributed quarterly (up to 7.5% per quarter) for near-term performance.
  4. You are now asking the right (next) question:  “how will you know what near term performance looks like?”  Aaahh .. yes!  We will need to measure performance! In some cases (preventable ED visits) we will do our best to mirror DOH methods with the data that we have available from claims data, from clinical data feeds, and other sources that are available.  Of course – “data we have available” is a classic quality measurement challenge – the so-called “streetlight effect.” We’ll avoid that as much as possible by using proxy measures.
  5. Proxy Measures are therefore a big topic of conversation in these parts.  What’s a good one?  What’s not?  We want to let the community do some of this work – as thinking about how to measure value is a great exercise for them as they transition to value based payment.  We don’t need them to make these perfect!  That, I think, is the elegance of this model.  Worst case: they make easy proxy measures that look like success, get 30% up front, miserably fail on the “real” measures from DOH, and we get $0 at the end.  This is fine.  We will have tried – and they will have “cheated” us for 30%.  But we have the 30% this year to cover our experiment because of the evolution of New York DOH’s DSRIP program: this year, we still get some funding to support “pay for reporting.”  Next year, we shift to nearly 100% “pay for performance” and $0 for “pay for reporting.”  By allowing for this evolution, we encourage providers to experiment with proxy measures, allow them to be imperfect, all while pulling (not pushing!) forward into value based payment.  It’s unlikely that they’ll fail miserably and “cheat” us.  Much more likely is that this is enough to cause them to work really hard for true success.  The 30% is then just a pre-payment – and they’ll get the 70% next year when it flows from DOH for our extraordinary performance.

What’s an example of a proxy measure?  Ideally, a proxy measure is a perfect reflection of the “real” measure we’re aiming to satisfy.  So if we want to reduce preventable Emergency Department visits, and our performance measure will be “% annual reduction in preventable ED visits,” then a monthly (weekly? daily?) measure of this would be optimal.  Indeed, if we had rapid insight, we could intervene.  This is where quality measurement, if performed in real time, actually becomes decision support.  (This is a topic for another day …)  So here’s an example of a less obvious but perfectly reasonable proxy measure: if we accept the hypothesis that preventable ED visits are a given percentage of all ED visits, and the hypothesis that ED visits resulting in hospital admissions are less likely to be preventable ED visits (they represent conditions that merit a hospital admission), then if the proportion of ED visits that result in hospital admissions grows, one might conclude that the number/proportion of preventable (unnecessary) visits fell.  Long-term, this would be a terrible performance measure, since it may cause the ED staff to feel pressure to admit more patients.  But as a proxy for a reduced number of preventable ED visits, I think it does a nice job.  Do you agree?  Disagree?
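
To make that admission-proportion proxy concrete, here is a minimal sketch of how it might be computed from a periodic extract of ED visits.  The field names and visit data are hypothetical – this illustrates the idea, not our actual measure specification:

```python
from dataclasses import dataclass

@dataclass
class EDVisit:
    visit_id: str
    month: str       # e.g. "2017-06"
    admitted: bool   # did the ED visit end in an inpatient admission?

def admission_proportion(visits, month):
    """Share of a month's ED visits that ended in admission.

    Under the hypotheses above (preventable visits are a roughly fixed share
    of all visits; admitted visits are less likely to be preventable), a
    rising proportion is weak evidence that preventable visits are falling.
    """
    monthly = [v for v in visits if v.month == month]
    if not monthly:
        return None
    return sum(v.admitted for v in monthly) / len(monthly)

# Toy data: two months of made-up visits.
visits = [
    EDVisit("a", "2017-05", False), EDVisit("b", "2017-05", True),
    EDVisit("c", "2017-06", True),  EDVisit("d", "2017-06", True),
    EDVisit("e", "2017-06", False),
]
print(admission_proportion(visits, "2017-05"))  # 0.5
print(admission_proportion(visits, "2017-06"))  # ~0.67 – the proportion rose month over month
```

The point is only that such a proxy is cheap to compute from data already flowing to us, unlike the DOH result we won’t see for 6–12 months.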

You can play too, if you like.  Here is an editable spreadsheet with all 39 of our measures.    Add/edit columns with your ideas for proxies!  You can also see much of the baseline data for the DSRIP performance measures (and others) by poking around here.

Parsing eCW’s $155M payment to the government

UPDATE:  Here are the public records for the case.  More details there about the original complaint.  Excerpt:

@eClinicalWorks to pay $155M fine: https://t.co/RngioK62SX

But what’s it mean?

Certification

The purpose of any certification program is to create a method for the purchasers of a product to have confidence that the product safely and reliably does what it is supposed to do.  One example from USDA:

Turning Point for Meat Inspection

In 1905, author Upton Sinclair published the novel titled The Jungle, taking aim at the poor working conditions in a Chicago meatpacking house. However, it was the filthy conditions, described in nauseating detail—and the threat they posed to meat consumers—that caused a public furor. Sinclair urged President Theodore Roosevelt to require federal inspectors in meat-packing houses. The Pure Food and Drug Act and the Federal Meat Inspection Act (FMIA) became law on the same day in 1906. The Pure Food and Drug Act prevented the manufacture, sale, or transportation of adulterated or misbranded foods, drugs, medicines, and liquors. The FMIA prohibited the sale of adulterated or misbranded meat and meat products for food, and ensured that meat and meat products were slaughtered and processed under sanitary conditions.

In this case, the government moved to protect the public because a subset of meat packers was putting profit above public health.  After The Jungle was published, public outcry caused the government to step in and regulate the industry.  Regulation is not, therefore, a four-letter word.  It’s there to protect us from evildoers.

Before Health IT Certification

You may not remember this, but I do.  Before there was certification, Health IT development companies (some people call them “vendors”) created software and sold the software with claims of improved provider productivity, improved public health, and (yup) improved billing (among other impressive capabilities). Sometimes these claims were completely valid.  Sometimes they were not.  In the case where the developers’ functional claims were not quite valid, buyers had little recourse.  One might say “well, the markets will take care of that.  Bad actors will lose sales.”  But that’s not the case here for several reasons:

  1. The buyer may not be the one using the software.  A hospital buys software for clinicians.  Clinicians complain to hospital.  Hospital may or may not be a strong advocate with developer.
  2. It’s very hard to migrate from one system to another.  EHR purchase / deployment / optimization is a multi-year initiative.  If you bought a lemon, you may try to make lemonade, as the thought of migrating to something else will give you R11.0.
  3. Shame.  You don’t want your patients / competitors / peers to know that your EHR doesn’t work as expected.  You made a mistake.  Human nature is to hide our mistakes rather than treasure them as educational opportunities.

After Health IT Certification

The program isn’t perfect.  I was on the developer side of this work from ~2006 – 2010, when CCHIT was the only certification path, and then for ONC’s first iteration of certification (2010 – 2011).  Indeed, the imperfection of the program was a motivating factor for me to join the government in 2011.  I wanted to help evolve the program toward perfection.  Certification criteria needed to be less prescriptive (more flexible) but still provide sufficient guidance/structure so that they could be reliably tested and replicated.  This balance is hard to get right.  Sometimes we got it wrong.  Some have appropriately argued that there remains quite a bit of “check the box” busywork wherein health IT developers need to spend time building and configuring their software just to have it certified.  “Trust us and don’t make us do this busywork” was the persistent message from the developer community.  Recently, there have been renewed calls for scaling back the certification program and its many criteria, citing the maturation of the EHR incentive programs (meaningful use) and the fact that some of the certification criteria define capabilities that are not invoked by the incentive programs.  The argument is that ONC has no place creating certification criteria for capabilities that aren’t part of “meaningful use.”   I disagree.  The ECW case is a great example of why I disagree.

About the eClinicalWorks settlement

What happened?  You’ve read the press release from the Department of Justice by now.  Excerpts (my emphasis added):

In its complaint-in-intervention, the government contends that ECW falsely obtained that certification for its EHR software when it concealed from its certifying entity that its software did not comply with the requirements for certification. For example, in order to pass certification testing without meeting the certification criteria for standardized drug codes, the company modified its software by “hardcoding” only the drug codes required for testing. In other words, rather than programming the capability to retrieve any drug code from a complete database, ECW simply typed the 16 codes necessary for certification testing directly into its software. ECW’s software also did not accurately record user actions in an audit log, and in certain situations did not reliably record diagnostic imaging orders or perform drug interaction checks. In addition, ECW’s software failed to satisfy data portability requirements intended to permit healthcare providers to transfer patient data from ECW’s software to the software of other vendors. As a result of these and other deficiencies in its software, ECW caused the submission of false claims for federal incentive payments based on the use of ECW’s software.

As part of the settlement, ECW entered into a Corporate Integrity Agreement (CIA) with the HHS Office of Inspector General (HHS-OIG) covering the company’s EHR software. This innovative 5-year CIA requires, among other things, that ECW retain an Independent Software Quality Oversight Organization to assess ECW’s software quality control systems and provide written semi-annual reports to OIG and ECW documenting its reviews and recommendations. ECW must provide prompt notice to its customers of any safety related issues and maintain on its customer portal a comprehensive list of such issues and any steps users should take to mitigate potential patient safety risks. The CIA also requires ECW to allow customers to obtain updated versions of their software free of charge and to give customers the option to have ECW transfer their data to another EHR software provider without penalties or service charges. ECW must also retain an Independent Review Organization to review ECW’s arrangements with health care providers to ensure compliance with the Anti-Kickback Statute.
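
For readers who aren’t developers, here is what the “hardcoding” described above means in practice – a purely illustrative sketch, with invented drug names and codes, not ECW’s actual software – contrasted with the certified capability of looking codes up in a complete vocabulary:

```python
# Purely illustrative – invented drug names and codes, not ECW's software.

# "Hardcoded" approach: only the specific codes the certification test script asks for.
HARDCODED_TEST_CODES = {
    "drug-a 81 mg tablet": "111111",   # hypothetical code
    "drug-b 500 mg tablet": "222222",  # hypothetical code
}

def lookup_hardcoded(drug_name):
    # Passes the scripted test, but fails for every drug not typed into the list.
    return HARDCODED_TEST_CODES.get(drug_name)

def lookup_certified(drug_name, vocabulary):
    # The capability certification is meant to verify: retrieve any code
    # from a complete, maintained drug vocabulary loaded from a database.
    return vocabulary.get(drug_name)
```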

Summary:

  1. ECW faked their certification testing.  The examples above are just examples.  There are other instances wherein ECW faked the testing and convinced the testing body that the software could do something that it could not do.
  2. Since the software was not certified, any physician or hospital who received incentive money and attested to the use of certified software was in fact fraudulently attesting to the meaningful use of certified software.
  3. The Corporate Integrity Agreement (CIA) requires ECW to:
    1. Accept oversight from an independent third party
    2. Notify customers of any safety risks (there are many)
    3. Provide software updates for free
    4. Support migration to other EHRs for free

So what?

Reminder:  The purpose of any certification program is to create a method for the purchasers of a product to confirm that the product safely and reliably does what it is supposed to do.

  • ECW is a big company, with ample resources.  Their software is used by tens of thousands of clinicians every day.  The lives of their patients (many of whom are Medicaid beneficiaries – as many federally qualified health centers use ECW) depend on the safety and reliability of this software.
  • ECW’s software doesn’t do what they claim.  Now that the government has investigated, and is holding them accountable, the message is clear:  the public’s interest is being protected.
  • ECW is not the only organization that cheated on certification.  They may have been the biggest, boldest violator, but they were not the only one.  Others have already started internal reviews of their own performance, and those who have not yet done so are very likely to do so tomorrow.  This case is a wake-up call for the CEOs of all EHR development organizations to dig deep, and have honest direct conversations with the small teams that prepared the software for certification testing, and performed certification testing with the test labs.  My advice to these CEOs:  look these teams in the eye and ask “is the software that was tested exactly the same as the software that is available to all of our customers?”  In many cases, the answer is no.  As Farzad tweeted today, a very common case is in the domain of interoperability:  can the system exchange data as it was certified to do?  All too often, the answer, when we speak with the EHR developers, is “yes, if you pay us $$$ to enable that capability for you.”

Today’s announcement is the culmination of several years of work – all focused on protecting the public and holding a company accountable.   As I’ve asserted both privately and publicly, the regulatory infrastructure of EHR certification is important because sometimes there will be bad actors.  While the “good actors” may be inconvenienced and annoyed by processes that seem unnecessary, for such important elements of our nation’s health infrastructure, we can’t have government abdicate this responsibility.  With or without the incentives programs, or MIPS, or 21st Century Cures, there is a set of capabilities that these systems need to have, and for which they must be certified.  The breadth of this set of requirements, and the depth of certification testing for any criterion are the needles that must be threaded.  I’m confident that Don Rucker and the team at ONC can navigate the balance well.

National Coordinator 6.0: A Blueprint for Success

 

1.0 Brailer
2.0 Kolodner
3.0 Blumenthal
4.0 Mostashari
4.1 me
5.0 DeSalvo
5.1 Washington
5.2 White
6.0 Rucker

Now that it’s public, I’ll offer my thoughts on the next steps for Don and ONC.  Don Rucker is a good pick for the nation, and will be a great National Coordinator.  I’ve gone on record as saying that some others are not qualified, and as many of you know – I don’t mince words.  Don is smart, focused, thoughtful, intentional, and will make good decisions for ONC and HHS.  I have known Don for 20 years.  He’s got a long track record of integrity, he’s a nice person, he deeply understands the challenges, limitations, and opportunities of Health IT.  I have no doubt that he’ll do a good job.  He’s got a lot on his plate.

Where should he focus?

  1. Stay the course with health IT certification.  I disagree with the growing meme that ONC has broadened its certification scope too far.  Certification has one purpose:  to provide consumers with a way to be confident that the product they are purchasing will do what the seller says it does.  Some people seem to have forgotten (or don’t know) that some of the companies that sell health IT solutions have claimed that the products do things they do not do.  There needs to be a process by which these claims are tested, verified and, yes, certified.  If this program is scaled back, health IT systems will be less safe, less interoperable, less usable, and less reliable.  #KeepCertification.
  2. Keep the Enhanced Oversight Rule in place.  My former colleagues (and Don’s former colleagues) in the vendor community will disagree, as do some of the House Republicans.  As Don will learn first hand in his initial few weeks as NC, some of the companies that have been selling certified health IT products have been misbehaving.  In some cases, products have been de-certified.  In other cases, there have been investigations and resolution of problems without de-certification.  ONC is protecting the public by doing what Congress asked it to do initially.  The certification program is more than testing of products in a petri dish; it’s about what happens with the products in the real world.  Surveillance is therefore a necessary part of making sure that the products do what they were certified to do.  #KeepOversight.
  3. Trim ONC.  Under National Coordinators 1.0 and 2.0, the organization was small, and focused on two things:  policy and standards/certification.  With ARRA, the organization grew to support the REC program, the HIE program, the SHARP program, and many smaller grant/cooperative agreement programs.  ONC staff grew fivefold, and with that growth came the distractions of the grant programs and the expense of salaries and physical space required to support such a large team.  ARRA is over, and ONC now has responsibility for a small number of grants.  ONC should retain its autonomy (it should not become a daughter of NIH or CMS) but should now contract back to the small organization it once was.  Grants (with the people managing them) should migrate to AHRQ.  The policy work of ONC should focus on interoperability (much of the work assigned to it by Congress in the 21st Century Cures Act), certification, and the usability and safety of health IT.  ONC’s standards work should focus on acceleration of standards for health IT systems, through very tight collaboration with HL7 (also required by 21st Century Cures). #TrimONC  #FocusOnCertandStandards

That’s it.  The three-legged stool of ONC’s future success.  On a silver platter, for ya, Don!  Have fun!  The people at ONC are hard-working, dedicated public servants.  They are excited to work with you.

BTW, thanks, Jon.  You will forever be 5.2 to me.  Great job.

 

Career Transitions

I had lunch yesterday with an industry colleague who was recently let go from his job with a big company.  He is now doing what everyone told him to do:  network, network, network – to find the next big thing.  Our lunch was of course part of that networking.  Who do I know?  Which companies might be hiring people with his skills?  His sense of urgency (despite the generous severance package) was palpable.

Initially, I answered his questions:  we brainstormed about this company or that company, and I offered to introduce him to some CEOs who may have interest in him.

But after the server brought our lunch, and the cadence of our conversation slowed a bit as we ate, I offered some unsolicited advice:

SLOW DOWN

I suggested that he take the advice expressed in this HBR article and multiply by 10.  Take a month and go away.  Really unplug.  Reconsider what “success” looks like.

He enthusiastically (almost) agreed, and reflected on a trip he’ll take soon to California in which he’ll interact with start-up companies and perhaps land a role with one of them.

I smiled.  “That sounds like networking to find a new job.”

“Got me.    It is.    Hmmmm”

This is normal, and common, and an unfortunate consequence of the hamster wheel many of us get trapped on.  As this post or this one remind us, we’ve been trained to define ourselves through the notion that we will BE happy after we HAVE ____ so we can DO ____.  The loss of a job gives us a great opportunity to question this flawed logic:  “When I HAVE (the perfect job, the promotion, the corner office …) then I will DO (important work) and I will BE (perfectly happy).”  One can fill in the blanks and substitute anything else:  money, nice car, giant house, etc.  HAVING seems to come first.  But with loss, we can flip this all on its head if we give ourselves time to question.

What if we can BE (happy) first?  Will that change what we HAVE and DO?  Might it change how we approach our careers, our relationships, our aspirations?

Yup.

It’s (very) hard to make this shift.  Cognitively, it seems to make sense.  My lunch partner nodded and agreed that he needed to take time off to really let go.  But to him, “time off” was a weekend.  He’s lost the thing that has defined his identity:  the job, the work, the salary.

And it’s easy for me to sit across from him and tell him to re-think how he’s defined success.  He’s had a “successful” career, just like Barack Obama and Hillary Clinton and thousands of political appointees who used to work for the federal government.  Now they get to re-think how success is defined.  It’s an opportunity!

 

 

Why do I run marathons?

That’s me over there in the white shirt.  Six days ago.  It was a day that I had worked toward since ~ March.  I’ve done this every year since 2010.  For about 15 minutes after the marathon, I was sure that it was my last.  I ran a 1/2 marathon about 3 weeks ago, and it was amazing.  I loved it.  I was downright gleeful afterward.  Not so after 26.2 – especially NYC, which has a very difficult last 5 miles.  Just walking the ~ 3/4 mile after the finish was painful.   Why do I do this every year?

  1. Resilience
  2. Discipline

That’s it.  If you know someone who has run a marathon, you know a person who has chosen to find both of these within themselves.

Notice that I don’t say that these people HAVE resilience or HAVE Discipline.  As Carol Dweck describes in Mindset, resilience is a core component of human success.  We can learn resilience, and we can un-learn resilience.  Every day – we are offered opportunities to give up, ease back, let it slide, be sad, be angry, or stop trying to do our best.  Do you struggle with these challenges?  Yeh – I thought so.  Me too.

Discipline, not unlike resilience, is hard to find, hard to maintain, and easy to lose.  In a world burdened by attention deficit trait (warning: pdf), it’s easy to get sidetracked away from the good habits we try to build.

It’s impossible to run a marathon without both of these.  The marathon is – for me – a way to keep myself focused enough, resilient enough, and disciplined enough to get it done – whatever “it” is today.

You can too.  What’s your marathon?

Service Innovation

“Shift the business models”
“Re-align the incentives”
“Fee-for-value”

These phrases are not new.
Nor are the concepts they represent.

Yet we’re starting to see new experiments from the federal government, from states, and even small communities that demonstrate a new willingness to deeply engage in understanding and overcoming the barriers to true change in how we improve health.

Notice that I said “improve health.”  I didn’t say “improve health care.”  This is not just a semantic nuance.  When we conflate care and health, we accept the fundamentally flawed assumption that in order for people to be healthy, we must in some way intervene and care for them.  This assumption forms the basis of many traditions that pervade our broken system: in medical school and residency, I was taught that the individual with depression needs a medication, rather than improved coping skills.  I was taught that the individual with diabetes needs a nutritionist rather than an exercise partner.  I was taught that the individual with hypertension or hyperlipidemia needed medications, regular lab work, and bi-annual follow-up visits, and I was taught that otherwise healthy adults needed an annual physical exam.

We now know that this medical education I received – as have tens of thousands of physicians, nurses, care coordinators, quality managers, hospital and health plan administrators and government officials – is in many cases based on a set of traditions rather than science.  Marcia Angell’s compelling work on our (mis)management and misunderstanding of mental illness is a sobering review of how we’ve managed to create a generation of people who are dependent on the medications that we thought would help “cure” them.  Zeke Emanuel has reminded us of the paucity of evidence for the “annual physical” and makes a strong case for eliminating it entirely.    Finally the evidence for exercise as an essential component of prevention of (and management of) diabetes is well known, but when I recently asked a 3rd year family medicine resident what would be his choice as first-line intervention (I chose my words carefully) for a patient with newly diagnosed type 2 diabetes, his proud and instantaneous response was “metformin” rather than “exercise.”

These traditions, steeped in the very human need to be needed, find their common ancestor in the assumption that these people need us to get better.  We sought careers in health care so that we can care for others.  So that we can help them.  We can rescue them. We can “make a difference.”  Early in my career, as a young medical school faculty member, these are the words I would hear as I interviewed medical school applicants.  Help.  Care.  Save.  I never heard the words that will form the basis of our new model of health:  Empower, Educate, Witness, Listen, Learn, Share.

The genesis of this new thinking comes from several communities – all working at the edge of public service.  The edges, as I’ll discuss below, are where we see the birth of true innovation.

Positive Deviance

The term “Positive Deviance” has a long history – dating back to the nutrition research literature of the 1960’s,  popularized in the 1990 book (pdf) by Zeitlin, Ghassemi and Mansour.  The model is based on observations that “positive deviants are children who grow and develop adequately in low-income families living in impoverished environments, where a majority of children suffer from growth retardation and malnutrition.”  What is different about the positive deviants?  Can we learn from them, and amplify their success by sharing their success with others?  Can we empower the community to find strength and success, rather than import and impose our own views?  Of course.  Over the last three decades, the Positive Deviance Initiative has used these principles to learn from communities, empower them, and facilitate better health and better lives for millions of people worldwide.

Motivational Interviewing

William Miller and Stephen Rollnick summarize the work of many psychologists, social workers and physicians through the 1980’s in their book Motivational Interviewing, published first in 1990.  The basis of the work that framed MI is the same principle expressed in a joke that my dad (a psychiatrist) used to tell:

 

Q:  “how many psychiatrists does it take to change a light bulb?”

A:  “only one, but the light bulb has to want to change.”

 

MI reminds us that we can’t change people. People change themselves.  Sometimes with our facilitation, sometimes despite our intervention.  Always from within.

Both Motivational Interviewing and Positive Deviance place the important emphasis where it belongs: in the wants/needs/hopes and wishes of the individual.  The smoker who chooses to keep smoking will always smoke, regardless of our judgement of them.  Can we motivate rather than judge?  Can we empower rather than diagnose?  Can we really listen?  (Alas, no.  As this study reminds us – physicians interrupt patients after 12 seconds.)

In last week’s Sloan Management Review, Clayton Christensen observes that “… when the business world encounters an intractable management problem, it’s a sign that business executives and scholars are getting something wrong — that there isn’t yet a satisfactory theory for what’s causing the problem, and under what circumstances it can be overcome.”

So here’s my theory: traditions have shaped how health care delivery has evolved in the US and most Western cultures.  These traditions were inherited from the “expert-based medicine” of the 1950’s and 1960’s and from the paternalistic medicalization of many of our societal challenges, and they have been compounded by economic forces that positively reinforce intervention over empowerment, education, and true engagement.  The “patient centered medical home” of 2016 is no more patient centered than most primary care practices of the 1990’s, despite the dedicated work of many at NCQA and elsewhere to describe the attributes of a true “patient centered” experience.  In order to break away from these traditions, we need to begin at the edges.

A colleague asked me today how I would shape a DSRIP program if I were to design one from scratch.  My response is a confluence of the theories of Motivational Interviewing, Positive Deviance, and Clayton Christensen’s theory of disruptive innovation.  Christensen argues that a new-market disruption is an innovation that enables a larger population of people who previously lacked the money or skill to begin buying and using a product (or service).

DSRIP Background:  DSRIP programs exist in New York, New Jersey, Massachusetts, Texas, Kansas and California, and are a product of a CMS innovation program that:

  • Gives states autonomy to spend Medicaid money in new ways – toward a set of “triple aim” goals.
  • Must be budget neutral for CMS
  • Should facilitate transformative changes in the care delivery system

Here’s what I told her:

DSRIP 2.0 programs should focus on a small set of very tangible goals that align with the triple aim:

  1. Cost of care for a Medicaid population should be reduced by at least 25%
  2. Quality of care should improve (yes – this is hard to measure)
  3. The experience of individuals should improve (also hard to measure)
  4. Eliminate process measures, and any central attempt to dictate how the DSRIP participants achieve the goals. Yes – there can (and must) be accountability, but the accountability will exist in the form of reporting on progress toward achievement of the “triple aim” goals, rather than achievement of a set of prescribed milestones that must be traversed.

My colleague wasn’t happy.  My explanation was too simple! “Shouldn’t we hold them accountable for solutions we know are effective in improving the health of these vulnerable populations?”   I replied that this is exactly what the early Peace Corps volunteers did wrong when they imported ideas from Washington DC to communities in Asia: they assumed that they knew what was right.  “No.  DSRIP participants should be exposed to programs that have been successful, but they should have the freedom to achieve the goals in any manner they choose.”

A model for DSRIP 2.0

Positive Deviance teaches us that what works in one community may work in that community – and only that community.  The needs of a community are best understood and met by the members of that community.  Folks who enter and seek to improve the lives of those in such a community will need to be ethnographers first, and “fixers” second.  This calls for teams of DSRIP leaders who are trained in anthropology, design thinking, population health, and social work.  Doctors and nurses?  Yes – but they are in the back seat.
Who is doing this today?  Companies like ChenMed in Miami have Tai Chi classes, free transportation, and proactive care managers.

Motivational Interviewing teaches us that individuals make decisions because of internal incentives, not because authority figures tell them what to do.  This informs a DSRIP approach that is focused on listening rather than speaking, on amplifying individuals’ own interests in healthier living, and on offering ideas that will facilitate change during teachable moments rather than mandating new behaviors, or imposing penalties for behaviors that are unhealthy.  A DSRIP program might therefore hire teams of health educators from within a community: a trusted, yet trained group of people who can listen, empower, and facilitate change.
Who is doing this today? Community health workers in Massachusetts (pdf)  have helped to reduce costs, improve health, and improve health experience for thousands of residents.

Disruptive innovation teaches us that the non-consumers of services (in this case it is health – and not health care services – that is not being consumed) are the best entry point for new market entrants and new product creation.  We’re not going to create new hospitals in the next five years, nor will we change how they operate, how they incentivize their employees, or how they market their services.  So hospitals are the wrong places to invest DSRIP dollars.  Rather, DSRIP money will be best spent on community initiatives (see above) and innovative “point solutions” that can help communities reach the triple aim by addressing the health needs of individuals proactively.
Who is doing this today?  Vital Score (I’m an advisor and investor) identifies individuals at peak moments of receptivity and matches them to services that will improve their health.  Cohero Health created a metered dose inhaler that enables a care coordinator to track and monitor inhaler use in real time, and detects not just whether the inhaler was used, but whether proper technique was used.

As William Gibson may have said, the future is already here — it’s just not very evenly distributed.  While we often think of the fitbit-wearing, Volvo-driving soccer parents as opportunities for innovation in health, my hypothesis is that true change in the delivery of more health (rather than more care) will arrive in the form of DSRIP and other innovation programs.  The opportunities to build successful programs, successful companies, and healthy communities are (finally) plentiful – if we know where to look.  Inertia, combined with traditional payment models in traditional care delivery organizations, will work against any of the innovations that will truly serve these communities, and this is why the greatest improvements in health will occur as a byproduct of work that precedes “health care.”  This won’t be easy.  But we can do it.  As Yoda said:  “Do or do not.  There is no try.”

Advice to the new National Coordinator

Two and a half years ago, John posted an entry with this title – and I recall that it was a good summary of the state of the industry.  While I didn’t agree with all of his suggestions, I enjoyed the review and it offered a good set of guiding principles.  Since I was Acting National Coordinator for about the same duration as Vindell will serve (fall of 2013, after Farzad Mostashari departed and before Karen DeSalvo arrived), I’ll offer some thoughts from one who has been in his position.


  1. Certification.  The health IT certification program is the core of ONC’s responsibility to the nation.  While some have called for the eradication or reduction of the certification program, I would argue that this would be akin to scaling back Dodd-Frank.  Yeh – crazy.  As a product of ONC’s certification program, we now have health IT systems that do what their developers claim they do.  Before this program existed, creative health IT salespeople would assure customers that systems had functionality that simply didn’t exist, or was nonfunctional.  The program, like certification programs in other industries (telecommunications, transportation, etc.) is in place to assure the purchasers of products that these products do what developers claim.   Is the certification program perfect?  No.  Of course not.  The program needs to iterate with the evolution of the industry and the standards that are evolving.  Revisions to the certification program must therefore continue, so that the certification requirements don’t point to obsolete standards.  A focused “2015R2” certification regulation would therefore be an appropriate component of ONC’s fall work – so that something can be “shovel ready” for a new administration for ~ February release – with final rule in ~ April/May of 2017.

  2. The 2017 Spend Plan.  The 2017 federal budget appears to be on track to pass at some point soon – and ONC’s appropriation for 2017 is looking like it will land at a steady ~ $60M ($65M if the extra $5M for narcotic abuse prevention lands).  The National Coordinator defines the “spend plan” for how the organization allocates this money – and the plan needs to be developed and executed at the beginning of the fiscal year: October 2016.  The new National Coordinator is therefore making decisions now about how the funds will be spent over the next 12 months.  Office Directors are preparing proposed budgets for the year:  new FTEs, new projects that they want to launch.  Every year, it’s the same – just as it is in any large organization – proposals are submitted and the proposals represent 2x-3x the $$ available.  Tough calls need to be made.  The NC makes these calls.  It’s hard to do this when you don’t know who your successor will be in January – or what their preferences will be.  When I was in this position, I worked closely with the Office Directors and the ONC Chief Operating Officer (Lisa Lewis) to identify the components of the organization’s work that were essential, and which were not.  We delayed decisions on about $2M to give Karen some flexibility to fund programs that were important to her.

    As I mentioned in my response to Politico’s request for comments on the next phase of ONC’s path, my view is that it’s time to wind down ONC’s grants and health IT evangelism activity.  Perhaps it’s just my personality coming through here – as I am a well-known introvert, with little interest in quadrant 1 of the sizzle-substance 2 x 2 matrix (kudos to Janhavi for its invention) – but I am concerned that it’s not government’s role to convince the public of the value/need for health IT.  If health IT has value (and I believe it does) then this value will be tangible and self-evident to the public.  If not, then no annual conference, blog post, or challenge grant will change this fact – or anyone’s perception of it.  ONC’s annual meeting – an event that costs several hundred thousand dollars and attracts the same participants every year – adds rather little to the nation’s progress toward improved health through the strategic use of health IT.  Kill the conference.  Kill the health IT flag-waving.  There’s already plenty of that to go around, and the taxpayer need not pay for it.

  3. Focus on quality.  No – not quality measures.  Quality of health, quality of care, quality of decisions.  Do these need to be measured?  Of course they do – and with the growth of value based payment in federal programs upon us, measurement of quality is imperative.  But we have conflated the concepts of quality and measurement.  As many know, I’ve long been concerned that the way that we use clinical quality measures in health care is fundamentally flawed.  Indeed, it was my concern about these flaws that led me to join ONC in the first place:  as the CMIO at Allscripts, I was responsible for helping our EHR development teams meet the requirements of Stage 1 of the EHR incentive programs (“meaningful use”) and it became clear that the accuracy of quality measure reporting would be terrible across the industry.  Why was this?  Because the 2011 certification criteria and Stage 1 meaningful use requirements were too vague about the data that would be used to measure quality.  For example, a quality measure might express that patients with “severe congestive heart failure” would be expected to be on a certain class of medications.  But there was no clarity for how “severe” was to be assessed, and many EHRs didn’t even formally capture ejection fraction, which would be an imperative component of an assessment of the severity of one’s CHF.

    For Stage 2/2012 certification, we changed all of this, and while most readers don’t know or care about the details, these quiet changes represent the first important step toward improved quality measurement:  the data elements that are required for quality measures are explicitly identified in the certification regulation, and no measures are required that exceed the scope of these data elements.  Read the last sentence again if you need to – as it’s very important and this guiding principle remains ignored by NQF, by many commercial health plan quality measures, and by many state Medicaid programs that are trying to implement quality programs.  Simply put:  it’s impossible to report on data that was never captured.  A “quality measure” that assumes the presence of information in an IT system that is not present will be an invalid quality measure.  Period.  I thought/hoped we solved this problem in 2012.  Unfortunately, we did not.  Quality measures are still proposed without consideration for the data that EHRs have captured.  It’s now easy to know what the EHR can capture (what it can capture and what it has captured may of course differ).  Start with the NLM’s Data Element Catalog (Jesse James won the naming competition).  If the concept that you want to measure isn’t in here, then re-design your measure, because the EHRs don’t capture the data in a uniform manner.  If it is there, then the likelihood is high (but not certain) that the data can be captured, queried, and transmitted.

    Recall that I said our method of measuring quality is flawed.  Why is it flawed?  Because all of our focus is on quality measures rather than quality improvement, and improvement is a product of measurement and decision support.  Let’s parse this statement, beginning with the difference between measures and measurement.  A measure is an explicit logical statement about care delivery and its alignment with a very specific expectation.  For example, there is some evidence that individuals with diabetes will live longer if their blood sugar is well controlled, so there is a quality measure for this:  IF (individual has diabetes) AND (blood sugar is well controlled) THEN (quality measure satisfied).  Each of the logical expressions can be defined explicitly.  This measure can then be applied to thousands of care providers, and their “scores” on the quality of care they presumably offer can be compared.  But what if blood sugar control isn’t so important?  What if a better way to measure individuals’ optimal health emerges?  Measuring care quality with a list of measures is like having a speedometer in your car that measures 10, 15, 25, 37, and 55 miles per hour and nothing in between.  It’s a set of measures – hard-coded into the system – rather than measurement:  a fluid, adaptable system that enables us to see how we are doing and therefore to adjust our work dynamically if necessary.  How do we adjust?  With clinical decision support (CDS)!  As you will read in the chapter I wrote for Eta Berner’s just-published book on CDS, the federal government has done a great deal of work to enhance CDS capability in health IT systems, and to align it with quality measurement.  We’re not there yet – but we are well on the way.  Keep this on the front burner, and the path to the triple aim will be shorter and much less bumpy.  (A short sketch after this list shows what a measure expressed as explicit, computable logic over captured data elements might look like.)

  4. As my friend Jerry Osheroff always says – focus on the most important things:  TMIT.  Are we helping improve the health of people?  That’s most important.  Don’t lose sight of it.  Karen DeSalvo taught me many things – but the one I’ve internalized the most was something that she taught me very early in her time at HHS:  we need to shift our conversation from how to improve “health care” to how we improve health.
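
Returning to item 3’s point about measures, measurement, and captured data: here is a minimal sketch of a measure expressed as explicit logic over named data elements, with a check that every element it needs is actually captured.  The element names, threshold, and patients are hypothetical – this is not an NQF measure or a real Data Element Catalog entry:

```python
# Hypothetical illustration – not an NQF measure or an actual Data Element Catalog entry.

# Data elements the EHR is known to capture (a stand-in for a catalog lookup).
CAPTURED_ELEMENTS = {"diabetes_diagnosis", "hba1c_percent"}

# The measure as explicit logic: IF (has diabetes) AND (blood sugar well controlled) THEN satisfied.
MEASURE = {
    "id": "toy-diabetes-control",
    "required_elements": {"diabetes_diagnosis", "hba1c_percent"},
    "in_denominator": lambda p: p["diabetes_diagnosis"] is True,
    "in_numerator": lambda p: p["hba1c_percent"] is not None and p["hba1c_percent"] < 8.0,
}

def measure_is_computable(measure, captured=CAPTURED_ELEMENTS):
    """It's impossible to report on data that was never captured:
    reject any measure that references elements missing from the catalog."""
    return measure["required_elements"] <= captured

def score(measure, patients):
    """Proportion of the denominator population that satisfies the measure."""
    denominator = [p for p in patients if measure["in_denominator"](p)]
    numerator = [p for p in denominator if measure["in_numerator"](p)]
    return len(numerator) / len(denominator) if denominator else None

patients = [
    {"diabetes_diagnosis": True, "hba1c_percent": 7.1},
    {"diabetes_diagnosis": True, "hba1c_percent": 9.4},
    {"diabetes_diagnosis": False, "hba1c_percent": None},
]

if measure_is_computable(MEASURE):
    print(score(MEASURE, patients))  # 0.5 – one of the two diabetic patients is controlled
```

Pulling the elements and thresholds from a catalog rather than hard-coding them – and pushing the result into decision support rather than a retrospective report – is, roughly, the difference between a list of measures and measurement.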

Self-Driving Health

Lots of news about this recently.  Five years ago, you would shake your head and say “no way – not in my lifetime.”  Now you know that this is our future.  It will be safer, will save billions of dollars, and will have positive consequences we can barely imagine.  The kids need to go to soccer practice?  Send them.  Get the dog to the vet for his check-up?  Plop him in the car and off he goes. It’s real. It will happen.  Soon.

So why is it so hard for us to imagine self-driving health?  Do we have a crisis of under-supply of primary care?  Yes.  Today we do.  But I wonder if that’s because we’re asking the wrong question.  Earlier this week, I heard that we would need 60,000 additional primary care visits in our community to reduce the demand for non-urgent visits in our emergency departments.  If a primary care provider can see 25 patients a day – then we need ten additional providers in our community (ten providers × 25 visits = 250/day ≈ 1,250/week ≈ 5,000/month ≈ 60,000/year).  But what if those 25 visits that didn’t need an ED visit ALSO didn’t require a primary care visit?  What if “visits” in 8 x 10 exam rooms with white-coated professionals weren’t the solution?  Let’s play the “five why” exercise:

Donald Duck went to the Emergency Department

  1. Why did he go to the emergency department?   Because he didn’t feel well and wanted to feel better.
  2. Why didn’t he feel well?  Because he had a fever and cough and the medicine he bought at CVS didn’t help.
  3. Why did he have a fever?  Why didn’t the medicine he bought help?  Because he had a bad cold – maybe even the flu (didn’t get a flu shot) and wasn’t sure what to buy at CVS.  He bought some kombucha and aspirin.
  4. Why didn’t he get a flu shot?  Because he doesn’t like going to the doctor.  Only goes (to the ED) when he feels sick.
  5. Why doesn’t he like going to the doctor?  Because they never seem to listen to him – and doctors are for sick people anyway.  Why bother?

So what’s going to prevent Donald – and 24 of his friends – from going to the ED?  Is it another doctor with an open appointment?  No.  Education, empathy, caring people – who can help Donald understand what’s available to him to prevent illness, and what’s available to him when he is ill:  a phone call, some good trustworthy advice, and (yes) if necessary – a visit with a care provider.  But I’d argue that this is much less frequent than we assume.  Adding 60,000 visits is a short-sighted (and impractical) way to solve this problem.  We need to help Donald to find self-driving health:  tools that help him navigate, understand his goals, and get him from where he is to where he needs to be.

Volume to Value: it’s about the caboose

In his post last week, John Halamka expressed optimism:

I left HIMSS this year with great optimism. Vendors, technologies, and incentives are aligned for positive change. 2016 will be a great year.

Perhaps we were seeing different sides of HIMSS.

Yes, there is a “buzz” around the migration from volume to value. Walking the floor of the exhibit hall, it was hard to avoid companies – old and new – describing their population health / care coordination / analytics tools.

Yet I didn’t see very much that was really new – really focused on value. I saw re-configured versions of old stuff. One company has re-packaged off-the-shelf tools to create a “population health analytics toolkit.” Their marketing is fantastic – but peeling the onion – I couldn’t find anything that a smart team couldn’t put together themselves – for a fraction of the cost. Another multi-billion dollar company has re-branded the products they used to sell to the payor market – and is now pitching the same tools to the provider/ACO/CIN/DSRIP market(s).

Another facet of HIMSS that I can’t help but notice: insulting the consumer. Do they really think the market is this unsophisticated? (Dare I say “dumb?”) – massive booths, expensive displays – and cryptic product offerings abound. I listened to one company’s pitch, and walked away with my head spinning. I had no idea what they do. I’ve been involved in this industry for nearly three decades. If I didn’t understand it – I’d be very surprised if a new customer can.

This was my first HIMSS in many years where I attended with a buyer’s mindset: as acting CIO for one of the New York DSRIP PPS communities, I was viewing the market through a new lens. If we look carefully at the continuum of the current market – we see silos of activity:

 

 

  • Data Entry. This is today’s EHR. Despite some rudimentary embedded decision support and quality measure reporting, the EHR is a data entry tool, and unfortunately, the physicians are the ones doing the data entry. Is the UX better than it was a decade ago? Yes. Barely.
  • Data and information gathering
    • Note that I differentiate between the two. Data is reliable and based on an objective assessment of the natural world: a lab test result, a blood pressure reading, the fact that a procedure occurred such as a CABG or a BKA. Information is a byproduct of human thought (and therefore subject to a 50% error rate): a diagnosis, a patient’s past medical history, even a medication list should be considered information rather than data. Data has a much better predictive value. Information should always be viewed with suspicion.
    • Data and information need to be aggregated, normalized, and analyzed. This is the step that is often called “analytics” and is the domain where we see many companies currently engaging.
      • Most have descriptive analytics leading their product offerings: they offer a dashboard to show us where our best opportunities for improvement are. Which diabetics are not well controlled? Which patients have used the hospital the most?
      • Some companies also offer (or say they offer) predictive modeling of some kind – and will mention the use of natural language processing (NLP) and machine learning (ML) to describe how they are different from their peers. There is no shortage of such companies. If they say that their peers are not doing the same super special, top secret, incredibly unique things – they are usually wrong. Everyone has now invested in a small (and growing) data science team. This is the future. It’s just not evenly distributed. Such tools won’t just tell us which patients were sick or poorly managed; they will tell us both who will be sick or poorly managed and (much more important) how to prevent them from getting sick. For this patient – which intervention(s) will be best?
  • Action. At the end of the analytics event – we’re still left with something abstract: a chart on a dashboard, a list of high-risk patients, or even a list of things that could/should be done for a population of patients. It’s a list. A list of who and perhaps even a list of what (they need). For example – I might have a list of who needs a flu shot in my community. We know that a (much) better way to manage this opportunity every Fall would be to find a way to get them all a flu vaccine – but still – in 2016 – the vast majority of the time, we will wait for them to come in to an 8 x 10 exam room, wait for an alert to “fire” and distract a busy physician – and then hope that the alert causes an action. It’s amazing to me that we can’t do a better job than this. “Action” is the silo of the market that’s not yet been cracked. Analytics tools tell us what to do for whom – but they don’t deploy that knowledge to where it can be done. At HIMSS, I saw a tiny number of companies describing such “last mile” solutions – and yet this is the most important part. I know one thing for sure: the EHR (see above) is a terrible place to send the actions. It remains the data capture tool – and EHRs weren’t built to accept actions from elsewhere and/or deploy them to community workers, public health nurses, nutritionists, pastors or rabbis. They were built as the engine end of this value chain – not the caboose.

So if the EHR was built to capture data and information, and wasn’t built to catch and deploy actions – then perhaps it’s time to focus on the caboose. Most “care management” and “population health” tools were built for insurance companies – and therefore deploy the actions to a case manager: generally a nurse sitting in an office building. These folks are effective at what they do (managing the care of the 5% of the sickest members of a population) but they don’t scale — and they don’t get out into the community — where the real humans live.

The end of the chain, then, is the community worker, the public health nurse, the individual, the family member. How can we empower these folks? How can we tell them what needs doing? And how can we capture feedback from them (was it done? was it not done? why? what other barriers are there to optimal health?)?

The caboose is a set of point solutions that leverage the lists generated by analytics. If analytics is the platform, the caboose is a set of applications that deploy the right actions to the right people, and then capture (new) data and information in a much more granular way. These new tools will replace the EHR in the long run — and will feed data back to it in the short term.
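
What might that caboose look like in software?  A minimal sketch, with hypothetical fields and roles – not any vendor’s product – of an action routed from an analytics-generated list to a person in the community, with feedback captured as new, granular data:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """One row from an analytics-generated list: who needs what, and who will do it."""
    person_id: str
    what: str                      # e.g. "flu shot"
    assigned_to: str               # community worker, public health nurse, family member ...
    done: Optional[bool] = None    # feedback: was it done?
    barrier: Optional[str] = None  # if not, why not? (transportation, cost, trust, ...)

def deploy(actions, worklists):
    """Route each action to the right person's worklist instead of waiting
    for an exam-room alert to fire in the EHR."""
    for action in actions:
        worklists[action.assigned_to].append(action)

def record_feedback(action, done, barrier=None):
    """Capture the granular data and information flowing back from the community."""
    action.done = done
    action.barrier = barrier

# Toy usage
worklists = {"community-worker-01": [], "public-health-nurse-02": []}
flu_list = [Action("p1", "flu shot", "community-worker-01"),
            Action("p2", "home BP check", "public-health-nurse-02")]
deploy(flu_list, worklists)
record_feedback(flu_list[0], done=False, barrier="no transportation")
```

The interesting part is the feedback loop – was it done, and if not, why not – which is exactly the granular data that today’s EHRs and payer-oriented care management tools were never built to capture.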

Why was I not so optimistic as John about HIMSS? Because I don’t see the market creating these solutions yet. I see them re-packaging their old stuff and putting “population health” labels on it.

Which just won’t do. We deserve better.

Open the APIs. Build the platform. Trust.