The 10th Annual Utah Health Services Research Conference: Data: What’s available and how we use it is changing. By: Danielle A. Lloyd, MPH – Premier
Health Services Research Conference: March 16, 2015
Patient Centered Research Methods Core, University of Utah, CCTS
In three words, our vision for improving health delivery is about better, smarter, healthier.
If we find better ways to pay providers, deliver care, and distribute information:

Focus Areas / Description
Incentives: Promote value-based payment systems (test new alternative payment models; increase linkage of Medicaid, Medicare FFS, and other payments to value); bring proven payment models to scale
Care Delivery: Encourage the integration and coordination of clinical care services; improve population health; promote patient engagement through shared decision making
Information: Create transparency on cost and quality information; bring electronic health information to the point of care for meaningful use

HHS Announcement: Better Care. Smarter Spending. Healthier People.
We can receive better care.
We can spend our health dollars more wisely.
We can have healthier communities, a healthier economy, and a healthier country.
Source: CMS
Target percentage of Medicare FFS payments linked to quality and alternative payment models in 2016 and 2018:

FFS linked to quality (Categories 2-4): 85% in 2016; 90% in 2018
Alternative payment models (Categories 3-4): 30% in 2016; 50% in 2018
(Shares are of all Medicare FFS payments, Categories 1-4.)

Source: CMS
Payment Taxonomy Framework

Category 1: Fee for Service – No Link to Quality
Description: Payments are based on volume of services and not linked to quality or efficiency.
Medicare FFS: Limited in Medicare fee-for-service.

Category 2: Fee for Service – Link to Quality
Description: At least a portion of payments vary based on the quality or efficiency of health care delivery.
Medicare FFS: Majority of Medicare payments now are linked to quality (Hospital Value-Based Purchasing, Physician Value-Based Modifier, Readmissions/Hospital-Acquired Condition Reduction Program).

Category 3: Alternative Payment Models Built on Fee-for-Service Architecture
Description: Some payment is linked to the effective management of a population or an episode of care. Payments are still triggered by delivery of services, but there are opportunities for shared savings or 2-sided risk.
Medicare FFS: Accountable care organizations, medical homes, bundled payments, Comprehensive Primary Care Initiative, Comprehensive ESRD, Medicare-Medicaid Financial Alignment Initiative Fee-For-Service Model.

Category 4: Population-Based Payment
Description: Payment is not directly triggered by service delivery, so volume is not linked to payment. Clinicians and organizations are paid and responsible for the care of a beneficiary for a long period (e.g., >1 yr).
Medicare FFS: Eligible Pioneer accountable care organizations in years 3-5.

Source: CMS
A quick intro to Premier, for those of you who may be rusty.
We are an alliance of about 3,400 hospitals – that’s about 68% of US community hospitals – coming together with a mission of improving the health of communities by transforming healthcare. And we are about doing that from the inside out.
[Stats on right…]
We have three strategic objectives…
So our strategy addresses a risky paradox – it is about surviving what’s left of FFS in a reform environment and doing all the cost cutting linked to that, while investing in a pop health future with no blueprint.
And your forum here today is about ENVISIONING THE FUTURE. So my hope is to share some of the experiences and lessons learned within our alliance – so that you all can thread those through your discussions today.
Let’s first set some context for this discussion – and maybe some good context for your day as well.
Who we are
What we do (focus on collaboration)
How we gather and use data
Some of the obstacles we have encountered
In three words, our vision for improving health delivery is about better, smarter, healthier.
If we find better ways to deliver care, pay providers, and distribute information, we can receive better care, spend our dollars more wisely, and have healthier communities, a healthier economy, and a healthier country.
We understand that it’s our role and responsibility to lead … and we will.
What we won’t do – and can’t do – is go it alone. Patients, physicians, government, and business all stand to benefit if we get this right, and this shared purpose calls out for deeper partnership.
So we will continue to work across sectors and across the aisle for the goals we share: better care, smarter spending, and healthier people.
$170B bill, expect to pay for $70B-
debating 2-5 year SCHIP reauthorization ($20B-25B)
Medicare/Medicaid extenders for 2 years ($25B)
An extension of the “two-midnight” RAC audit moratorium before it expires on March 31;
The Premier-supported readmissions bill introduced this week by Reps. Jim Renacci (R-OH) and Eliot Engel (D-NY), the Establishing Beneficiary Equity in the Hospital Readmission Program Act of 2015 (H.R. 1343) (more on that below). Rep. Renacci has conveyed to leadership and the Ways and Means Committee his desire to have it included in the legislative package.
The Protecting the Integrity of Medicare Act (PIMA) of 2015 (H.R. 1021); and
The Notice of Observation Treatment and Implication for Care Eligibility Act or the NOTICE Act (H.R. 876).
SGR repeal & annual updates
0.5% increase in physician payments for 4 yrs, then a freeze through 2023 – NOTE: Senate Finance Committee has 0% instead
Beyond 2023: physicians in advanced payment models (APMs) receive 2% annual updates, all others receive 1%
Time to develop quality measures & clinical improvement activities
Value-Based Performance (VBP) Payment Program
2017, payments adjusted for physicians’ performance in prior period
2017: Consolidate PQRS, VBM & EHR MU into VBP
4% tied to performance in 2017; 6% in 2018; 8% in 2019; 10% in 2020 & beyond. Secretary can increase funding pool in 2021 and beyond to no more than 12%
Maximum upside and downside adjustment equal to funding pool % (e.g. +/- 4% in 2017)
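The funding-pool schedule and symmetric adjustment cap above can be captured in a few lines of Python (the names are mine; the Secretary’s option to grow the pool to at most 12% from 2021 onward is noted but not modeled):

```python
# VBP funding pool as a share of payments, per the schedule in the notes.
VBP_POOL = {2017: 0.04, 2018: 0.06, 2019: 0.08}

def max_adjustment(year):
    """Maximum upside/downside adjustment equals the pool % for that year."""
    pool = VBP_POOL.get(year, 0.10)  # 10% for 2020 and beyond
    return (+pool, -pool)

print(max_adjustment(2017))  # (0.04, -0.04)
print(max_adjustment(2022))  # (0.1, -0.1)
```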
Professionals will be measured on:
Quality
Resource use
Clinical practice improvement activities
EHR MU
Encouraging provider participation in APMs
APM participating providers exempt from VBP; receive annual 5% (2017-2022)
Significant share of revenues must be from APM with 2-sided risk and quality measurement
Reimbursed according to payment arrangements of model
NIH Federal data sharing: “LIMITATION.—Subsection (a) does not authorize the Director of NIH to require the sharing of— (1) any individually identifiable information with respect to a human subject participating in the research; or (2) any trade secret or commercial or financial information that is privileged or confidential.”
Data sharing framework: Not later than 90 days after the date of enactment of this Act, the Secretary of Health and Human Services shall convene a meeting of stakeholders (including patients, researchers, physicians, industry representatives, health information technology providers, and the Food and Drug Administration) to provide advice to the Secretary on enhancements to the clinical trial registry data bank under section 402(j) of the Public Health Service Act (including enhancements to usability, functionality, and search capability) that are necessary to implement paragraph (7) of section 402(j) of such Act, as added by subsection (a).
The Secretary, acting through the Commissioner of Food and Drugs and the Director of the National Institutes of Health, shall enter into a collaborative agreement, to be known as the Clinical Trial Data System Agreement, with one or more eligible entities to implement a system to make de-identified clinical trial data from qualified clinical trials available for purposes of conducting further research.
Software Act: Not later than 24 months after the date of enactment of this section, the Secretary shall promulgate final regulations to establish standards, policies, and procedures for— (A) classifying medical software; (B) standards for the development of medical software; (C) standards for the validation and verification of medical software; (D) review of medical software; (E) modifications to medical software; (F) manufacturing of medical software; (G) quality systems for medical software; (H) labeling requirements for medical software; and (I) postmarketing requirements for reporting networks and the reporting of adverse events.
Interoperability: Placeholder section to allow for greater interoperability (no language in the discussion draft)
Using health data for research: The Secretary shall allow the use and disclosure of protected health information by a covered entity for research purposes, including studies whose purpose is to obtain generalizable knowledge, to be treated as the use and disclosure of such information for health care operations described in subparagraph (1) of the definition of health care operations in section 164.501 of title 45, Code of Federal Regulations (or any successor regulations).
Biomedical research working group: The Director of the National Institutes of Health shall serve as the Chairperson of such working group. The Biomedical Research Working Group shall (1) review literature and reports on— (A) administrative burdens of researchers funded by the National Institutes of Health; (B) improving replicability of research funded by the National Institutes of Health; (2) provide recommendations to the Director of the National Institutes of Health to— (A) reduce such administrative burdens, including with respect to the extent to which (and how) the grant proposal submission and progress report requirements of the National Institutes of Health should be restructured, streamlined, and simplified; and (B) improve replicability of research funded by the National Institutes of Health; (3) evaluate and provide recommendations on the extent to which it is required for Congress to provide any statutory authority to implement any recommendation proposed pursuant to paragraph (2); and (4) prepare a plan, including timeframes, for implementing recommendations proposed pursuant to paragraph (2) for which congressional action is not required.
Telemedicine: Beginning not later than 4 years after the date of the enactment of this section, the Secretary shall implement a methodology to provide for coverage and payment for a telehealth service (or episodes of such services). The Secretary may waive any provision of such section that applies a limitation on what qualifies as an originating site, any geographic limitation, or any limitation on the type of health care provider who may furnish such services.
At Premier, we think that measurement should begin with the end in mind – what is it that we are trying to accomplish
As a company, we start with a strategy – which is something that is relatively fixed. From there we understand what goals we need to achieve and what key initiatives will drive that goal. Once we understand the goals and the processes we intend to use to achieve them, the measures follow.
Something obvious to achieving success and worth stating, is that each goal and each initiative has an owner – that is someone who is unambiguously accountable for success or failure.
Let’s contrast this to healthcare. Do we, as a nation, know what it is we want to achieve? Do we determine the processes that are likely to get us there? Do we agree on who is accountable? If not, why do we measure?
One of our Premier information products is a solution for organizations that need to monitor physician performance as part of the Joint Commission OPPE requirement [Ongoing Professional Practice Evaluation]. It is helpful for looking at such things as mortality rates, readmission rates, complications, etc. It is amazing to me how many times physicians can look at this data and say, “Oh, that’s not my patient.” It’s an interesting paradox: three, four, five physicians, maybe more, will see the patient and submit a bill, but no one seems to be responsible for the patient as a whole.
Another thing that might not be obvious is that to be successful, you don’t need a whole lot of measures. Just because something is easy to measure does not mean it is useful to measure. When the Medicare Shared Savings model first came out, it proposed something in excess of 60 measures of quality. Premier had proposed a much smaller set, and after push-back from the provider community, CMS pared the list down to 33.
In hospital parking lot
Walking distance to major academic medical center
Across the street from fire station with paramedics
Yet, no access – are we measuring what really matters?
Distance to a doctor might matter a bit, but are they accepting patients with your insurance?
Are we honoring patient values (using the term broadly) – e.g., wants to ride a bike 100 miles again
Not measuring taste of food or parking – but these can have an effect if the patient can’t get back into the car.
This is the power of hospitals coming together to collaborate. We measure and compare results. We use data to find and target improvement opportunities. We test simple ideas, see if they work and spread them across providers….building bridges of knowledge and improvement across the nation.
Through QUEST, hospitals transparently share their data, creating consistent measures for top performance - measures that we make publicly available so that anyone can replicate them and use them to assess their own performance.
They also share their experiences and knowledge. And they work together to rethink status quo ways of doing things. Having 350 hospitals working together leads to breakthroughs that are implemented quickly, broadly and consistently. It scales the best ideas and ensures we don’t have pockets of excellence, but system-wide excellence. And it provides ongoing monitoring to ensure any progress we find gets locked in and made permanent.
And this is how QUEST hospitals are collectively achieving big, system-wide improvements.
To give you a better understanding of exactly how QUEST hospitals are making these improvements, I’d now like to turn it over to Dr. Knych from Adventist Health System – one of the largest health systems in QUEST.
We can include the new QUEST reports – this may be too detailed.
Plus we know our approach is working.
Here is just a sample of what we have been able to accomplish
And because we have some real skeptics on our advisory committee, we wanted to make sure that these results were not simply due to selection bias. So our researchers built econometric models that included hospital effects as well as secular trends, and we were able to show there was still some factor – we call it a “QUEST effect” – active in the QUEST hospitals but not in the others.
SCRIPTING
QUEST collaborative members are achieving impressive results. Data for the first four and a half years of the collaborative show that QUEST participants:
Prevented an inpatient death 91,840 times
Note this is not the same as 91,840 lives saved:
the death could have occurred in another setting
one patient can have his/her death prevented several times (every time they are admitted) but represents only one unique life.
Reduced healthcare spending by nearly $9.13 billion, and
Provided approximately 80,128 additional patients with all appropriate, evidence-based care for the clinical conditions assessed throughout the entire time period (subset of EBC measures!).
All QUEST measures and standards of top performance are publicly available, on our Web site, and can be adopted by any hospital nationwide.
METHODOLOGY NOTES
These results are based upon all QUEST members that joined in 2011 and are cumulative. The data source for mortality and EBC was the PAD (not all QUEST members are represented), and the data source for cost was the OperationsAdvisor database (missing data dropped some QUEST hospitals from inclusion).
Deaths Avoided
Based on mortality rates and case counts for all QUEST hospitals in our cohort with mortality data in CY 2011.
Cumulative results are shown in the table
Calculated by comparing changes in observed to expected mortality rates between baseline and each year following, up through the end of Q2 2012
Baseline for each hospital was dependent on when that hospital joined QUEST
Data vintage factors applied as they are for our cohort mortality trend analyses
All-cause inpatient mortality, no exclusions
For reference, equation used:
Deaths avoided = cases in current year × ((observed mortality baseline rate – observed mortality rate in current year) – (expected mortality baseline rate – expected mortality rate in current year))
This was repeated for each year of the QUEST performance period
Results were summed across years
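The deaths-avoided equation above is easy to mis-read, so here is a short Python sketch; the function name and example numbers are mine, purely for illustration:

```python
def deaths_avoided(cases_current, obs_base, obs_current, exp_base, exp_current):
    """Risk-adjusted deaths avoided in one year versus baseline.

    The drop in the observed mortality rate is netted against the drop
    in the expected (risk-adjusted) rate, then scaled by current-year
    case volume, so improvement explained by case mix is excluded.
    """
    return cases_current * ((obs_base - obs_current) - (exp_base - exp_current))

# Illustrative numbers only: 10,000 cases; observed mortality fell from
# 2.5% to 2.0% while expected mortality fell from 2.4% to 2.3%.
print(round(deaths_avoided(10_000, 0.025, 0.020, 0.024, 0.023), 1))  # 40.0
```

Per the notes, this is computed for each year of the performance period and the yearly figures are then summed.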
Dollars Saved
Based on cost per discharge and case counts for all QUEST hospitals in our cohort with cost data in CY 2011
Cumulative results are shown
Calculated by comparing changes in average ADJUSTED cost per discharge (adjusted for inflation using Bureau of Labor Statistics inflation estimates for inpatient hospital services) between the baseline period and each year following, up through the end of Q2 2012
Baseline for each hospital was dependent on when that hospital joined QUEST
For reference, equation used:
Total cost savings per year =
(Average cost per discharge in baseline – Average cost per discharge in current year) multiplied by total discharges in the current year
This was repeated for each year of the QUEST performance period
Results were summed across years
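A minimal sketch of the savings arithmetic with made-up numbers (the names are mine); the sign convention here makes falling inflation-adjusted costs read as positive savings:

```python
def cost_savings(baseline_cost, current_cost, discharges_current):
    """One year's cost savings versus baseline.

    Both cost-per-discharge figures are assumed to already be
    inflation-adjusted (e.g. via the BLS inpatient-services index),
    so the difference reflects real change.
    """
    return (baseline_cost - current_cost) * discharges_current

# Illustrative only: adjusted cost fell $300/discharge on 20,000 discharges.
print(cost_savings(9_800, 9_500, 20_000))  # 6000000 -> $6M for that year
```

As with deaths avoided, the per-year figures are summed across the performance period.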
EBC Improved
Based on EBC cases and total case counts for all QUEST hospitals in our cohort with EBC data in CY 2011
Cumulative results are shown
EBC measures are limited to those present across the entire 4.5 years of the QUEST performance period (dropped measures and new measures NOT included)
Calculated by comparing changes in the rate of EBC performance between the baseline period and each year following, up through the end of Q2 2012
Baseline for each hospital was dependent on when that hospital joined QUEST
For reference, equation used:
Total EBC improvement per year =
Patients receiving EBC in current year – (rate of patients receiving EBC in baseline X total patients eligible for EBC in current year)
This was repeated for each year of the QUEST performance period
Results were summed across years
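The counterfactual in the EBC formula is the baseline compliance rate applied to this year’s eligible patients; a sketch with invented numbers (names mine):

```python
def ebc_improvement(ebc_current, baseline_rate, eligible_current):
    """Additional patients receiving evidence-based care versus what the
    baseline compliance rate would have produced on this year's volume."""
    return ebc_current - baseline_rate * eligible_current

# Illustrative only: 9,200 of 10,000 eligible patients received EBC this
# year, against a baseline rate of 88% (which would predict 8,800).
print(round(ebc_improvement(9_200, 0.88, 10_000)))  # 400
```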
Extrapolation figures
If all hospitals could achieve these results:
Mortality – deaths avoided over 4.5 years = 91,840 (QUEST) × 10.4 (national/QUEST discharge ratio) ≈ 955,000 deaths avoided over 4.5 years. Note that about 750,000 people die in hospitals each year.
EBC – additional patients getting EBC over 4.5 years = 80,128 (QUEST) * 13.5 (national/QUEST disch) = over 1 million additional patients getting EBC nationally over 4.5 years.
Cost uses only the ratio of acute-care discharges, which is 3.8 million for QUEST and 39 million nationally; the ratio is 39/3.8 ≈ 10.3. Multiplying $9.13 billion by this ratio gives roughly $94 billion saved over 4.5 years.
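Re-running the extrapolation arithmetic from the figures above (the rounding is mine):

```python
# National/QUEST discharge ratios and QUEST results, as given in the notes.
deaths_ratio, ebc_ratio = 10.4, 13.5
quest_deaths, quest_ebc = 91_840, 80_128
quest_savings_b = 9.13  # $ billions

print(round(quest_deaths * deaths_ratio))    # 955136  (~950,000 deaths avoided)
print(round(quest_ebc * ebc_ratio))          # 1081728 (over 1 million patients)
print(round(39 / 3.8 * quest_savings_b, 1))  # 93.7    ($ billions saved)
```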
There has been growing interest in the use of performance improvement programs, like QUEST, to rapidly achieve positive change. However, evidence on the effectiveness of these types of collaboratives has been mixed due to their highly variable nature. So, it is essential to begin to understand if and which types of collaborative methodologies are effective.
So, we took a look at it, comparing the inpatient mortality rates of our QUEST members to a set of Premier hospitals not participating in the program.
Results from this research were published online ahead of print in the Journal of Patient Safety.
We were able to demonstrate what we call a “QUEST effect” via several multivariate analyses, meaning we set out to isolate as many variables as possible to account for the fact that this study was not randomized since it is based off of a group of volunteer hospitals. We took into account, for instance, hospital characteristics, severity of patient illness and background, and secular trends to see if we could find evidence that QUEST hospitals perform better than non-QUEST hospitals.
The different models used in the study (which adjust for standard, fixed, and random effects) determined that QUEST had an impact on hospital mortality rates, with participating hospitals performing as much as 10% better than non-QUEST hospitals.
In addition, the non-QUEST hospitals studied were comparable Premier hospitals with similar characteristics that had access to the same quality and safety tools as QUEST participants.
In other words, we believe the improvements we found can be attributed to the focused interventions and collaborative improvement framework we provide through QUEST.
We believe these findings provide further evidence that transparent peer-to-peer collaboration and data sharing, coupled with a holistic framework to facilitate change, is impacting hospital outcomes, and specifically reducing mortality.
QUEST informed us that better, broader measures of harm were needed. We found them.
We drew from analytic experience with coded hospital discharge abstract summaries, including the present-on-admission flag, to devise an algorithm that identifies a broad set of harm indicators. We call these the Premier Identified Complications (PICs).
They include 138 measures of patient harm (inclusive of the CMS hospital acquired conditions).
They are much broader and occur commonly enough to allow us to measure them adequately and identify opportunity areas for improvement.
As can be seen in the graphic, of the approximately 11 million patients used for this analysis, about 16% had at least one Premier Identified Complication, whereas using only the CMS-defined HACs we were able to flag less than 1% of patients.
With a broader set of complication/harm measures we feel like we will be able to adequately identify signals of variation from what would be expected. This will allow us to better measure harm and point our collaborative members toward real opportunities for improvement.
PIC impact on Cost of Care:
Lastly, using that sample of 500,000 patients in our database, we evaluated the marginal effect of PICs on cost of care. We found that, in sum, there were more than $471 million in excess costs attributable to the newly identified complications. As this graph illustrates, at least $28 million of that was associated with infections alone (there was even more not shown in this graph). Here again we see the major costs associated with harm occurring in the hospital setting.
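A minimal sketch of the attribution arithmetic behind a figure like this: multiply each complication’s estimated marginal cost by its case count, then sum. The categories and dollar figures below are invented for illustration and are not Premier results:

```python
# Hypothetical marginal cost per case ($) and case counts by complication.
marginal_cost = {"sepsis": 18_000, "c_diff": 11_000, "vte": 9_500}
cases = {"sepsis": 1_200, "c_diff": 800, "vte": 600}

# Excess cost attributable to each complication, then the total.
excess = {pic: marginal_cost[pic] * cases[pic] for pic in marginal_cost}
print(excess["sepsis"])      # 21600000
print(sum(excess.values()))  # 36100000 total excess cost in this toy example
```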
QUEST sprints/collaboratives related to this graph:
Sepsis-Bacteremia: Sepsis Collaborative, Early ID of Sepsis Sprint, Inpatient Surgical Mortality Collaborative
Embolism / Thrombus: SCIP VTE 2 Sprint (offered 2x)
C. Diff Enteritis: C. Diff Sprint
Acute Myocardial Infarction: Readmissions Collaborative
There will be sprints in PFP this year in the following compatible topics:
CAUTI (catheter-associated urinary tract infection), CLABSI (central line-associated bloodstream infection), and VAP (ventilator-associated pneumonia) – all can lead to sepsis/bacteremia
SSI (surgical site infection)
VTE (venous thromboembolism)
Of course we all share the big-picture goal of creating healthier patients and healthier communities.
So Premier is finding some innovative ways of measuring population health, as I’ll show you on the next slide.
In 2012 we published in the Health Affairs Blog (need to put the citation on slide) a framework for advanced measurement that was the result of an ASD-sponsored meeting in Washington, DC that included a broad group of stakeholders, including NQF, AHRQ, IHI, Dartmouth, and others.
We began with the premise that our Aim was to measure Value. We defined that at the time as Outcomes + Experience over expenditures. Later Gene Nelson from Dartmouth, who was quite instrumental in putting this framework together, commented that the math should really be Outcomes * Experience / Expenditure, so that all three parameters are on equal footing, which is what we intended.
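Written out (the symbols are mine: O = outcomes, E = experience, C = expenditures), the original and revised formulations are:

```latex
V_{\text{original}} = \frac{O + E}{C}
\qquad\longrightarrow\qquad
V_{\text{revised}} = \frac{O \times E}{C}
```

The multiplicative form puts all three parameters on equal footing, as the slide notes intend.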
One of the things we discovered, which is not on this slide, is that the high-level “big dot” measures you need for comparing value are not the same as the operational measures one needs to track and trend on a daily basis to make sure the big dots are achieved. This is an important point, because we see in healthcare today a trend to, in a sense, micro-manage the provider by measuring processes and sub-processes. We don’t need to do this if we have a clear idea of what we are trying to achieve and we are willing to hold people accountable for results.
Being overly prescriptive in our measurement stifles innovation. Let providers figure out how to get there – let’s just agree on the “big dots” we are trying to achieve.
Another thing we discovered was that outcomes and experiences most relevant to the patient and to the purchaser were things that only the patient could report. Thus you see a lot of patient reported outcomes on this slide. And we recognized that there might be specific sub domains one would want to look at, in addition to the overall measures. When we looked at Expenditures – we defined these broadly as the total cost to the purchaser + the cost to the consumer. We also felt that there needed to be explicit measures of overuse because generating waste is the opposite of creating value.
Finally, something that may not be obvious from the chart: just as most businesses do, we rely on a set of “leading” measures and “lagging” measures. A leading measure may be someone’s health risk, whereas the lagging measure may be the number of healthy days. In healthcare we tend to talk about “process measures” and “outcome measures” as if the two were unrelated. But aren’t some of these “process measures” really “leading measures”? If we know someone has gotten all the evidence-based AMI care that is appropriate, for example, doesn’t that serve, in a sense, as a “leading measure” of AMI mortality? Aren’t both really important?
Three pay for performance federal quality programs – inpatient quality reporting (IQR) is pay for report only
Although these programs appear completely separate, there are direct and indirect measure crossovers:
All harm measures in Inpatient VBP overlap with measures in HAC Reduction
Readmissions and complications (harm measures) contribute to Medicare spending per beneficiary performance
Updated September 2014 – Reflects final IPPS FY 2015 rule policies. Other adjustments for FY 2014 (Oct 2013) are Admission and Medical Review Criteria Reduction (only included for FY 2014 payments, not in perpetuity)
Sequestration was effectively 1% cut to payments in FY 2013 (Oct 2012) because it was only for 6 months of the fiscal year
DCA will be 0.8% in FY 2014, 1.6% in FY 2015, 2.4% in FY 2016, and 3.6% in FY 2017.
Prepared by Marla Kugel, February 2015
Based on IPPS final rule FY 2015 data, quality data released December 2014. N = 3376 hospitals, Maryland and Puerto Rico hospitals excluded from analysis (N = 100). Some hospitals with no penalty didn’t qualify for quality programs due to not meeting data minimum requirements (i.e. small hospitals)
Slide updated by Marla Kugel January 2015
Shows final weight changes for FY 2017
Prepared by Marla Kugel, February 2015
Biggest take away: “VBP program changed measures and domains considerably between FY 2013 and FY 2015. This as well as hospital performance contributed to shifts in winners/losers over time. By FY 2015 there are more extremes in winners and losers. Relaxing the minimum domains from 3 to 2 allowed for more hospitals to be included in the program (more small hospitals in that had previously dropped out). The percent win/loss looks inflated, but likely this is because of increase of small hospitals now included in the program.”
Ranges on X-axis should be interpreted as, “Values between 0.991 and 0.992, including 0.991”. 519 hospitals were exempt from VBP in FY 2013, 778 were exempt in FY 2014, and only 388 in FY 2015. 100 of these are MD or PR hospitals. The rest were exempt because they didn’t meet the minimum data requirements. Percentages shown instead of counts to control for differences in hospital counts between VBP program in FY 2013, FY 2014, and FY 2015.
Slide updated February 2015 by Marla Kugel
Descriptive only – not necessarily statistically significant difference
Only includes hospitals eligible for the VBP program based on minimum data requirements, IPPS hospital, not in MD or Puerto Rico
Some types of hospitals are consistent winners or losers under VBP, but some switch as the program changes
Rural hospitals lost under FY 2014 program but won under FY 2015 program
Urban, Teaching, and DSH hospitals consistently lose
DSH status did not determine win/loss under FY 2015 program
Determined by FY 2013, FY 2014 and FY 2015 average payment adjustment factors (neutral defined as between 1.000 and 0.9997)
Slide updated February 2015 by Marla Kugel
Descriptive only – not necessarily statistically significant difference
Only includes hospitals eligible for the VBP program based on minimum data requirements, IPPS hospital, not in MD or Puerto Rico
Small hospitals and rural hospitals win under FY 2015 VBP
Medicare spending per beneficiary may push rural hospitals to win – efficiency or just lack of post-acute care resources?
Determined by FY 2013, FY 2014, and FY 2015 average payment adjustment factors (neutral defined as between 1.000 and 0.9997)
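The winner/neutral/loser classification used in these slides can be sketched as follows (the function name is mine; the neutral band follows the notes’ definition):

```python
def vbp_status(adjustment_factor):
    """Classify a hospital's VBP payment adjustment factor.

    Factors above 1.000 raise payments (winner); per the notes,
    factors between 0.9997 and 1.000 are treated as neutral.
    """
    if adjustment_factor > 1.000:
        return "winner"
    if adjustment_factor >= 0.9997:
        return "neutral"
    return "loser"

print(vbp_status(1.0025))  # winner
print(vbp_status(0.9998))  # neutral
print(vbp_status(0.9950))  # loser
```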
(g) Regulations; interagency consultations; definitions, safeguards, and procedures, including procedures and criteria for issuance and scope of orders
Except as provided in subsection (h) of this section, the Secretary, after consultation with the Administrator of Veterans' Affairs and the heads of other Federal departments and agencies substantially affected thereby, shall prescribe regulations to carry out the purposes of this section. These regulations may contain such definitions, and may provide for such safeguards and procedures, including procedures and criteria for the issuance and scope of orders under subsection (b)(2)(C) of this section, as in the judgment of the Secretary are necessary or proper to effectuate the purposes of this section, to prevent circumvention or evasion thereof, or to facilitate compliance therewith.
(Subsection (h) was superseded by section 111(c)(3) of Pub. L. 94-581. The responsibility of the Administrator of Veterans’ Affairs to write regulations to provide for confidentiality of drug abuse patient records under Title 38 was moved from 21 U.S.C. 1175 to 38 U.S.C. 4134.)
Effect of PICs on Mortality:
We looked at a sample of 500,000 patients in our database, and for each PIC we evaluated the marginal effect it had on mortality. We found that, in sum, there were more than 2,500 excess deaths attributable to the newly identified complications. As this graph illustrates, nearly 200 of those 2,500 were associated with sepsis alone. This study confirmed our earlier assessments that sepsis should be a focus area for reducing mortality.
[**Note: 200 # listed above is sum of Sepsis/Bacteremia (26)and Septic Shock (173)]**
There were QUEST sprints/collaboratives related to this graph:
Septic Shock / Sepsis-Bacteremia: Sepsis Collaborative, Early ID of Sepsis Sprint, Inpatient Surgical Mortality Collaborative
Acute Myocardial Infarction: Readmissions Collaborative
There will also be sprints in PFP this year in the following compatible topics-
CAUTI (catheter-associated urinary tract infection), CLABSI (central line-associated bloodstream infection), and VAP (ventilator-associated pneumonia) – all can lead to sepsis/bacteremia
SSI (surgical site infection)
VTE (venous thromboembolism)
PIC effect on LOS:
Again, we looked at a sample of 500,000 patients in our database and evaluated each PIC’s contribution to LOS. We found that, in sum, there were more than 199,000 excess days of stay attributable to the newly identified complications. As this graph illustrates, nearly 20,000 of the 199,000 were associated with infection alone.
[**Note: 20,000 number listed above includes Sepsis/Bacteremia, C.Diff, post-operative or perioperative infection, and cellulitis/ skin infection**]
There were QUEST sprints/collaboratives related to this graph as well:
Sepsis-Bacteremia: Sepsis Collaborative, Early ID of Sepsis Sprint, Inpatient Surgical Mortality Collaborative
C. Diff Enteritis: C. Diff Sprint
Embolism / Thrombus: SCIP VTE 2 Sprint (offered 2x)
There will be sprints in PFP this year in the following compatible topics:
CAUTI (catheter-associated urinary tract infection), CLABSI (central line-associated bloodstream infection), and VAP (ventilator-associated pneumonia) – all can lead to sepsis/bacteremia
SSI (surgical site infection)
VTE (venous thromboembolism)
Slide created by Marla Kugel
Slide updated by Marla Kugel, IPPS FY 2015 final rule
Hospital IQR percentage stays 2.0% until FY 2015, when it drops to 0.725%, i.e., a reduction equal to one-fourth of the market basket update
Slide updated by Marla Kugel, January 2015
Performance periods for FY 2016 ended December 2014
Slide updated by Marla Kugel January 2015
Shows final weight changes for FY 2017. New domain structure – outcomes measures (maroon-colored sections) split between “Safety” domain and “Clinical Care” domain.
Slide updated February 2015 by Marla Kugel
All IPPS hospitals N = 3,476
100 hospitals exempt because they are located in MD or PR
288 hospitals exempt because they didn’t meet minimum measure data requirements
N = 3,088 eligible hospitals in VBP FY 2015 program
Biggest changes in FY 2015 VBP
Introduction of Medicare spending per beneficiary domain (20% of score)
Relaxation of required domains for total score (down from 3 to 2)
Created by Marla Kugel on Oct 2, 2014 based on data released with IPPS FY 2015 final rule
MD and PR hospitals excluded – exempt from readmissions program