Don’t Let Incremental Overtime Plague Your Healthcare Organization!

Get to the Root Cause: Increase Productivity and Patient Care While Reducing Labor Costs

The Causes and Consequences of Incremental Overtime

Incremental overtime may be costing your healthcare organization thousands of dollars unnecessarily while eroding employee morale and productivity, so it is important to track overtime and understand its root causes. A Labor Productivity/Labor Management solution that delivers key analytics can pinpoint those root causes. Common causes include the following (a brief sketch of how clock data can surface them appears after the list):

  • Early clock-in/late clock-out
  • Inability to complete required tasks by end of shift
  • Shift transition conflicts (e.g., last-minute patient needs or a handoff not yet completed)
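
As an illustration only, here is a minimal sketch of how clock-in/clock-out punches might be compared against scheduled shift times to flag early in, late out, and the resulting incremental overtime. The record layout, field names, and the seven-minute grace window are assumptions for the example, not features of any particular product.

```python
from datetime import datetime, timedelta

# Hypothetical punch records: scheduled vs. actual shift times
# (all field names and values are illustrative assumptions).
punches = [
    {"employee": "RN-101",
     "sched_in":  datetime(2015, 3, 2, 7, 0),  "sched_out":  datetime(2015, 3, 2, 19, 0),
     "actual_in": datetime(2015, 3, 2, 6, 48), "actual_out": datetime(2015, 3, 2, 19, 22)},
]

GRACE = timedelta(minutes=7)  # assumed rounding/grace window

for p in punches:
    early_in = max(p["sched_in"] - p["actual_in"], timedelta(0))
    late_out = max(p["actual_out"] - p["sched_out"], timedelta(0))
    # Incremental OT = unscheduled minutes outside the grace window
    incremental = (early_in if early_in > GRACE else timedelta(0)) + \
                  (late_out if late_out > GRACE else timedelta(0))
    print(p["employee"], "early in:", early_in, "late out:", late_out,
          "incremental OT:", incremental)
```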

The Solution and its Benefits

A Labor Productivity solution provides labor-hours data from which ratios can be derived based on each organization's definition of incremental overtime (one possible ratio is sketched after the list below). This leads to a clear understanding of the root causes of incremental overtime so that corrective action can be taken, including:

  • Ensure management visibility at shift changes
  • Coach employees and hold staff meetings to strengthen time-management skills
  • Provide daily reports and analysis to managers and establish a protocol for handling incremental overtime risks
  • Designate a synchronized clock for employees to rely on (e.g., the department wall clock)
  • Educate employees on overtime authorizations and cite repeated behavior in performance evaluations
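
To make the ratio idea above concrete, here is a minimal sketch of one possible definition: incremental overtime hours divided by total worked hours, rolled up by department. The department names, figures, and the definition itself are assumptions; each organization would substitute its own.

```python
# Hypothetical departmental labor-hours data (numbers are illustrative only).
departments = {
    "Med/Surg":  {"worked_hours": 4200.0, "incremental_ot_hours": 63.0},
    "Emergency": {"worked_hours": 3800.0, "incremental_ot_hours": 114.0},
}

for name, d in departments.items():
    ratio = d["incremental_ot_hours"] / d["worked_hours"]
    print(f"{name}: incremental OT ratio = {ratio:.2%}")
```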

By addressing the causes of incremental overtime using data provided by a Labor Productivity solution, providers can deliver great patient care while decreasing labor costs by thousands of dollars and increasing productivity.


Standardization of Comparative Analytics in Healthcare

A Comprehensive Solution for Value-Based Care

As healthcare providers consolidate and acquire smaller health systems, standardization becomes paramount: it enables comparative reporting across organizations and sites, which in turn drives cultural change, lower costs, and better, more cost-effective care. Provider systems need to operate independently while following a standardized enterprise process so they can make sound decisions about costs, health outcomes, and patient satisfaction. Without standardization, analyzing metrics takes considerable work and time, and comparing like sites becomes unreliable because ostensibly identical metrics can be calculated very differently at the underlying base-member level.

The standardized solution is straightforward: an enterprise-based model that shares data across systems and applications to support comparative analytics with data integrity:

[Image: MH Image 1]

Such a solution offers the ability to compare productivity indices across departments against national standards, using a standard calculation approach with federated master data across all toolsets. The resulting comparative analytics drive efficiencies and value-based care:

[Image: MH Image 2]
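
As a sketch of what a standard calculation approach can look like in practice, the snippet below applies one common productivity-index definition (target hours earned for the volume produced, divided by actual worked hours) identically at every site so the results are directly comparable. The metric definition, site data, and target are assumptions for illustration.

```python
# Illustrative site-level data; all names and values are assumptions.
sites = [
    {"site": "Hospital A", "dept": "Lab", "worked_hours": 1250.0, "units_of_service": 9800},
    {"site": "Hospital B", "dept": "Lab", "worked_hours": 1410.0, "units_of_service": 9900},
]

# Assumed benchmark: target worked hours per unit of service
# (e.g. from a national standard); the value is made up for the example.
TARGET_HOURS_PER_UNIT = 0.13

def productivity_index(worked_hours: float, units_of_service: float) -> float:
    """One standardized calculation applied identically at every site:
    target hours earned for the volume produced / actual worked hours."""
    return (units_of_service * TARGET_HOURS_PER_UNIT) / worked_hours

for s in sites:
    idx = productivity_index(s["worked_hours"], s["units_of_service"])
    print(f'{s["site"]} {s["dept"]}: productivity index = {idx:.2f}')
```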

When FDM Isn’t an Option…Using Essbase to Map Data

There are times when you do not have the option of using FDM to handle large data-mapping exercises prior to loading data into Essbase. There are many techniques for handling large data mappings in Essbase; I have used the technique outlined here several times for large mappings, and it continues to exceed my expectations from a performance and repeatability perspective.

Typically, without FDM or some other ETL process, we would simply use an Essbase load rule to do a “mapping” or a replace. However, there are those times when you need to do a mapping based on multiple members. For example, if account = x and cost center = y then change account to z.
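
Expressed outside of Essbase, that multi-member rule is simply a keyed lookup. Here is a minimal sketch using the account = x, cost center = y, account = z example above; the member names are placeholders.

```python
# Map (account, cost_center) combinations to a corrected account.
# All member names here are hypothetical.
ACCOUNT_MAP = {
    ("x", "y"): "z",
}

def remap_account(account: str, cost_center: str) -> str:
    # Fall back to the original account when no mapping rule applies.
    return ACCOUNT_MAP.get((account, cost_center), account)

print(remap_account("x", "y"))  # -> "z"
print(remap_account("x", "q"))  # -> "x" (unmapped)
```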

Let's start with the dimensionality in play for this example: Time, Scenario, Type, NOU, Account, Total Hospital, and Charge Code.

Dimension        Type     Members in Dimension   Members Stored
Time             Dense    395                    380
Scenario         Dense    13                     6
Type             Sparse   4                      4
NOU              Sparse   25                     18
Account          Sparse   888                    454
Total Hospital   Sparse   5523                   2103
Charge Code      Sparse   39385                  39385

You then need to identify the logic of where the mapping takes place. I want to keep the mapping data segregated from all other data, so I load it to a Mapping scenario (Act_Map). I load a value of '1' to the appropriate intersection, always at level 0. Since the mapping applies to all Period detail, I load it to the BegBalance member. The client then updates this mapping file on a go-forward basis as new mapping combinations arise.

Here is a sample of the mapping file that gets loaded into Essbase:

NOU   STATUS   Revised DEPT   ACCT #   CDM       Data
SLJ   IP       2CC            2        0012013   1
SLJ   IP       2CC            2        0012021   1
SLJ   IP       2CC            2        0012062   1

Here is what it looks like when you do a retrieve: for 4410CC->2600427->IP->67->SVM there is a value of 1, and for 4410CC->2600435->IP->67->SVM as well.

[Screenshot: Essbase retrieve showing the mapping flags of 1]

The next step in the process is to load the actual data that ultimately needs to be mapped. I load this data based on the detail and dimensionality I have, again at level 0. In my experience, the data is missing a level of detail (the GL account for project-based planning, the Unit/Stat for charge master detail, etc.), so it gets loaded to a specific "No_Dimension" member via a load rule, or to a generic member. Again, I load this data to a separate scenario (Act_Load).
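
For illustration, here is a rough sketch of that step outside of a load rule: records arriving without the Account detail are tagged with a placeholder member and directed to the Act_Load scenario. The record layout and sample value are assumptions; the Stat_Load and Act_Load names follow the example above.

```python
# Incoming records lack the Account detail, so each one is tagged with a
# placeholder Account member (Stat_Load) and the Act_Load scenario.
# Field names and the sample value are illustrative assumptions.
incoming = [
    {"nou": "SLJ", "type": "IP", "dept": "2CC", "cdm": "0012013", "value": 125.0},
]

act_load_records = [
    {**rec, "scenario": "Act_Load", "account": "Stat_Load"}
    for rec in incoming
]

print(act_load_records[0])
```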

In the example below, you will see that I am loading the data belonging to the Account detail (67 and 68 in the screenshot above) to the Stat_Load member, because the data comes across missing that account detail.

[Screenshot: Essbase retrieve showing data loaded to the Stat_Load member in the Act_Load scenario]

The final step is to calculate the Actuals scenario from the two scenarios above. You will see that after we run the calculation, Current Yr Actuals is calculated correctly, with the data residing where it should.
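
To make the logic of that final calculation concrete, here is a rough Python sketch of the same idea rather than the actual Essbase calc script: the 1-flags loaded to Act_Map identify the real Account for each combination, and each Act_Load value sitting on the placeholder member is moved to that Account in the Actuals scenario. Member names follow the examples above; everything else is an assumption.

```python
# Act_Map: 1-flags at level 0 that identify the real Account for each
# (NOU, Type, Dept, CDM) combination (BegBalance/period detail omitted).
act_map = {
    ("SLJ", "IP", "2CC", "0012013"): "2",   # map this combination to Account "2"
}

# Act_Load: actual data sitting on the placeholder Account member.
act_load = [
    {"nou": "SLJ", "type": "IP", "dept": "2CC", "cdm": "0012013",
     "period": "Jan", "account": "Stat_Load", "value": 125.0},
]

# "Calculate the Actuals scenario": move each value onto its mapped Account.
actuals = []
for rec in act_load:
    key = (rec["nou"], rec["type"], rec["dept"], rec["cdm"])
    mapped_account = act_map.get(key)
    if mapped_account is not None:
        actuals.append({**rec, "scenario": "Current Yr Actuals",
                        "account": mapped_account})

print(actuals[0]["account"])  # -> "2"
```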

[Screenshot: Essbase retrieve showing Current Yr Actuals after the calculation]

Keeping all the data segregated in different scenarios allows you to easily clear data should anything be wrong with one of the loads, thereby keeping the other datasets intact. This process runs on the entire year in less than 2 minutes and not only performs the calculation but also does an aggregation for the Current Yr Actuals.