Don’t Let Incremental Overtime Plague Your Healthcare Organization!

Get to the Root Cause: Increase Productivity and Patient Care While Reducing Labor Costs

The Causes and Consequences of Incremental Overtime

Incremental overtime may be costing your healthcare organization thousands of dollars unnecessarily while eroding employee morale and productivity, so it’s important to understand its root causes by tracking overtime at a granular level. A Labor Productivity/Labor Management solution that delivers key analytics provides specific answers about the root causes of incremental overtime.  Common causes include:

  • Early clock-in/late clock-out
  • Inability to complete required tasks by end of shift
  • Shift transition conflicts (e.g., attending to last-minute patient needs or an incomplete handoff)

The Solution and its Benefits

A Labor Productivity solution provides labor-hour data from which ratios can be derived based on each organization’s definition of incremental overtime. This yields a clear understanding of the root causes of incremental overtime so that corrective action can be taken (a simple sketch of one such ratio calculation follows the list below), including:

  • Ensure management visibility at shift changes
  • Coach employees and hold staff meetings to build time management skills
  • Provide daily reports and analysis to managers to establish a protocol for handling incremental overtime risks
  • Designate a synchronized clock for employees to rely on (e.g., the department wall clock)
  • Educate employees on overtime authorizations and cite repeated violations in performance evaluations
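
To make the ratio idea concrete, here is a minimal sketch of one possible incremental-overtime calculation from raw punch data. The record layout and the seven-minute grace threshold are hypothetical; each organization defines its own rule.

```python
from datetime import datetime

FMT = "%H:%M"
GRACE_MIN = 7  # illustrative grace period; policies vary by organization

# Hypothetical punch records: (scheduled_start, scheduled_end, clock_in, clock_out)
punches = [
    ("07:00", "15:30", "06:48", "15:41"),  # early clock-in and late clock-out
    ("07:00", "15:30", "07:00", "15:30"),  # clean shift
]

def minutes(a, b):
    """Minutes elapsed from time a to time b."""
    return (datetime.strptime(b, FMT) - datetime.strptime(a, FMT)).total_seconds() / 60

def incremental_ot(sched_start, sched_end, clock_in, clock_out):
    """Minutes worked outside the scheduled shift, beyond the grace period."""
    early = minutes(clock_in, sched_start)  # early clock-in
    late = minutes(sched_end, clock_out)    # late clock-out
    return sum(m for m in (early, late) if m > GRACE_MIN)

ot = sum(incremental_ot(*p) for p in punches)
scheduled = sum(minutes(s, e) for s, e, _, _ in punches)
print(f"Incremental OT ratio: {ot / scheduled:.2%}")  # -> 2.25%
```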

By addressing the causes of incremental overtime using data provided by a Labor Productivity solution, providers can deliver great patient care while decreasing labor costs by thousands of dollars and increasing productivity.

Standardization of Comparative Analytics in Healthcare

A Comprehensive Solution for Value-Based Care

As healthcare providers consolidate and acquire smaller health systems, standardization becomes paramount: it enables comparative reporting across organizations and sites, which in turn drives cultural change, lower costs, and better, more cost-effective care. Provider systems need to operate independently while using a standardized enterprise process to make sound decisions about costs, health outcomes, and patient satisfaction.  Without standardization, analyzing metrics can require considerable work and time, and comparing otherwise similar sites becomes unreliable because nominally identical metrics can be calculated in completely different ways at the underlying base-member level.

A standardized solution is simple: an enterprise-based model that allows data to be shared across systems and applications to facilitate comparative analytics with data integrity.

Such a solution offers the ability to compare productivity indices across departments against national standards, using a standard calculation approach with federated master data across all toolsets. The result is comparative analytics that drive efficiencies and value-based care.
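
As a minimal illustration of why the standard calculation approach matters, the sketch below computes one common productivity index (worked hours per unit of service) identically for every site and compares it to a benchmark. The site data and benchmark value are hypothetical.

```python
# Hypothetical per-site data for the same department and reporting period.
sites = {
    "Site A": {"worked_hours": 12_400, "units_of_service": 1_050},
    "Site B": {"worked_hours": 9_800, "units_of_service": 820},
}

BENCHMARK_HPU = 11.5  # illustrative national standard, hours per unit of service

for name, s in sites.items():
    hpu = s["worked_hours"] / s["units_of_service"]  # one formula for every site
    variance = (hpu - BENCHMARK_HPU) / BENCHMARK_HPU
    print(f"{name}: {hpu:.2f} hours/unit ({variance:+.1%} vs. benchmark)")
```

Because every site shares the same base calculation and master data, the variance figures are directly comparable; without that standardization, each site’s “hours per unit” could mean something different.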

Upgrading Oracle Business Intelligence to 12c

Ranzal was recently invited to participate in a number of chalk talks for the Healthcare Industry User Group (HIUG) in San Antonio, TX. One of these chalk talks covered how an organization should prepare for and execute an upgrade to Oracle Business Intelligence (OBI) 12c.  Since the technical steps are already covered in numerous blog posts as well as Oracle documentation, our conversation focused on a strategic approach to the upgrade.  The discussion essentially came down to four topics:

  1. An overview of the simplicity of the upgrade from 11g to 12c
  2. Organizational mindset while preparing for the upgrade
  3. The new technical infrastructure and the implications for the organization
  4. How to introduce users to the new features and how this might impact governance

We wanted to formally document this lively and fast-paced discussion to help other organizations as well as the HIUG chalk talk participants who were furiously scribbling notes and may not have had the opportunity to take it all in.

We started our discussion with how relatively easy Oracle has made this upgrade. For those of you who experienced the difficult and buggy process of upgrading from OBI 10g to 11g and may be dreading the upgrade to 12c, we have some advice:  relax.  First, the upgrade leaves your 11g environment intact: the metadata and configuration are “lifted and shifted” into a fresh 12c installation.  Speaking of “lift and shift,” 12c comes with a tool that extracts the metadata and much of the configuration from 11g into one tidy package that is then pushed into 12c.  There is a small amount of manual configuration to perform afterward, but this will only slow down customers with highly customized environments.  Once the lift and shift has occurred, an Oracle validation tool checks that your Dashboards and Analyses are working and alerts you to potential issues.  While there are bugs in 12c (what new or old software does not have bugs?), we have not found any major issues that would bring an organization to a full stop.

So how does an organization prepare for this upgrade? First, we encourage clients to view this upgrade as an opportunity to “clean house,” especially customers that have been building OBI assets for years.  Used properly, analytic tools lend themselves to experimentation and evolution.  Experimentation can leave behind partially formed or broken logical objects such as presentation columns and facts, Analyses, or entire Dashboards.  The developers of these objects have the best intentions of coming back to fix them, finish them, or at the very least delete them.  Developers are busy people, though, and eventually the existence of these objects is forgotten.  Evolution of both the tool and the organization’s analytic focus leaves other objects stale or obsolete.  Take this opportunity to clean house in your OBI environment.  Run the consistency check in the BI Administration tool and resolve those lingering issues.  Evaluate your usage statistics and determine whether unused Analyses and Dashboards are still needed.  Fix or discard broken Analyses and Dashboards.  As with any technology tool, have a process in place to document, communicate, archive, back up, and restore, if necessary.
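
As one way to approach the usage-statistics review, the sketch below queries OBI’s usage tracking data for Dashboards untouched in the past year. It assumes usage tracking is enabled and writing to the standard S_NQ_ACCT table; the connection details are placeholders, and column specifics vary by version.

```python
import cx_Oracle  # or any DB-API driver for your repository database

# Placeholder credentials; point this at the schema holding usage tracking.
conn = cx_Oracle.connect("usage_user/password@dbhost:1521/ORCLPDB")

# Dashboards with no recorded access in the last 365 days are candidates
# for archiving or removal during the pre-upgrade cleanup.
sql = """
    SELECT saw_dashboard,
           MAX(start_ts) AS last_accessed,
           COUNT(*)      AS total_hits
    FROM   s_nq_acct
    GROUP  BY saw_dashboard
    HAVING MAX(start_ts) < SYSDATE - 365
    ORDER  BY last_accessed
"""

for dashboard, last_accessed, hits in conn.cursor().execute(sql):
    print(f"{dashboard}: last accessed {last_accessed:%Y-%m-%d} ({hits} hits)")
```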

The most substantial change to come with OBI 12c is the underlying technical architecture. Fusion Middleware and Enterprise Manager have a new look and feel.  Additionally, some actions are no longer performed within these tools.  For instance, deploying the RPD is no longer done through Enterprise Manager (in fact, the RPD is now packaged in the BAR file).  The directory structure has changed significantly, which is a bit unfortunate for those of us who like to go directly to the log files rather than relying on Enterprise Manager and Session Manager.  Finally, many of the RPD (that is, BAR) functions, such as deploying and copying, are done through a new command line interface.  So, while most of your users will be able to log into 12c and quickly adapt to the new look and feel, your OBI support team will have some learning to do.  Again, the goal of this post is to provide an overarching upgrade strategy, so we will not delve any deeper into these changes.  There is plenty of quality content online regarding these changes, and you should always review Oracle documentation before performing an implementation or upgrade.

End users will initially feel that, with the exception of a new look and feel, nothing much has changed. However, with new graphing options, statistical analytics capabilities based on R, and Visual Analyzer, users have an opportunity to expand their analytical capacity.  The organizational challenge is to go back to the change management playbook that was used when OBI was first introduced, re-evaluate it, and update it so that end users get the most out of this upgrade.  Evaluate how to train users on where the new graphs and charts are appropriate.  Determine (or re-determine) who your power users are and which of them need or want the new statistical capabilities.  Review existing Dashboards and Analyses and make appropriate upgrades.

Potentially the biggest challenge will be evaluating and understanding the capabilities of the new Visual Analyzer tool which, among other features, allows you to perform data mashups. This new tool will require your organization to define use cases and user groups, as well as some additional training.  While letting users upload data into the OBI system and combine it with existing data models opens up entirely new possibilities for insight, it also creates a governance challenge.  How do you maintain the organizational “one version of the truth” while encouraging and properly promoting new analytic insight?  How will security be handled, and how will users be trained to adhere to this model?  How will you handle the archiving and deletion of the potentially huge number of Excel spreadsheets uploaded onto the OBI server?  While this all sounds intimidating, keep in mind that your organization has already been through these exercises once during the original OBI implementation.  Adapt your existing knowledge.

Thus far, Ranzal has had a positive experience with the 12c upgrade. The new underlying technical architecture has delivered some real performance gains, especially when leveraging EPM as a data source.  The upgrade process is well thought out and simple, especially if you go through a system checkup and resolve issues beforehand.  While your technical BI support team will have some homework and learning to do to continue to fill that role, your users will be able to jump right into using 12c.  Despite this ease of user adoption, be sure to have a change management plan in place and take advantage of the new features and capabilities of 12c.

If you are thinking of doing an upgrade and have questions, feel free to reach out to us. Also, keep an eye out for an upcoming webcast on upgrading to 12c with an interactive question and answer session.

Data Discovery In Healthcare — 1st Installment

Interested in understanding how cutting-edge healthcare providers are turning to data discovery solutions to unlock the insights in their medical records?  Check out this real-world demonstration of what a recent Ranzal customer is doing to gain a 360-degree view of their clinical outcomes by leveraging all of their EMR data, both the structured and unstructured information.

Take a look for yourself…

Tag 100 Times Faster — Introducing Branchbird’s Fast Text Tagger

Text Tagging is the process of using a list of keywords to search and annotate unstructured data. This capability is frequently required by Ranzal customers, most notably in the healthcare industry.

Oracle’s Endeca Data Integrator provides three different ways to text tag your data “out of the box”:

  • The first is the “Text Tagger – Whitelist” component which is fed a list of keywords and searches your text for exact matches.
  • The second is the “Text Tagger – Regex” component which works similarly but allows for the use of regular expressions to expand the fuzzy matching capabilities when searching the text.
  • The third is the “Text Enrichment” component (OEM’ed from Lexalytics), supplied with a model (keyword list) to take advantage of the component’s model-based entity extraction.

Ranzal began working on a custom text tagging component due to challenges with the aforementioned components at scale. All of the above text taggers are built to handle relatively small inputs — both the size of the supplied dictionary and the number (and size) of documents.  The throughput comparison below shows the difference:

                         1,000 EMRs      10,000 EMRs     100,000 EMRs    1,000,000 EMRs
Fast Text Tagger (FTT)   250 docs/sec    1,428 docs/sec  4,347 docs/sec  6,172 docs/sec
Text Enrichment (TE)     6.5 docs/sec    5 docs/sec      N/A             N/A
TE (4 threads)           17.5 docs/sec   15 docs/sec     15 docs/sec     N/A

In one of our most common use cases, customers analyzing electronic medical records with Endeca need to enrich large amounts of free text (typically physician notes) using a medical ontology such as SNOMED-CT or MeSH. Each of these ontologies has a large number of medical “concepts” and their associated synonyms. For example, the US version of SNOMED-CT contains nearly 150,000 concepts. Unfortunately, the “out of the box” text tagger components do not perform well beyond a couple hundred keywords. To realize slightly better throughput during tagging, Endeca developers have traditionally leveraged the third component listed above — the Lexalytics-based “Text Enrichment” component — which offers better performance than the other two.
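
To see why the stock components struggle at this scale, consider the naive whitelist approach: one scan of the document per keyword. The sketch below is purely illustrative, not the actual component internals:

```python
import re

def naive_tag(text, keywords):
    """One regex scan per keyword: cost grows linearly with dictionary size,
    so ~150,000 SNOMED-CT concepts means ~150,000 passes over every document."""
    return [
        kw for kw in keywords
        if re.search(r"\b" + re.escape(kw) + r"\b", text, re.IGNORECASE)
    ]
```

With a dictionary the size of SNOMED-CT, this per-keyword cost dominates everything else, which is what motivates a single-pass design.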

However, after extensive use of the “Text Enrichment” component, it became clear that not only was the performance still unacceptable at high scale, but the component’s recall was also inadequate, especially with Electronic Medical Records (EMRs). The Text Enrichment component is NLP-based and relies on accurately parsing sentence structure and word boundaries to tokenize the document before entity extraction begins. EMRs typically have very challenging sentence structure due both to the ad hoc writing style of clinicians at point of entry and the observational metrics embedded in the record. Because of this, Text Enrichment of even small documents at high scale can be prohibitive for simple text tagging. A recent customer of ours, using very high-end enterprise equipment, was experiencing 24-hour processing times when tagging approximately six million EMRs with SNOMED-CT concepts via Text Enrichment.

To address both the performance and recall issues, Ranzal set out to build a simple text tagger component for Integrator that would be easy to set up and use. The Ranzal “Fast Text Tagger” was built using a high-performance dictionary matching algorithm that compiles the list of terms (and phrases) into a finite-state pattern matching machine, which is then used to process the documents. One of the biggest benefits of this family of search algorithms is that the document text only needs to be scanned once to find all possible matches from the supplied dictionary.
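
The canonical algorithm in this family is Aho-Corasick. The sketch below is a minimal, illustrative Python implementation of the approach, not Ranzal’s actual component: it compiles the keyword list into a trie with failure links, then reports every dictionary match in a single pass over the text.

```python
from collections import deque

def build_automaton(terms):
    """Compile dictionary terms into an Aho-Corasick automaton."""
    goto = [{}]    # goto[state][char] -> next state (trie edges)
    out = [set()]  # dictionary terms that end at each state
    fail = [0]     # failure links (longest proper suffix still in the trie)
    for term in terms:
        s = 0
        for ch in term:
            if ch not in goto[s]:
                goto.append({}); out.append(set()); fail.append(0)
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(term)
    # Breadth-first pass wires the failure links so matching never backtracks.
    queue = deque(goto[0].values())
    while queue:
        r = queue.popleft()
        for ch, u in goto[r].items():
            queue.append(u)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[u] = goto[f].get(ch, 0)
            out[u] |= out[fail[u]]  # inherit matches ending at the suffix state
    return goto, out, fail

def tag(text, goto, out, fail):
    """Single pass over the text; yields (start_index, term) for every match."""
    s = 0
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for term in out[s]:
            yield (i - len(term) + 1, term)

# Tiny demo with a hypothetical three-concept "ontology"
goto, out, fail = build_automaton(["diabetes", "diabetes mellitus", "edema"])
print(list(tag("pt w/ diabetes mellitus and edema", goto, out, fail)))
# -> [(6, 'diabetes'), (6, 'diabetes mellitus'), (28, 'edema')]
```

A production tagger would also normalize case and optionally require word boundaries, which is the restriction our component exposes as an option (see below).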

The Ranzal Fast Text Tagger is intended to replace the stock “Text Tagger – Whitelist” component and the use of the “Text Enrichment” component for whitelisting. Our text tagger performs straight text matching, with optional restrictions to match only on word boundaries.  If your use cases require fuzzier matching, you should continue to use the “Text Tagger – Regex” component at low scale and “Text Enrichment” at higher scales.

Performance Observations

Returning to the metrics shown above (duplicated here), you can see the remarkable performance of the Ranzal Fast Text Tagger (BB FTT) compared to “Text Enrichment,” even when Text Enrichment is configured to consume 4 threads. Furthermore, the throughput of the BB FTT tends to increase with the number of documents, leveling off near 1 million documents, whereas Text Enrichment stays relatively constant; the one-time cost of compiling the dictionary into the matching automaton is amortized over an ever larger corpus.

                    1,000 EMRs      10,000 EMRs     100,000 EMRs    1,000,000 EMRs
BB FTT (1 thread)   250 docs/sec    1,428 docs/sec  4,347 docs/sec  6,172 docs/sec
TE (1 thread)       6.5 docs/sec    5 docs/sec      N/A             N/A
TE (4 threads)      17.5 docs/sec   15 docs/sec     15 docs/sec     N/A

As a final performance note: for the previously mentioned customer whose graph ran for 24 hours just to perform text tagging, the same process completed on this test harness with the same data in just shy of 20 minutes. It took longer to read the data from disk than it took to stream it all through the Fast Text Tagger.  This implies that, in typical use cases, the Fast Text Tagger will not be the limiting component in your graph.  For those of you curious about the benchmarking methods used, please continue below.

Test Runs

We built a graph that could execute the different test configurations sequentially and then compile the results. Shown below are the four separate test runs, each summarized with the following metrics (a short sketch showing how these can be computed from tagger output follows the list):

  • Match AVG: The average number of concepts extracted per EMR over the corpus
  • Total Match: The total number of concepts extracted over the corpus
  • Misses: The number of non-empty EMRs where no concept was found
  • Exec Time: The total execution time of the test configuration
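
For concreteness, these metrics can be compiled from a tagger’s output roughly as follows (reusing the build_automaton/tag sketch from earlier; the records are hypothetical):

```python
records = ["pt w/ edema", "wnl", "hx of diabetes mellitus"]  # hypothetical EMRs
goto, out, fail = build_automaton(["diabetes", "diabetes mellitus", "edema"])

match_counts = [sum(1 for _ in tag(r, goto, out, fail)) for r in records]
total_match = sum(match_counts)                                         # Total Match
match_avg = total_match / len(records)                                  # Match AVG
misses = sum(1 for r, c in zip(records, match_counts) if r and c == 0)  # Misses
print(match_avg, total_match, misses)  # -> 1.0 3 1
```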

Note that Text Enrichment’s poor recall also drags down its average match count (Match AVG). If you exclude the (significant number of) misses, TE’s per-record match average comes much closer to that of our Fast Text Tagger.

Test 1: 1,000 EMRs

Test ID             Match AVG   Total Match   Misses   Exec Time
BB FTT (1 thread)   9           9,126         14       4 secs
TE (1 thread)       4           4,876         260      153 secs
TE (4 threads)      4           4,876         260      57 secs

Test 2: 10,000 EMRs

Test ID             Match AVG   Total Match   Misses   Exec Time
BB FTT (1 thread)   12          127,617       14       7 secs
TE (1 thread)       5           55,567        3,739    2,010 secs
TE (4 threads)      5           55,567        3,739    675 secs

Test 3: 100,000 EMRs

Test ID             Match AVG   Total Match   Misses   Exec Time
BB FTT (1 thread)   13          1,380,258     17       23 secs
TE (4 threads)      5           546,598       38,466   6,555 secs

Test 4: 1,000,000 EMRs

Test ID             Match AVG   Total Match   Misses   Exec Time
BB FTT (1 thread)   14          14,834,247    17       162 secs

Benchmarking Notes

Tests were conducted on OEID 3.1 using the US SNOMED-CT concept dictionary (148,000 concepts) against authentic Electronic Medical Records. Physical hardware: a PC with a 4-core i7 (hyperthreading enabled), 32 GB RAM, and SSD drives.

The “Text Tagger – Whitelist” component was discarded as unusable for this test setup. “Text Enrichment” with 1 thread was discarded after the 10,000-document run, and TE with 4 threads was discarded after the 100,000-document run.