Connecting the Value of IT: A Disciplined Solution for Service Costing and Chargeback

This post corresponds to the webinar “Connecting the Value of IT: A Disciplined Solution for Service Costing and Chargeback,” the last in our “Let Your Profitability Soar” webinar series. You can access the recording here.

 

Within an organization, technology is mission-critical to most business strategies, and IT costs represent a significant portion of back office spend.

Among their many responsibilities, the CFO and the CIO must make sure that:

  • Technology spending is aligned with business strategy
  • Business applications and end-user services are delivered efficiently and cost-effectively
  • Coherent project portfolios that grow and transform the business are created and nurtured

Within this new economy, a key ongoing goal of the CIO is to make sure that IT is aligned with business strategy.

Generally, this IT-to-Business Strategy alignment is achieved in two ways:

  1. Running the business: Providing a cost-effective level of internal services necessary for sustaining business activity.
  2. Building the business: Managing and delivering a portfolio of development projects that are prioritized and aligned with key business initiatives, aiming to improve efficiency and gain competitive advantage.

The Nature of the Problem

One challenging pattern we see time and again is the ongoing disconnect between the CIO and the CFO.

Some might say this disconnect is an inevitable result of the fact that technology is moving so fast and we don't always have the time to stop and assess its value. Understandably, it can be difficult for a CFO to get away from all the checks and balances just to get the financial books closed, let alone turn attention to the books that measure performance at greater depth, such as by line of business.

In general, as a function of the role, the CFO does not talk servers, desktop deployments, applications, or other specifics of the technology business. Conversely, with many companies establishing Technology Shared Service Centers, pressure is placed on the CIO to operate the business of IT with the same financial disciplines the CFO requires of all lines of business. The CIO must connect the value of IT services and capabilities to internal business partners. To achieve this, IT Finance teams require performance management solutions that are IT-specific, yet connected to Finance, to ensure efficient allocation of resources and effective delivery of internal services.

Part of the CFO’s role is to look at the technology projects and initiatives and think about how all of this technology is adding value. CIOs have to fill information voids, while also having to build their own financial models and performance management book of record using their own resources.

Two seemingly differing views of value can be hard to navigate and leverage. If the two divergent approaches are not connected in a common view among the key stakeholders, then, more often than not, there is ongoing confusion about value. Ultimately, the dissonance among line-of-business owners can stall or even paralyze decision-making.

A Better Language Is Needed

For the good of your organization, it’s imperative that the CIO and the CFO speak the same shared language of value and that they connect in an effort to move forward in the most aligned and productive manner possible.

Speaking a shared language, one that offers a unified financial model view and is based on a shared definition of value, is key to finding a solution. The discipline of ITFM (IT Financial Management) is about equipping both of these executive-level offices and their teams with a better language.

With an ITFM solution, you are able to:

  • Reduce the time that IT Finance spends on managing the business processes, providing more time for value-added analytical activities
  • Give IT Managers more detailed, timely, accurate data to better understand the cost & effectiveness of the services and projects they are delivering
  • Provide Line-of-Business managers with cost transparency into IT allocations and chargebacks, allowing them to better align their consumption of services with their business goals

ITFM focuses on these finance business processes:

  • IT Planning: Budgeting & forecasting of IT Operating and Capital Spend
  • IT Costing: Linking supply side financial cost structures with demand side consumption for services and projects
  • IT Chargebacks: Equitably charging lines of business for internal services and projects performed (or Showback)

IT Finance organizations typically manage these processes through a collection of multiple systems and offline spreadsheets. This approach is not ideal: it is inefficient to operate and produces inconsistent results.

Our preferred solution for IT Service Costing, co-developed with Oracle, is based on PCMCS (Profitability and Cost Management Cloud Service). Oracle's PCMCS is a cloud-based, packaged performance management application. It offers, in one package, a rules engine for cost allocations, embedded analytics, and a data management platform.

When developing the solution with PCMCS, the following were top priorities for our team:

  • That it required no large initial investment
  • That it was accessible to all
  • That it was always updated/up-to-date
  • That limited IT involvement was needed

Oracle IT Financial Management Solution Overview

Connecting Value of IT Image 1

The ITFM solution, a joint development effort with Oracle and based on valuable feedback and results from multiple Ranzal customer implementations, offers all of the following in one package:

  • Pre-Packaged Content for Cloud or On-Premise
  • Pre-Built Data Model
  • Pre-Built Costing Model & Reporting Content
  • Pre-Built Interface Specifications

A key component of the PCMCS IT Costing & Chargeback Template is its approach to modeling IT Like a Service Business, which includes the following modules:

  • Model Financials & Projects: This first step is focused on modeling financials and projects, allowing you to combine multiple data sources, perform cost center allocations and, for customers without an existing project costing system, perform basic project costing and project allocation functions.
  • Complete Costing of IT Operations: This second pillar of the solution provides a flexible framework that allows you to combine data from multiple sources, perform resource costing and perform service costing.
  • IT as a Business Service Provider: This third leg of the solution covers catalogue & bill rates, contribution cost tracing, consumer showbacks and consumer chargebacks.

 We Have Options, You Have Options

Our Flexible Maturity Model allows customers to start where they feel most comfortable, and progress in a way that is focused on maximum flexibility for maximum effectiveness. No one size fits all, and we believe in starting right where you are.

Connecting Value of IT Image 2

 

For more information or to request a demo, email us. Be sure to ask if your company qualifies for our one-day complimentary PCMCS assessment of your IT Service Costing needs.

Full Circle Planning, Cost Management, & Profitability in the Manufacturing Industry

This post corresponds to the webinar “Full Circle Planning, Cost Management & Profitability in the Manufacturing Industry.” You can access the recording here.

As we are all aware, today’s manufacturing industry faces multiple ongoing challenges, including:

  • Changing customer/consumer demands
  • Shrinking operating margins
  • Ever-changing compliance and regulatory pressures
  • Increasingly globalizing economy
  • Limited availability and visibility of detailed information

Now more than ever, manufacturers’ focus is not just on growth, but, more specifically, on profitable growth.

 

Managing Profitable Growth

When it comes to profitable growth and insight into profitability, the first place to start is the consolidated P&L.

But while the P&L offers information on profitable growth, it does not help manage profitable growth. The financial P&L provides limited insight into costs, profits and their underlying drivers from the perspective of lines of business, products, customers, markets and channels. Cost bases are imperfect, often limited to legacy standard costing and unstructured cost extracts, and results do not match costs to revenue, so margins cannot be managed at the same strategic view as revenue.

 

The Need to Focus on Strategic P&Ls

To address and contend with these challenges, we recommend a greater focus on more strategic P&Ls for the manufacturing industry.

Strategic P&Ls provide insight into both direct costs and indirect costs.

  • Direct Costs include costs directly associated with:
    • The making of a product or delivery of a service
    • Parts for the product
    • Labor for Service Delivery
    • Costs directly attributed to the selling to a customer or client
    • Shipping and handling expenses
    • Customer processing expenses
  • Indirect Costs include costs that are not directly attributable to the making of a product, delivery of a service, or the selling to a customer:
    • Operating costs (e.g., Call Center, Distribution)
    • Selling costs (e.g., Sales & Marketing)
    • Investment costs (e.g., R&D, Initiatives)
    • G&A costs (e.g., IT, HR, Finance, Admin)
    • Finance charges for Cost of Capital Employed

Measurement of indirect costs in particular can be difficult.

 

What Would A Solution for the Manufacturing Industry Look Like?

With all of this in mind, it's important to look at the big picture when determining what manufacturers can do to attain strategic P&Ls and overcome these challenges.

The ideal solution for the manufacturing industry would:

  • Design, support and evolve toward an integrated financial process
  • Leverage operating metrics and key assumptions to:
    • Link business drivers to financial performance
    • Modify drivers and assumptions to plan future performance and attain strategic P&Ls
    • Drive accountability to Lines of Business
  • Offer a consistent and transparent framework to support indirect cost attribution
  • Use integrated applications and tools to support and adapt to changing business processes
  • Provide robust reporting to business for transparency into causal factors

A true full-circle planning, costing and reporting solution that aligns and adapts to an integrated financial process includes the following:

  • Driver-based planning of revenue and departmental expenses, leveraging actual financial data and operational metrics
  • Integrated costing capabilities that can allocate indirect expenses to lines of business by leveraging the same actuals, plans and drivers used in the planning process
  • Robust and real-time reporting to surface strategic P&Ls by Customer, Product and other Lines of Business

 

Some Solutions are Ineffective and Unsustainable

Our team at Ranzal has seen many manufacturers attempt to piece together a solution using various combinations of spreadsheets, ERP, custom and packaged applications.

Typically, spreadsheets are the most common ingredient given their flexibility and accessibility, but they tend to be error-prone, highly manual and labor-intensive, and weak on controls and governance. Customizing the ERP is another common approach, but it can be too expensive, overly IT-centric, and something of a "black box." Lastly, custom applications are slow to adapt, carry high effort and cost, and also function like a "black box."

 

Oracle’s EPM as the Foundation for Full-Circle Planning

We recommend Oracle EPM's packaged applications as the foundation for configuring the right full-circle planning, costing and reporting solution, one that avoids the constraints and risks of the other approaches.

The specific Oracle EPM offerings that support a full-circle planning, costing and reporting solution involve:

  • Planning & Budgeting Cloud Service (PBCS)
    • Best-in-class solution for financial planning, budgeting and forecasting
    • Align top-down and bottom-up processes
    • Consistency of assumptions, calculations and methodologies
    • And many more features here
  • Profitability & Cost Management Cloud Service (PCMCS)
    • Computes Profitability for Units, Segments and Services
    • Pre-Built Framework for profitability modeling: Dimensions, Support for Multiple Cost Allocation methodologies, Validation reporting
    • Graphical Interactive Traceability Maps & Dashboards
    • Measures, Allocates and Assigns Cost and Revenues via User-defined Rules
    • And many more features here
  • Tightly integrated with the Oracle EPM Cloud
    • Consistent Administration with EPM Cloud Offerings
    • Shared Reporting Tools like Financial Reports & Smart View for Office
    • Proven Technology Stack

We believe in a comprehensive solution focused on a "Technology Trio" of Integrated Business Analytics: the convergence of EPM, BI and BD solutions. Experience and results have shown us that this combination provides the tools and answers needed for improved business performance, increased innovation, better vision, and increased business value.

For more information or to request a demo, email us. Be sure to ask about our complimentary one-day Profitability and Cost Management assessment and how the newly-released Oracle Profitability and Cost Management Cloud Service (PCMCS) can help modernize your solution.

Standardization of Comparative Analytics in Healthcare

A Comprehensive Solution for Value-Based Care

As healthcare providers rapidly consolidate and purchase smaller health systems, standardization is paramount: it enables comparative reporting across organizations and sites, which in turn supports changing attitudes, decreased costs, and better, more cost-effective care. Provider systems need to operate independently while using a standardized enterprise process to make effective decisions around costs, health outcomes, and patient satisfaction. Without standardization, analyzing metrics can require considerable work and time, and comparing like sites becomes problematic because nominally identical metrics can be calculated in entirely different ways at the underlying base-member level.

A standardized solution is simple – an enterprise-based model that allows data to be shared across systems and applications to facilitate comparative analytics with data integrity:

MH Image 1

Such a solution offers the ability to compare productivity indices across departments against national standards using a standard calculation approach with federated master data across all toolsets, resulting in comparative analytics to drive efficiencies and value-based care:

MH Image 2

Don’t Fear the Statistics – Using OBI for Statistical Analysis Part 1

Recently, Ranzal has been working with a client in the healthcare space implementing Oracle Business Intelligence (OBI), and a requirement surfaced to translate a scorecard report into an OBI dashboard. One of the data elements was simply captioned “Trend” and colored red, yellow, and green.  It was discovered that this Trend was the slope of a linear regression plot (more on what that means in a moment) and the color was based on an arbitrarily chosen number.  This immediately raised some concerns from the Ranzal team who then made some suggestions for more pertinent statistical analysis.

To set the stage, this healthcare client’s summarized (and greatly simplified) income statement divides Revenue into Inpatient and Outpatient and Expenses into Total Labor and Non Labor. Revenue and expenses are the primary focus of much of the analytics at an aggregate level.  A single (seemingly arbitrarily chosen) number was used to determine the colored flags for each of these measures.  This was despite Inpatient Revenue and Non Labor Expenses comprising the majority of the revenue and expense amounts (respectively).  If we were to plot out these categories for the first five months of a fiscal year, we see the following (all data have been altered to preserve client confidentiality without overly affecting the overall analytic output):

figure-1

Figure 1 Revenue and Expense Trend Plot

The trouble with plotting a trend of numbers is that it is sometimes difficult to understand, at a glance, how the organization is performing. In the plots above, clear downward and upward trends can be seen for Inpatient Revenue and Total Labor Expense (respectively).  However, upon closer examination of Outpatient Revenue and Non Labor Expense, there are two upward trending months and two downward trending months.  The overall trend is difficult to discern.

Oracle Business Intelligence Enterprise Edition (OBIEE) 12c introduced a Trendline function that allows the creation of a linear regression trendline. Once this is applied, the above trend plots can be augmented to get a clearer picture of performance:

figure-2

Figure 2 Revenue and Expense Linear Regression

This trendline uses a simple linear regression formula made up of a slope (commonly represented by the letter m) and an intercept (commonly represented by the letter b):

y = mx + b

In our trend plots, the letter y represents the revenue and expense categories and x represents the fiscal periods.

The intercept is where the trendline crosses the y-axis when x is equal to zero. For most statistical analyses, the intercept is unimportant. The slope can be thought of as the average change in y for each unit change in x. Using OBI, the slope of each revenue and expense category can be calculated and the dashboard updated:

figure-3

Figure 3 Linear Regression Slope

In the example above, the slope can be interpreted as Inpatient Revenue decreasing by an average of $291,000 a month.
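For readers who want to see the mechanics, the same kind of slope can be reproduced outside OBI with an ordinary least-squares fit. The short sketch below uses made-up monthly values (not the client's data) purely to illustrate how m and b in y = mx + b are derived:

```python
# Minimal sketch of the least-squares slope/intercept behind a linear trendline.
# The monthly values are hypothetical, chosen only to illustrate the calculation.
import numpy as np

months = np.array([1, 2, 3, 4, 5])                        # x: fiscal periods
inpatient_revenue = np.array([5.8, 5.5, 5.6, 5.2, 4.7])   # y: $ millions (made up)

# np.polyfit with degree 1 returns [slope, intercept] of the best-fit line
m, b = np.polyfit(months, inpatient_revenue, 1)

print(f"slope m = {m:.2f} $M per month")   # about -0.25: revenue falling ~$250K/month here
print(f"intercept b = {b:.2f}")            # where the fitted line crosses x = 0
```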

One issue with using the slope is that it is subjective. As was mentioned, our healthcare client had chosen a single arbitrary slope threshold for each of the revenue and expense categories. The slopes in the example above range from 29 thousand to -291 thousand. Complicating matters, the client wanted the ability to run these analyses for individual hospitals, which can dramatically affect the slope. For instance, a hospital operating in Kansas City will probably not have the same revenue growth (or shrinkage) as a hospital operating in New York City. To use the slope properly as a quantifiable objective, a target slope would have to be determined for the enterprise and for each granular level expected to be benchmarked (hospital, department, etc.). This creates some obvious maintenance issues.

A more objective approach is to use the correlation coefficient, a number that ranges from negative one to positive one. A coefficient of one indicates a perfect positive correlation, while a coefficient of negative one indicates a perfect negative correlation. For instance, for most companies, the number of units sold has a high degree of positive correlation to revenue; this corresponds to a correlation coefficient close to one. For many companies working in commodities markets, the more a competitor's revenue increases, the lower the possible market share; this is a negative correlation and results in a correlation coefficient close to negative one. A correlation coefficient of zero indicates a lack of any correlation. For instance, the number of broken arms set in a New York hospital is probably uncorrelated to the number of bowls of soup served by Panera Bread in Kansas City.
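To make the coefficient itself concrete, here is a minimal sketch (with invented figures) showing how a strongly positive correlation between units sold and revenue would be computed:

```python
# Minimal sketch of the Pearson correlation coefficient; the series are hypothetical.
import numpy as np

units_sold = np.array([120, 135, 150, 160, 175])
revenue    = np.array([1.20, 1.33, 1.49, 1.62, 1.74])   # $ millions

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal value is r
r = np.corrcoef(units_sold, revenue)[0, 1]
print(f"r = {r:.2f}")   # close to +1, i.e. a strong positive correlation
```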

It is worth noting that correlation does not mean causation. For example, consider the number of pirate attacks and the number of Microsoft Internet Explorer (IE) users:

figure-4

Figure 4 IE Usage and Pirate Attacks

The number of pirate attacks and IE users have both been in decline since 2009. As can be seen by the scatter graph on the right, the more pirate attacks, the greater the use of IE.  Regardless, naval security experts are probably not asking for adoption rate reports from Microsoft.

Returning to the client’s use case, adding the correlation coefficient to the dashboard provides a greater understanding of how the company is objectively performing:

figure-5

Figure 5 Month and Revenue / Expense Category Figure Correlation

Inpatient Revenue has a correlation of -0.69, which is moderately significant for a metric most businesses want to increase. By comparison, Outpatient Revenue has a slightly negative correlation of -0.36. While this should be a cause for concern, a "wait and see" approach (or a deeper dive into Outpatient Revenue categories) might be more prudent. Because the range of the correlation coefficient is negative one to one, filtering this analysis down to a more granular level, such as a hospital or department, will return an objective number that can be subjected to independent interpretation.

There are cases in which the subjectivity of the slope is particularly useful. In the case of our client, a full year budget was prepared at the beginning of the fiscal year and periodically updated as the year progressed. The slope of this budget could be used to generate the average dollar change desired per month.  The advantage of this is that it reduces the possible volatility of a particular month into a single number that can be compared to the benchmark.  As a final addition to the dashboard, a full year budget slope was added:

figure-6

Figure 6 Full Year Budget Slope

With the exception of Non Labor Expenses, this organization is missing the mark on all of their budgetary goals, and the trend indicated by the actual slope and correlation coefficient means this situation is likely to get worse.
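In code, the budget-slope benchmark described above amounts to fitting the same regression to the budget series and comparing the two slopes. A minimal sketch, again with hypothetical numbers rather than the client's data:

```python
# Comparing the slope of actuals against the slope of the full-year budget.
# All figures are hypothetical.
import numpy as np

months = np.arange(1, 6)
actual = np.array([5.8, 5.5, 5.6, 5.2, 4.7])   # actual revenue ($M)
budget = np.array([5.9, 6.0, 6.1, 6.2, 6.3])   # budgeted revenue for the same months ($M)

actual_slope, _ = np.polyfit(months, actual, 1)
budget_slope, _ = np.polyfit(months, budget, 1)

# A shortfall means actuals are trending below the budgeted average monthly change
print(f"actual slope {actual_slope:+.2f} vs budget slope {budget_slope:+.2f} $M/month")
```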

A word of warning about statistics in general and the use of slope and correlation coefficient in particular: micro and macro trends should both be considered, and extreme outliers can mask actual trends.

For an example of micro and macro trends, consider JCPenney, a retailer that has been struggling since 2010. The following visualization (created using Oracle Data Visualization Desktop) charts the quarterly revenue from 2004 Q3 to 2016 Q4 along with the trendline for the entire period. The bars represent the correlation coefficient through that particular quarter (i.e., the first bar is the correlation between 2004 Q3 and 2004 Q4, while the second bar is the correlation among 2004 Q3, 2004 Q4, and 2005 Q1, etc.):

figure-7

Figure 7 JCPenney Revenue Trend and Correlation

Notice that the first correlation bar is equal to one. When there are only two data points, the correlation coefficient will be equal to one, negative one, or zero. The next data point and correlation for 2005 Q1 (JCPenney recognizes holiday revenue in Q1 of each year) continue the high correlation streak; however, the following quarter drops the correlation down to 0.35. The correlation fluctuates quarterly until about 2012 Q2, when the definite downward trend is established.

A savvy analyst will break JCPenney's performance during this time range into three distinct trends: upward trending from 2004 to 2008 Q1, a diminished upward trend from 2008 Q2 to 2012 Q1, and then flat, but greatly reduced, revenue from there:

figure-8

Figure 8 JCPenney Distinct Trends

As an example of how an extreme outlier can affect statistical analysis, consider GTx Incorporated, a pharmaceutical drug developer. In December 2010, GTx recognized $49.9 million in revenue from a partnership with Merck & Co., Inc., which spiked GTx's revenue (previously averaging $2 million a quarter) to $56.7 million:

figure-93

Figure 9 GTx Incorporated Revenue Trend

In the visualization above, the orange projected trendline was calculated using revenue from 2004 Q1 through 2009 Q4. The purple trendline is the projection calculated through 2010 Q1, which includes the huge revenue spike. Obviously, the orange trendline is the more accurate due to its exclusion of the extreme data point.
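The distortion is easy to reproduce: fit the trendline with and without the spike quarter. The figures below are invented for illustration, not GTx's actual results:

```python
# How one extreme quarter drags a fitted trendline; the revenue series is invented.
import numpy as np

quarters = np.arange(1, 9)                                        # eight quarters
revenue  = np.array([2.1, 1.9, 2.2, 2.0, 2.1, 1.8, 2.0, 56.7])   # $M; last point is the spike

slope_with_spike, _    = np.polyfit(quarters, revenue, 1)
slope_without_spike, _ = np.polyfit(quarters[:-1], revenue[:-1], 1)

print(f"including the outlier: {slope_with_spike:+.2f} $M/quarter")    # steeply positive
print(f"excluding the outlier: {slope_without_spike:+.2f} $M/quarter") # essentially flat
```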

Statistical analytics is part science/technology and part art. As with any data and visualizations, a certain degree of intelligent interpretation is needed to determine what it all really means. Functional users should focus on what the various statistical interpretations mean and not be distracted by the complexity of the underlying mathematical functions. Trend visualizations can aid users in understanding how to interpret these statistical calculations. Many organizations miss opportunities because individuals are unwilling to embrace statistical methods, often for lack of solid education and guidance about what these numbers really mean. Training, change management, and the creation of rich visualizations can help enterprises harness the capabilities of statistical analysis and extend the role of their business intelligence systems.

Process Simplification – Migrating from HPCM Standard Profitability to Management Ledger

With the introduction of Hyperion Profitability and Cost Management (HPCM), many organizations have recognized the power of this breakthrough solution to build sophisticated and powerful cost models. As such, HPCM has been successfully in use for several years, and in numerous cases, its use has been expanded.

Since the initial release of HPCM, Oracle has developed additional variations of HPCM to provide a full suite of capabilities in costing and profitability that can more specifically provide the right tool for the right job (RTRJ). These additional offerings include HPCM-Detailed Profitability and HPCM-Management Ledger, the latter of which is available either in the on premise version (HPCM-ML) or the cloud version – Profitability & Cost Management-Cloud Service (PCMCS).  The original solution of HPCM is now referred to as HPCM-Standard Profitability (HPCM-Standard).

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). Given the multiple offerings now available, this experience suggests it is worthwhile to evaluate the applicability of the newer solutions to an organization's existing use cases and to consider making a change where appropriate. In particular, Management Ledger offers enough flexibility and process simplification to warrant considering the conversion of an HPCM Standard model to HPCM-ML or PCMCS. This article discusses that process.

Background

Since HPCM's introduction, it has become clear that there is not necessarily a one-size-fits-all solution for every cost allocation and profitability need. All allocations fundamentally follow the basic formula A = S x F x D / Sum(D), where A = the target Allocated amount, S = the Source amount, F = the Factor (the percent of the source amount to be allocated, often 100%), D = the Driver quantity for a target, and Sum(D) = the sum of Driver quantities across all targets.
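As a quick illustration of that formula (a generic sketch, not HPCM's allocation engine), allocating a $100,000 cost pool across three departments in proportion to a headcount driver looks like this:

```python
# Generic illustration of A = S x F x D / Sum(D); names and values are hypothetical.
source_amount = 100_000.0          # S: the cost pool being allocated
factor = 1.0                       # F: percent of the source to allocate (here 100%)

drivers = {"Dept A": 25, "Dept B": 60, "Dept C": 15}   # D: driver quantity per target
total_driver = sum(drivers.values())                   # Sum(D)

allocated = {target: source_amount * factor * d / total_driver
             for target, d in drivers.items()}

print(allocated)   # {'Dept A': 25000.0, 'Dept B': 60000.0, 'Dept C': 15000.0}
```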

However, this fundamental formula is where similarities end and distinctions begin. The original solution, HPCM-Standard, is well suited for cases where highly complex allocation models are utilized.  It is also well positioned where adherence to a highly-structured framework is sought, and it provides capability for highly detailed graphical tracing of allocations in the user interface.

Alternatively, Detailed Profitability, which can be deemed as the “heavy-lifter” of the offerings, requires that users define relatively simple allocation rules through a single allocation stage. However, in exchange for this concession, the solution can apply those rules across a wide range of dimensions and is able to do so at a very granular level of detail.  Also referred to as “Microcosting,” this solution leverages source pools and rates applied to a high volume of transactions or near-transactions.  Firms within industries such as consumer goods, transportation and distribution, retail banking, and healthcare are among those that may want to leverage this capability.  This solution enables capture of variation in cost at the shipment, order, transaction, or encounter level of detail, and then aggregates those values to higher levels such as product, service, or customer for analysis.

The third offering, Management Ledger, combines aspects of both of the other two solutions, such as some of the metadata granularity of Detailed Profitability, along with the logic complexity of Standard. This enables users to define custom models with fewer restrictions on the framework and fewer limits on the level of detail required for reporting.  Management Ledger is also flexible to accommodate future changes through its Rule Set/Rule sequencing construct.  Subsequent allocation logic changes can be of a substantial nature, potentially up to a near redesign.  Also, the rules building process itself is simplified in Management Ledger and it is one that aligns well with the intuition of finance users.  Further, Oracle’s current strategic direction is with Management Ledger, most notably seen in the recent release of PCMCS.

What is the benefit of conversion?

Management Ledger offers several key capabilities that can improve, streamline, or otherwise address existing challenges in a Standard Profitability environment.

  1. Management Ledger does not rely on a back-end staging table paradigm for data loading as does HPCM-Standard. Such reliance requires the availability of resources with the database skills required to support SQL interfaces to automate model processes, as well as to perform maintenance when metadata updates are made. For some user sites, the availability of these skills is limited.
  2. Management Ledger is an ASO application, so it is not subject to the metadata restriction faced when deploying the HPCM-Standard calculation cube, which is BSO and can reach the maximum number of potential blocks due to metadata duplication. Since Management Ledger does not duplicate the dimensions, it makes reporting easier for end users and can eliminate the need for the "simplified" HPCM reporting application that is often created in an implementation of HPCM-Standard.
  3. Management Ledger does not require the use of pre-defined stages and an associated limit of three dimensions per stage as utilized in HPCM-Standard. This framework drove design decisions and influences future changes at certain user sites.
  4. Management Ledger is flexible to accommodate new methods of allocating data. The presence of a dimension in an application allows for its selection and filtering without the need for re-design.
  5. Management Ledger provides an interface that can be quickly learned by business users. Its set-up and maintenance simply requires the identification of sources, destinations, and the driver bases of allocations. Because it does not rely upon or require use of any specific methodology, existing Planning and HFM users can quickly learn the navigation and logic of Management Ledger. As shown below, the process for rules building in Management Ledger is straightforward.
    1-16-1

    Management Ledger Rules Building Interface

    1-16-2

  6. Management Ledger offers a multitude of standard reports for model documentation, rules validation, rule balance summaries of the results, and graphic traceability. PCMCS adds Business Intelligence visualizations such as scatter plots, cumulative profitability “whale curves,” and KPIs.

 

PCMCS Visuals

1-16-31-16-41-16-5

With all of these potential benefits, there are also offsetting considerations. Management Ledger may require more maintenance than HPCM-Standard due to the higher number of allocation rules required to enable parallel processing. Further, the built-in graphical traceability screen in Management Ledger may be considered by some as less intuitive than the screen provided with HPCM-Standard. Therefore, even in cases where Management Ledger is seen as a useful fit, the advantages over Standard Profitability will not always be sufficient justification for the time and effort of a conversion.

What are the criteria for undertaking a Management Ledger Conversion?

To help evaluate whether it is worthwhile to pursue migrating a Standard Profitability model to Management Ledger, the following questions can be asked:

  1. Is there a major re-organization pending that is prompting a re-evaluation of the overall stages framework?
  2. Will there be future changes in which new allocation processes are added, such as moving beyond organizational allocations to ones that include other dimensions such as product or customer?
  3. Do changes in allocation methodologies occur often? Will business users be required to make these updates/changes and without the support of IT staff?
  4. Are new scenarios such as What-If or Ad-Hoc planned and is there an interest in testing different allocation methodologies versus the existing live production models?
  5. Are the theoretical limits associated with the Block Storage Outline (BSO) being approached?
  6. Is the process for updating the Standard Profitability staging tables considered to be time consuming and/or is the automation for populating the staging tables viewed as complex or poorly understood?
  7. Are there currently other Management Ledger models in the organization and is there a need or desire to achieve communization of platforms?
  8. Is there an objective to move applications to the Cloud?

 

What are the steps to migrate?

If the answer to any of the above questions is yes, then there is a potential opportunity to convert a Standard Profitability model to Management Ledger. In such a case, a prototype to test the concept should be created.  This prototype should be loaded with a sample of data and rules, typically for at least one POV, and calculated and validated.  Though each situation will have unique requirements, the overall steps are as follows:

Prototype Build -> Rules Creation -> Testing -> Validation -> Adjustment -> Migration

General Steps to Migrating to Management Ledger

  1. Migrate the Standard model to the same environment where the Management Ledger test will be built.
  2. Run a calculation of the Standard model to obtain a benchmark performance time.
  3. Create a new cube and database and copy the dimensions from the existing cube. The new Master application should be created with its dimensionality copied from the existing Standard Profitability Master application, not from the calculation cube, in order to avoid duplicate dimensions.
  4. Copy the dimensions from the old to the new cube. Make Cube Outline Updates.
    • Change the NoMember dimension member in each dimension to NoDimensionName.
    • Determine the dimension for the Drivers, usually the DataType or Account dimension.
    • Add the drivers from the Measures dimension to the Account or a DataType dimension.
    • Delete Measures and AllocationType dimensions (used with Standard model).
    • Add the Rule and Balance dimensions (used with Management Ledger models).
    • Add UDAs for potential rule filtering requirements.
    • Should both Source and Target allocation details be required for reporting, dimensions may need to be duplicated or split, such as in a case with Initial Cost Pool and Final Cost Pool.
  5. Create a new Management Ledger Profitability application that references the new cube.
  6. Deploy the Management Ledger Essbase Calculation engine.
  7. Choose and create a single POV to start.
  8. Import data from the existing cube to the new one utilizing the various methods available such as free form loading without rules, structured loading with rules, spreadsheet add-ins such as SmartView or other tools such as FDM/FDMEE. Note: For PCMCS, flat files of dimensions and data are employed.
  9. Document the allocation rules in a template.
  10. Enter the allocation rules through the ML user interface.
  11. Run Model Validation to check the new Rule Sets and Rules for errors before calculating.
  12. Launch a calculation. Start with running a single rule.
  13. Validate the Results. Progressively select more rules for successive calculation as rules are validated.
  14. Adjust methods iteratively.
  15. Create and update a report to demonstrate the validations to end-users as well as how the results are consumed.
  16. Migrate, once validation is complete including acceptability of both the results values and the processing times.

 

Some thoughts on building allocation rules

Upon having a Management Ledger outline, the allocation rules from Standard should be constructed through the user interface. There should be an association between the Stages in a Standard model and the Rule Sets in Management Ledger. As a starting point, the Rule Set sequence should match the flow of the stages, though it may be necessary to break some stages into multiple rule sets.

1-16.6.png

Once the rule sets are determined, the rules themselves should be documented in a template (Excel, Word, etc.) that is easy to manage and understand. The example that follows shows the dimensionality of the Source, Destination, Driver Basis, and Source Offset.

This template becomes part of the documentation of the prototype. Upon completion of the template, a user should build the rule sets and rules in the Management Ledger interface.  One of the key benefits of Management Ledger is to reference parent level values in the assignment rules.  This provides the ability to create many-to-many source-destination associations with few keystrokes.  This not only saves time in initial set-up, but also makes the entire process data driven such that when new dimension members such as new accounts, cost centers, products, or customers are added, the allocation rules automatically accommodate them without the need for editing or updating.  The ability to select at the parent level also reduces the need for automation routines of the types that are frequently created in Standard Profitability implementations, such as those used to update staging tables (Management Ledger does not have staging tables).

Users should start with referencing the highest-level parents to make the process as automated as possible. If performance becomes an issue, it may be necessary to reference mid or lower level parents.  Rules should be tested iteratively, i.e. run individually and then in groups to validate both the answers and to track processing time.

If calculation times exceed requirements or expectations, then start moving references to lower level parents. Avoid going to children as that will increase maintenance in the future.

Validation Concepts

Use the Rule Balancing Report to validate the cost flow and confirm that allocations in and out match expectations. Users should also generate a set of SmartView queries from the control HPCM-Standard Model and compare those to a set of SmartView queries from the HPCM-ML prototype.  Input and Stage amounts from HPCM-Standard should compare to Rule Set amounts in HPCM-ML, including checks that rule sets are using drivers correctly.  Calculation time and performance should also be tracked and benchmarked.

1-16-7

Conclusion

The advent of HPCM Management Ledger in both the on premise and cloud-based versions provides organizations with an opportunity to consider their existing solution and whether a migration to Management Ledger is warranted. Multiple considerations must be evaluated in this decision, and a prototype-based assessment is recommended as part of the process.  Edgewater Ranzal provides an Assessment service offering to assist organizations with this evaluation, as well as a subsequent implementation.  With over twenty experienced full-time consultants across the Americas and EMEA, and with more than twenty-five successful HPCM projects delivered since 2009, Edgewater Ranzal is the leading Oracle partner in delivering all versions of HPCM. Its comprehensive multi-product delivery approach can incorporate other tools such as Planning, DRM, FDMEE, & OBIEE.  These qualifications, along with its close relationship with Oracle Development, make Edgewater Ranzal the premier partner for client success.

 

Accelerate Your Ride to the Cloud: Extending ERP with Oracle Profitability & Cost Management Cloud Service (PCMCS) for Standard Cost Rate Development

A common need among manufacturing organizations is improving the process of developing annual labor and overhead standards to use as input into standard cost rates for product cost and inventory valuation. In spite of the investments made in ERP solutions, determining the updated direct labor rate and overhead rate components of a product standard cost for an upcoming fiscal year is typically an offline, Excel-based exercise performed on historical data extracted from the ERP. The release of Oracle Profitability and Cost Management Cloud Service (PCMCS) in October 2016 provides a unique opportunity for manufacturers to ease, streamline and document the process of generating the cost-per-direct-labor-hour or cost-per-machine-hour rates that are requisite in standard costing.

Background

Generally accepted accounting principles (GAAP) allow for one of multiple methods for the valuation of inventory to a manufacturer: Last-In, First-Out (LIFO); First-In, First-Out (FIFO); or a Weighted Average.

Because prices for labor and materials fluctuate throughout a year and inventory is built or drawn, it is difficult to track inventory on an on-going basis using these methods. Further, from a management perspective, it is more meaningful to separate the effects of price changes and inventory builds/draws from values associated with normal business.  Pricing decisions, incentive compensation and matching expenses to the physical flow of goods would all be adversely impacted by trying to constantly manage to these methods.

A common approach to achieve meaningful inventory and cost of goods sold values is to establish a “standard cost” for every product and then adjust the value of inventory on a separate line at year-end, to bring it to the GAAP basis.

This standard cost requires direct labor, direct material and the inclusion of an amount representing the "absorption" of certain plant-related overhead costs into the inventory value.

There are two forms of overhead that must be included in the inventory value from a GAAP perspective: 1) Labor overhead and 2) Manufacturing overhead, sometimes called Indirect Overhead.

  1. Labor overhead represents the costs of direct labor resources above and beyond their direct hourly wage rate. This amount includes payroll taxes, retirement and health care benefits, workers’ compensation, life insurance and other fringe benefits.
  2. Manufacturing overhead includes a grouping of costs that are related to the sustainment of the manufacturing process, but are not directly consumed or incurred with each unit of production. Examples of these costs include:
  • Materials handling
  • Equipment Set-up
  • Inspection and Quality Assurance
  • Production Equipment Maintenance and Repair
  • Depreciation on manufacturing equipment and facilities
  • Insurance and property taxes on manufacturing facilities
  • Utilities such as electricity, natural gas, water, and sewer required for operating the manufacturing facilities
  • The factory management team

The most common first step for determining the value of overhead in inventory is to use a predetermined rate that represents a cost charge per direct labor hour or per machine hour. From product bills of material and routings, the total number of hours of labor or machine usage for a unit volume of production is known. The overhead cost rate per direct labor hour (or machine hour) multiplied by the number of hours required per unit of production yields the overhead cost per unit. In the example below, the ERP will calculate the cost per work center, but it relies on the Direct Labor and Overhead Rates to complete this process.

dp-image-1jpg

The challenge comes in calculating the applicable pre-determined rate for overhead per direct labor hour or machine hour by the applicable cost center or work center. PCMCS can assist with automating and updating this process.
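To make the arithmetic concrete, the sketch below shows the pre-determined rate calculation and its application to a routing, using hypothetical work centers and amounts; this illustrates the input the ERP needs, not PCMCS internals:

```python
# Hypothetical overhead rate per direct labor hour (DLH) by work center,
# and the overhead absorbed by one unit of product.
overhead_cost_pool = {"Assembly": 480_000.0, "Machining": 720_000.0}   # annual overhead $
direct_labor_hours = {"Assembly": 32_000.0,  "Machining": 24_000.0}    # planned annual DLH

overhead_rate = {wc: overhead_cost_pool[wc] / direct_labor_hours[wc]
                 for wc in overhead_cost_pool}          # $ per DLH

# A product routed for 1.5 DLH in Assembly absorbs:
unit_overhead = 1.5 * overhead_rate["Assembly"]

print(overhead_rate)    # {'Assembly': 15.0, 'Machining': 30.0}
print(unit_overhead)    # 22.5 dollars of overhead per unit
```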

A Better Solution: The Ranzal PCMCS Standard Cost Solution

PCMCS provides the ability to quickly and flexibly put the creation of multi-step allocation processes into the hands of business users. It also provides for the management of hierarchies without the need for external dimension management applications as well as standard file templates for data upload.  Further, a series of standard dashboard and report visuals augment the viewing and monitoring of results.  These capabilities allow organizations to quickly load and allocate expenses to applicable overhead cost pools and then merge those cost pools with applicable labor or machine hour values to obtain the relevant overhead rates.

PCMCS allows users to quickly select the cost centers or work centers that are applicable as sources to be included in the overhead rate:

dp-image-2jpg

Users then can easily select the targets for collecting these costs into relevant pools,

dp-image-3

as well as the operational metric to use to assign these overhead costs to their applicable pools.

dp-image-4


dp-image-5

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). Following the release of PCMCS, Ranzal will be announcing a Cloud service offering that leverages the power of the Cloud to provide an accelerated method of producing the required inputs for overhead allocation in standard costing.

More than just Standard Costing

Additionally, while PCMCS provides an excellent way to develop overhead rates for standard costing, it can simultaneously be utilized to determine allocations and costing valuations that leverage other methodologies for product and customer costing and profitability. Much has been written about the potential for inaccuracies if the standard cost basis of overhead allocation in product costing were used universally or exclusively for management analysis. Overhead has become such a large portion of total cost that, in many cases, overhead rates can be three or four times higher than their respective direct labor rates. This suggests a general lack of causality between overhead and direct labor hours in many cases, and it has led to the evolution of other costing methods. Activity Based Costing is one such example, while simply allocating manufacturing variances to product lines is another.

PCMCS can be used to meet the requirements for both the externally reported methods and the management methods of product costing.

All of the Results in One Place

Determining the method by which overhead should be captured in the cost of different products in inventory is an important process because it moves a large amount of dollars from expense to asset, usually temporarily but sometimes permanently, and this can impact profitability and share price.

For the purpose of valuing inventory for statutory reporting, the overhead rate method is considered acceptable and is widely used. It is therefore important that organizations find a way to develop and manage these cost valuations in a manner that is well documented, uses a transparent methodology, and reduces the amount of time spent on the process. However, it is not the only method that should be used for considering overhead in product and customer costing and profitability analysis. Further, selling, general and administrative expenses (SG&A) represent another layer of cost that, while not part of standard inventory cost, should be considered in overall product costs from a management perspective.

To this end, the Edgewater Ranzal PCMCS Standard Cost solution will provide an opportunity to fulfill multiple needs in costing and profitability and will do so in a manner that will be faster and more user-friendly than what has previously been experienced.

A Comparison of Oracle Business Intelligence, Data Visualization, and Visual Analyzer

We recently authored The Role of Oracle Data Visualizer in the Modern Enterprise in which we had referred to both Data Visualization (DV) and Visual Analyzer (VA) as Data Visualizer.  This post addresses readers’ inquiries about the differences between DV and VA as well as a comparison to that of Oracle Business Intelligence (OBI).  The following sections provide details of the solutions for the OBI and DV/VA products as well as a matrix to compare each solution’s capabilities.  Finally, some use cases for DV/VA projects versus OBI will be outlined.

For the purposes of this post, OBI will be considered the parent solution for both on premise Oracle Business Intelligence solutions (including Enterprise Edition (OBIEE), Foundation Services (BIFS), and Standard Edition (OBSE)) as well as Business Intelligence Cloud Service (BICS). OBI is the platform thousands of Oracle customers have become familiar with to provide robust visualizations and dashboard solutions from nearly any data source.  While the on premise solutions are currently the most mature products, at some point in the future, BICS is expected to become the flagship product for Oracle at which time all features are expected to be available.

Likewise, DV/VA will be used to refer collectively to Visual Analyzer packaged with BICS (VA BICS), Visual Analyzer packaged with OBI 12c (VA 12c), Data Visualization Desktop (DVD), and Data Visualization Cloud Service (DVCS). VA was initially introduced as part of the BICS package, but has since become available as part of OBIEE 12c (the latest on premise version).  DVD was released early in 2016 as a stand-alone product that can be downloaded and installed on a local machine.  Recently, DVCS has been released as the cloud-based version of DVD.  All of these products offer similar data visualization capabilities as OBI but feature significant enhancements to the manner in which users interact with their data.  Compared to OBI, the interface is even more simplified and intuitive to use which is an accomplishment for Oracle considering how easy OBI is to use.  Reusable and business process-centric dashboards are available in DV/VA but are referred to as DV or VA Projects.  Perhaps the most powerful feature is the ability for users to mash up data from different sources (including Excel) to quickly gain insight they might have spent days or weeks manually assembling in Excel or Access.  These mashups can be used to create reusable DV/VA Projects that can be refreshed through new data loads in the source system and by uploading updated Excel spreadsheets into DV/VA.

While the six products mentioned can be grouped nicely into two categories, the following matrix outlines the differences between the individual products. The sections that follow provide commentary on some of the features.

Table 1

Table 1:  Product Capability Matrix

Advanced Analytics provides integrated statistical capabilities based on the R programming language and includes the following functions:

  • Trendline – This function provides a linear or exponential plot through noisy data to indicate a general pattern or direction for time series data. For instance, while there is a noisy fluctuation of revenue over these three years, a slowly increasing general trend can be detected by the Trendline plot:
Figure 1

Figure 1:  Trendline Analysis

 

  • Clusters – This function attempts to classify scattered data into related groups. Users are able to determine the number of clusters and other grouping attributes. For instance, these clusters were generated using Revenue versus Billed Quantity by Month:
Figure 2

Figure 2:  Cluster Analysis

 

  • Outliers – This function detects exceptions in the sample data. For instance, given the previous scatter plot, four outliers can be detected:
Figure 3

Figure 3:  Outlier Analysis

 

  • Regression – This function is similar to the Trendline function but correlates relationships between two measures and does not require a time series. This is often used to help create or determine forecasts. Using the previous Revenue versus Billed Quantity, the following Regression series can be detected:
Figure 4

Figure 4:  Regression Analysis
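As a rough analogue of what the Outliers function does (a generic sketch, not the R routines Oracle ships with DV/VA), one can flag points whose residual from a fitted regression line is more than a couple of standard deviations from the trend; the data below is hypothetical:

```python
# Flag points that sit far from the fitted revenue-vs-quantity trend.
# Data is hypothetical; revenue at billed_qty = 15 collapses relative to the trend.
import numpy as np

billed_qty = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
revenue    = np.array([100, 112, 121, 130, 141, 60, 161, 170, 181, 192])

m, b = np.polyfit(billed_qty, revenue, 1)        # regression: revenue ~ m*qty + b
residuals = revenue - (m * billed_qty + b)

z = residuals / residuals.std()                  # standardized residuals
outliers = billed_qty[np.abs(z) > 2]
print(outliers)                                  # [15] -> the point that broke the trend
```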

 

Insights provide users the ability to embed commentary within DV/VA projects (except for VA 12c). Users take a “snapshot” of their data at a certain intersection and make an Insight comment.  These Insights can then be associated with each other to tell a story about the data and then shared with others or assembled into a presentation.  For those readers familiar with the Hyperion Planning capabilities, Insights are analogous to Cell Comments.  OBI 12c (as well as 11g) offers the ability to write comments back to a relational table; however, this capability is not as flexible or robust as Insights and requires intervention by the BI support team to implement.

Figure 5

Figure 5:  Insights Assembled into a Story

 

Direct connections to a Relational Database Management System (RDBMS) such as an enterprise data warehouse are now possible using some of the DV/VA products. (For the purpose of this post, inserting a semantic or logical layer between the database and user is not considered a direct connection).  For the cloud-based versions (VA BICS and DVCS), only connections to other cloud databases are available while DVD allows users to connect to an on premise or cloud database.  This capability will typically be created and configured either by the IT support team or analysts familiar with the data model of the target data source as well as SQL concepts such as creating joins between relational tables.  (Direct connections using OBI are technically possible; however, they require the users to manually write the SQL to extract the data for their analysis).  Once these connections are created and the correct joins are configured between tables, users can further augment their data with data mashups.  VA 12c currently requires a Subject Area connected to a RDBMS to create projects.

Leveraging OLAP data sources such as Essbase is currently only available in OBI 12c (as well as 11g) and VA 12c. These data sources require that the OLAP cube be exposed as a Subject Area in the Presentation layer (in other words, no direct connection to OLAP data sources).  OBI is considered very mature and offers robust mechanisms for interacting with the cube, including the ability to use drillable hierarchical columns in Analysis.  VA 12c currently exposes a flattened list of hierarchical columns without a drillable hierarchical column.  As with direct connections, users are able to mashup their data with the cubes to create custom data models.

While the capabilities of the DV/VA product set are impressive, the solution currently lacks some key capabilities of OBI Analysis and Dashboards. A few of the most noticeable gaps between the capabilities of DV/VA and OBI Dashboards are the inability to:

  • Create the functional equivalent of Action Links which allows users to drill down or across from an Analysis
  • Schedule and/or deliver reports
  • Customize graphs, charts, and other data visualizations to the extent offered by OBI
  • Create Alerts which can perform conditionally-based actions such as pushing information to users
  • Use drillable hierarchical columns

At this time, OBI should continue to be used as the centerpiece for enterprise-wide analytical solutions that require complex dashboards and other capabilities. DV/VA will be more suited for analysts who need to unify discrete data sources in a repeatable and presentation-friendly format using DV/VA Projects.  As mentioned, DV/VA is even easier to use than OBI which makes it ideal for users who wish to have an analytics tool that rapidly allows them to pull together ad hoc analysis.  As was discussed in The Role of Oracle Data Visualizer in the Modern Enterprise, enterprises that are reaching for new game-changing analytic capabilities should give the DV/VA product set a thorough evaluation.  Oracle releases regular upgrades to the entire DV/VA product set, and we anticipate many of the noted gaps will be closed at some point in the future.