Data Governance in the Cloud: An Integrated Strategy for a Unified Solution

Are you tasked with making organizational decisions that place you in a major dilemma? As a decision-maker in today’s fast-paced economy, you may wonder how you can cut costs, improve the bottom line, and still maintain the data quality necessary to make strategic decisions.

Take heart because it IS possible to achieve a balance of on-premise and off-premise Enterprise Performance Management (EPM) software while maintaining integrity and control of your data to provide the quality and data assurance needed for success – AND benefit financially from new Cloud technologies.

Success is a combination of understanding what each data track requires and creating an integration strategy consisting of the necessary business processes and software tools to deliver consistency and integrity for your strategic EPM data.

Past trends called for a tight on-premise coupling of all EPM software to achieve the best results. This strategy required maintaining a large hardware and software infrastructure and the personnel to keep everything running smoothly.  The new Cloud “Pod” subscriptions are geared toward reducing those high infrastructure costs, which is a clear financial benefit. As with most things in life, though, moving to Cloud technology has a consequence: Pod technology can create isolated silos of information. Fortunately, there is a straightforward resolution.  The key to overcoming this limitation is to understand what each component offers and demands, and to create an integration strategy that bridges the gap.

If you are interested in learning how to create this strategy and bring the various pieces together as a unified solution, or if your organization plans to migrate to the EPM Cloud platform in the future, this whitepaper helps define a process to pre-build the integration strategy and make moving to the Cloud easier, with reduced time to migrate.

Download our whitepaper: Data Relationship Management (DRM) for Cloud-Based Technologies:  Using DRM for Data Governance in the Cloud

Full Circle Planning, Cost Management, & Profitability in the Manufacturing Industry

This post corresponds to the webinar “Full Circle Planning, Cost Management & Profitability in the Manufacturing Industry.” You can access the recording here.

As we are all aware, today’s manufacturing industry faces multiple ongoing challenges, including:

  • Changing customer/consumer demands
  • Shrinking operating margins
  • Ever-changing compliance and regulatory pressures
  • An increasingly global economy
  • Limited availability and visibility of detailed information

Now more than ever, manufacturers’ focus is not just on growth, but, more specifically, on profitable growth.

 

Managing Profitable Growth

When it comes to profitable growth and insight into profitability, the first place to start is the consolidated P&L.

But while the P&L offers information on profitable growth, it does not help manage profitable growth. The financial P&L provides limited insight into costs, profits, and their underlying drivers from the perspective of lines of business, products, customers, markets, and channels. Cost bases are imperfect, often limited to legacy standard costing and unstructured cost extracts. Results lack the matching of costs to revenue needed to manage margins at the same strategic view as revenue.

 

The Need to Focus on Strategic P&Ls

To address and contend with these challenges, we recommend a greater focus on more strategic P&Ls for the manufacturing industry.

Strategic P&Ls provide insight into both direct costs and indirect costs.

  • Direct Costs include costs directly associated with:
    • Making a product or delivering a service:
      • Parts for the product
      • Labor for service delivery
    • Selling to a customer or client:
      • Shipping and handling expenses
      • Customer processing expenses
  • Indirect Costs include costs that are not directly attributable to making a product, delivering a service, or selling to a customer:
    • Operating costs (e.g., Call Center, Distribution)
    • Selling costs (e.g., Sales & Marketing)
    • Investment costs (e.g., R&D, Initiatives)
    • G&A costs (e.g., IT, HR, Finance, Admin)
    • Finance charges for Cost of Capital Employed

Measurement of indirect costs in particular can be difficult.

 

What Would A Solution for the Manufacturing Industry Look Like?

With all of this in mind, it’s important to look at the big picture when determining what manufacturers can do to attain strategic P&Ls and overcome these challenges.

The ideal solution for the manufacturing industry would:

  • Design, support, and evolve an integrated financial process
  • Leverage operating metrics and key assumptions to:
    • Link financial performance to its underlying business drivers
    • Modify drivers and assumptions to plan future performance and attain strategic P&Ls
    • Drive accountability to Lines of Business
  • Offer a consistent and transparent framework to support indirect cost attribution
  • Use integrated applications and tools to support and adapt to changing business processes
  • Provide robust reporting to the business for transparency into causal factors

A true full-circle planning, costing and reporting solution that aligns and adapts to an integrated financial process includes the following:

  • Driver-based planning of revenue and departmental expenses, leveraging actual financial data and operational metrics
  • Integrated costing capabilities that can allocate indirect expenses to lines of business by leveraging the same actuals, plans and drivers used in the planning process
  • Robust and real-time reporting to surface strategic P&Ls by Customer, Product and other Lines of Business

 

Some Solutions are Ineffective and Unsustainable

Our team at Ranzal has seen many manufacturers attempt to piece together a solution using various combinations of spreadsheets, ERP, custom and packaged applications.

Typically, spreadsheets are the most common ingredient given their flexibility and accessibility, but they tend to be error-prone, highly manual and labor-intensive, and risky from a controls and governance standpoint. We’ve also seen ERP customization as a common approach, but this can be too expensive, overly IT-centric, and somewhat of a “black box.” And lastly, custom applications are slow to adapt, can carry high effort and cost, and also function like a “black box.”

 

Oracle’s EPM as the Foundation for Full-Circle Planning

We recommend Oracle EPM’s packaged applications as the foundation for configuring the right full-circle planning, costing and reporting solution, one that avoids the constraints and risks of the approaches described above.

The specific Oracle EPM offerings that support a full-circle planning, costing and reporting solution include:

  • Planning & Budgeting Cloud Service (PBCS)
    • Best-in class solution for financial planning, budgeting and forecasting
    • Align top-down and bottom-up processes
    • Consistency of assumptions, calculations and methodologies
    • And many more features here
  • Profitability & Cost Management Cloud Service (PCMCS)
    • Computes Profitability for Units, Segments and Services
    • Pre-Built Framework for profitability modeling: Dimensions, Support for Multiple Cost Allocation methodologies, Validation reporting
    • Graphical Interactive Traceability Maps & Dashboards
    • Measures, Allocates and Assigns Cost and Revenues via User-defined Rules
    • And many more features here
  • Both are tightly integrated with the Oracle EPM Cloud
    • Consistent Administration with EPM Cloud Offerings
    • Shared Reporting Tools like Financial Reports & Smart View for Office
    • Proven Technology Stack

We believe in a comprehensive solution focused on a “Technology Trio” of Integrated Business Analytics: the convergence of EPM, BI, and BD solutions. Experience and results have shown us that this combination provides the tools and answers needed for improved business performance, increased innovation, better vision, and increased business value.

For more information or to request a demo, email us. Be sure to ask about our complimentary one-day Profitability and Cost Management assessment and how the newly-released Oracle Profitability and Cost Management Cloud Service (PCMCS) can help modernize your solution.

Don’t Fear the Statistics – Using OBI for Statistical Analysis Part 2

Nearly every client Edgewater Ranzal partners with uses statistical averages in their analytic and reporting solutions. As far as statistical functions go, the average is probably the easiest to understand; however, its limitation is that it can be difficult to determine how to rate the individual performance of contributors to that average.  Consider the following examples:

  • The average cost of a gallon of milk is $3.20 and the corner convenience store is selling it for $3.45; is that a significant deviation from the average?
  • If the average NFL player’s base salary is $1.86 million and the Tennessee Titans’ Marcus Mariota made $5.5 million, is this an exceptional payout? Is the salary significant when his role as the team’s starting quarterback is considered?
  • Suppose the average gross margin percent for a company’s business units is 58% and one particular business unit’s actual gross margin is 46%. Is that business unit truly underperforming?

It turns out that judging performance against an average alone is very subjective. In this post, we explore how the standard deviation around the average can be used to mitigate that subjectivity and how it can be incorporated into data visualizations to identify true outliers.

The NASDAQ-100 is comprised of the largest domestic and international non-financial companies (based on market capitalization) listed on the Nasdaq Stock Exchange. It includes technology giants such as Apple and Alphabet (parent company of Google) along with consumer services companies such as Bed Bath & Beyond.  The quarterly gross margin percent from 2007 through Q3 2016 was downloaded and loaded into a data mart leveraged by Oracle Business Intelligence Enterprise Edition (OBIEE) 12c (Q4 2016 data was not available for all companies).  With the exception of Figure 1, the following visualizations were created in OBIEE 12c.

The standard deviation can be thought of as defining ranges that classify individual contributors relative to the average. For instance, the average gross margin percent for the NASDAQ-100 in Q4 2014 was calculated to be 59.9% with a standard deviation of 22.7%.  This can be visualized on a number line as such:

Figure 1 NASDAQ-100 Q4 2014 Gross Margin % Performance Ranges

dont-fear-statistics-part-2-figure-1

Many real-world measurements that have variability follow a predictable distribution pattern. For instance, it is expected that approximately 34.1% of the contributors will fall between the average and one standard deviation in a given direction.  From the figure above, approximately 34 of the NASDAQ-100 companies would be expected to have a gross margin percent between 37.2% and 59.9% (one standard deviation below the average, up to the average).  The actual distribution can be visualized as such:

Figure 2 Distribution of NASDAQ-100 Gross Margin %

dont-fear-statistics-part-2-figure-2

The NASDAQ-100 companies do not perfectly follow the distribution; there is a fatter spread into the Negative and Positive buckets (Two Standard Deviations down and up). Other, more advanced statistical methods can be used to redefine ranges, but are beyond the scope of this post.
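For readers who want to reproduce the banding outside of OBIEE, here is a minimal Python sketch of the bucket logic. The values are purely illustrative (not the actual NASDAQ-100 data), and the one-standard-deviation cut-offs simply mirror the ranges shown in Figure 1:

```python
import pandas as pd

# Illustrative quarterly gross margin % by company (not the actual NASDAQ-100 data)
gm = pd.Series({"AAPL": 0.40, "GOOG": 0.62, "BBBY": 0.35, "REGN": 0.88, "QCOM": 0.58})

mean, std = gm.mean(), gm.std()

def band(value):
    """Bucket a value by how many standard deviations it sits from the mean."""
    z = (value - mean) / std
    if z >= 2:  return "Extremely Positive"
    if z >= 1:  return "Positive"
    if z >= 0:  return "Moderately Positive"
    if z >= -1: return "Moderately Negative"
    if z >= -2: return "Negative"
    return "Extremely Negative"

print(gm.apply(band))
```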

Of course, the distribution visualization above simply confirms statistical theories that were proven over a hundred years ago. The true value of analytics is to take statistical theories and turn them into informative visuals.  One method of visualizing the ranking of companies using these standard deviation ranges in OBIEE 12c is through a Treemap:

Figure 3 NASDAQ-100 Distribution Treemap Visualization

dont-fear-statistics-part-2-figure-3

The size of the box represents the Gross Margin % while the color aligns with the distribution ranking from Figures 1 and 2. This visualization allows the viewer to understand both the rankings and relative performance at a glance.  It is easy to discern the delineation between above and below average (border between yellow and light green) as well as which companies are herding together.

One of the most powerful and essential aspects of business analytics is the ability to dimensionalize data so it can be sliced and diced. One (of many) reasons this is done is to be sure that there is an “apples to apples” comparison.  For instance, comparing the gross margin percent of Qualcomm (QCOM), a semiconductor and telecommunications company, with that of Ross Stores (ROST), a discount department store, can create misleading distributions.  Filtering the visualization in Figure 3 by the NASDAQ industry classifications for Technology companies results in the following Treemap:

Figure 4 NASDAQ-100 Technologies Companies Treemap

dont-fear-statistics-part-2-figure-4

Notice that Qualcomm has slipped from “Moderately Positive” to “Moderately Negative.” Averages and standard deviations can change dramatically when looking at the components of the whole.  To demonstrate this, consider the following visualization comparing the average and deviation spread of the three largest categories (by number of companies) of the NASDAQ-100:

Figure 5 Average and Standard Deviation by Categories

dont-fear-statistics-part-2-figure-5

The border between yellow and light green represents the average while each band represents one standard deviation. Notice that the average gross margin % as well as the standard deviation is higher for Healthcare than for Technology.  Healthcare companies are going to skew the performance perspective of Technology companies.  This skew worsens when comparing against companies classified as Consumer Service.

As a general rule, a single point is not the best indicator of long-term performance. Although the average and standard deviation for a single quarter were calculated from the agglomeration of one hundred companies, they should still be considered a single data point.  Consider the following visualizations that show a comparative trend for four different companies for the entire date range downloaded:

Figure 6 Gross Margin % Trend for Adobe, Amazon, Electronic Arts, and Priceline

dont-fear-statistics-part-2-figure-6

At a glance, viewers can see that Adobe (upper left) consistently beats the average performance while consumer goods and technology giant Amazon (upper right) had been performing below average until recently. Electronic Arts (lower left), a video game developer, seems to have erratic gross margin % returns; however, looking past the noise, the company is nearly always between moderately positive and moderately negative when compared against other NASDAQ-100 companies.  Finally, Priceline (lower right) has been increasing gross margin % consistently and steadily pulling ahead of other NASDAQ-100 companies.  If Priceline’s gross margin % trend continues and the performance of the other companies remains constant, Priceline will move into the “Extremely Positive” gross margin % ranking in Q4 2016 or Q1 2017.

Returning to the questions posed at the beginning of this post:

  • The average cost of a gallon of milk is $3.20 with a standard deviation of $0.08. The corner convenience store selling milk for $3.45 is more than three standard deviations above the average!
  • The average NFL base salary is $1.86 million with a standard deviation of $2.80 million. Comparatively, Marcus Mariota’s $5.50 million salary is just over one standard deviation above average. However, with the average quarterback base salary being $5.69 million with a standard deviation of $7.17 million, he is actually minimally undercompensated.
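The arithmetic behind those statements is just the familiar z-score, the distance of a single value from the average expressed in standard deviations. A quick sketch using the figures from the bullets above:

```python
def z_score(value, mean, std):
    """Distance of a single observation from the mean, in standard deviations."""
    return (value - mean) / std

print(z_score(3.45, 3.20, 0.08))   # milk: ~3.1 standard deviations above the average
print(z_score(5.50, 1.86, 2.80))   # Mariota vs. all NFL players: ~1.3
print(z_score(5.50, 5.69, 7.17))   # Mariota vs. quarterbacks only: slightly below average
```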

For the final question, we ask the reader to evaluate their own enterprise:

  • Calculate the average gross margin percent for your company’s business units for the quarter and find a business unit that is approximately 10 percentage points below that average. Is it truly underperforming? Are you able to properly classify these business units to gain the greatest insight into relative performance?

Average and standard deviation can be applied to any metric by which a company wishes to evaluate itself. It can be used in combination with external data to create industry benchmarks.  For instance, if you were to plot your company’s gross margin % performance against the trends above, how would it look?

We want to close this post with the same idea that closed Part 1 of the “Don’t Fear the Statistics” post: statistical analytics is part science/technology and part art.  Reducing statistical calculations to consumable visualizations is the key.  In the visualizations above, references to “standard deviation” were deliberately omitted in favor of familiar terms such as “Moderately Negative.”  Approaches such as this help with change management, adoption, and the acceleration from simple reporting to true analytical insight into business process improvement based on data.

Don’t Fear the Statistics – Using OBI for Statistical Analysis Part 1

Recently, Ranzal has been working with a client in the healthcare space implementing Oracle Business Intelligence (OBI), and a requirement surfaced to translate a scorecard report into an OBI dashboard. One of the data elements was simply captioned “Trend” and colored red, yellow, and green.  It was discovered that this Trend was the slope of a linear regression plot (more on what that means in a moment) and that the color was based on an arbitrarily chosen number.  This immediately raised some concerns with the Ranzal team, who then made some suggestions for more pertinent statistical analysis.

To set the stage, this healthcare client’s summarized (and greatly simplified) income statement divides Revenue into Inpatient and Outpatient and Expenses into Total Labor and Non Labor. Revenue and expenses are the primary focus of much of the analytics at an aggregate level.  A single (seemingly arbitrary) number was used to determine the colored flags for each of these measures, despite Inpatient Revenue and Non Labor Expenses comprising the majority of the revenue and expense amounts, respectively.  If we plot these categories for the first five months of a fiscal year, we see the following (all data have been altered to preserve client confidentiality without overly affecting the overall analytic output):

figure-1

Figure 1 Revenue and Expense Trend Plot

The trouble with plotting a trend of numbers is that it is sometimes difficult to understand, at a glance, how the organization is performing. In the plots above, clear downward and upward trends can be seen for Inpatient Revenue and Total Labor Expense (respectively).  However, upon closer examination of Outpatient Revenue and Non Labor Expense, there are two upward trending months and two downward trending months.  The overall trend is difficult to discern.

Oracle Business Intelligence Enterprise Edition (OBIEE) 12c introduced a Trendline function that allows the creation of a linear regression trendline. Once this is applied, the above trend plots can be augmented to get a clearer picture of performance:

figure-2

Figure 2 Revenue and Expense Linear Regression

This trendline uses a simple linear regression formula comprised of a slope (commonly represented by the letter m) and an intercept (commonly represented by the letter b) in the following formula:

y = mx + b

In our trend plots, the letter y represents the revenue and expense categories and x represents the fiscal periods.

The intercept is where the trendline crosses the y-axis when x is equal to zero. For most statistical analyses, the intercept is unimportant.  The slope can be thought of as the average change in the measure per fiscal period.  Using OBI, the slope of each revenue and expense category can be calculated and the dashboard updated:

figure-3

Figure 3 Linear Regression Slope

In the example above, the slope of the Inpatient Revenue can be read as a decrease averaging $291,000 a month.
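For readers who want to reproduce the slope outside of OBI, a minimal sketch of the same least-squares fit is shown below. The monthly values are hypothetical, not the client’s data:

```python
import numpy as np

# Hypothetical Inpatient Revenue (in $ thousands) for five fiscal periods
periods = np.array([1, 2, 3, 4, 5])
revenue = np.array([12400, 12050, 11900, 11600, 11250])

# Least-squares fit of y = mx + b; the slope m is the average change per period
m, b = np.polyfit(periods, revenue, 1)
print(f"slope = {m:,.0f} thousand per month")   # about -275 for these illustrative numbers
```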

One issue with using the slope is that it is subjective. As was mentioned, our healthcare client had chosen a single arbitrary slope threshold for each of the revenue and expense categories.  The slopes in the example above range from 29 thousand to -291 thousand.  Complicating matters, the client wanted the ability to run these analyses for individual hospitals, which can dramatically affect the slope.  For instance, a hospital operating in Kansas City will probably not have the same revenue growth (or shrinkage) as a hospital operating in New York City.  To use the slope properly as a quantifiable objective, a target slope would have to be determined for the enterprise and at each granular level expected to be benchmarked (hospital, department, etc.).  This creates some obvious maintenance issues.

A more objective approach is to use the correlation coefficient, a number that ranges from negative one to positive one. A coefficient of positive one indicates a perfect positive correlation, while a coefficient of negative one indicates a perfect negative correlation.  For instance, for most companies, the number of units sold has a high degree of positive correlation to revenue; this corresponds to a correlation coefficient close to one.  For many companies working in the commodities market, the more a competitor’s revenue increases, the lower the possible market share; this is a negative correlation and results in a correlation coefficient close to negative one.  A correlation coefficient of zero indicates a lack of any correlation.  For instance, the number of broken arms set in a New York hospital is probably uncorrelated to the number of bowls of soup served by Panera Bread in Kansas City.
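A minimal sketch of the calculation itself, using illustrative numbers for the units-sold-versus-revenue example above:

```python
import numpy as np

# Illustrative series: units sold and revenue for six periods
units   = np.array([120, 135, 150, 160, 175, 190])
revenue = np.array([2400, 2690, 3010, 3150, 3520, 3800])

r = np.corrcoef(units, revenue)[0, 1]
print(f"correlation coefficient = {r:.2f}")   # close to +1 for this data
```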

It is worth noting that correlation does not mean causation. For example, consider the number of pirate attacks and the number of Microsoft Internet Explorer (IE) users:

figure-4

Figure 4 IE Usage and Pirate Attacks

The number of pirate attacks and IE users have both been in decline since 2009. As can be seen by the scatter graph on the right, the more pirate attacks, the greater the use of IE.  Regardless, naval security experts are probably not asking for adoption rate reports from Microsoft.

Returning to the client’s use case, adding the correlation coefficient to the dashboard provides a greater understanding of how the company is objectively performing:

figure-5

Figure 5 Month and Revenue / Expense Category Figure Correlation

Inpatient Revenue has a correlation of -0.69, which is a moderately strong negative correlation for a metric most businesses want to increase. By comparison, Outpatient Revenue has a slightly negative correlation of -0.36.  While this should be a cause for concern, a “wait and see” approach (or a deeper dive into Outpatient Revenue categories) might be more prudent.  Because the range of the correlation coefficient is always negative one to one, filtering this analysis down to a more granular level, such as a hospital or department, will return an objective number that can be interpreted independently.

There are cases in which the subjectivity of the slope is particularly useful. In the case of our client, a full year budget was prepared at the beginning of the fiscal year and periodically updated as the year progressed. The slope of this budget could be used to generate the average dollar change desired per month.  The advantage of this is that it reduces the possible volatility of a particular month into a single number that can be compared to the benchmark.  As a final addition to the dashboard, a full year budget slope was added:

figure-6

Figure 6 Full Year Budget Slope

With the exception of Non Labor Expenses, this organization is missing the mark on all of their budgetary goals, and the trend indicated by the actual slope and correlation coefficient means this situation is likely to get worse.

A word of warning about statistics in general and the use of slope and correlation coefficient in particular: both micro and macro trends should be considered, and extreme outliers can mask actual trends.

For an example of micro and macro trends, consider JCPenney, a retailer that has been struggling since 2010. The following visualization (created using Oracle Data Visualization Desktop) charts the quarterly revenue from 2004 Q3 to 2016 Q4 along with the trendline for the entire period.  The bars represent the correlation coefficient through that particular quarter (i.e. the first bar is the correlation between 2004 Q3 and 2004 Q4, while the second bar is the correlation across 2004 Q3, 2004 Q4, and 2005 Q1, etc.):

figure-7

Figure 7 JCPenney Revenue Trend and Correlation

Notice that the first correlation bar is equal to one. When there are only two data points, the correlation coefficient will be equal to one, negative one, or zero.  The next data point and correlation for 2005 Q1 (JCPenney recognizes holiday revenue in Q1 of each year) continue the high correlation streak; however, the following quarter drops the correlation down to 0.35.  The correlation fluctuates quarterly until about 2012 Q2, when the definite downward trend is established.

A savvy analyst will break JCPenney’s performance during this time range into three distinct trends: upward trending from 2004 to 2008 Q1, a diminished upward trend from 2008 Q2 to 2012 Q1, and then flat but greatly reduced revenue from there:

figure-8

Figure 8 JCPenney Distinct Trends

As an example of how an extreme outlier can affect statistical analysis, consider GTx Incorporated, a pharmaceutical drug developer. In December 2010, GTx recognized $49.9 million in revenue from a partnership with Merck & Co., Inc., which spiked GTx’s revenue (previously averaging $2 million a quarter) to $56.7 million:

figure-93

Figure 9 GTx Incorporated Revenue Trend

In the visualization above, the orange projected trendline was calculated using revenue from 2004 Q1 through 2009 Q4. The purple trendline is the projection calculated using data through 2010 Q1, which includes the huge revenue spike.  Obviously, the orange trendline is the more accurate due to the exclusion of the extreme data point.

Statistical analytics is part science/technology and part art. As with any data and visualizations, a certain degree of intelligent interpretation is needed to determine what it all really means.  Functional users should be focused on what the various statistical interpretations mean and not be distracted by the complexity of the underlying mathematical functions.  Trend visualizations can aid users in understanding how to interpret these statistical calculations.  Many organizations miss opportunities because individuals are unwilling to embrace statistical methods due to the lack of solid education and guidance about what these numbers really mean.  Training, change management, and the creation of rich visualizations can help enterprises harness the capabilities of statistical analysis and extend the role of their business intelligence systems.

Process Simplification – Migrating from HPCM Standard Profitability to Management Ledger

With the introduction of Hyperion Profitability and Cost Management (HPCM), many organizations have recognized the power of this breakthrough solution for building sophisticated and powerful cost models. HPCM has been in successful use for several years, and in numerous cases its use has been expanded.

Since the initial release of HPCM, Oracle has developed additional variations of HPCM to provide a full suite of capabilities in costing and profitability that can more specifically provide the right tool for the right job (RTRJ). These additional offerings include HPCM-Detailed Profitability and HPCM-Management Ledger, the latter of which is available either in the on premise version (HPCM-ML) or the cloud version – Profitability & Cost Management-Cloud Service (PCMCS).  The original solution of HPCM is now referred to as HPCM-Standard Profitability (HPCM-Standard).

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). That experience suggests that, given the multiple offerings now available, it is worthwhile to evaluate the applicability of the new solutions to an organization’s existing use cases and to consider making a change where appropriate.  In particular, Management Ledger offers enough flexibility and process simplification to warrant considering the conversion of an HPCM-Standard model to HPCM-ML or PCMCS.  This article discusses that process.

Background

Since HPCM’s introduction, it has become clear that there is no one-size-fits-all solution for the full set of needs in cost allocation and profitability. All allocations fundamentally follow the basic formula A = S x F x D / Sum(D), where A is the target allocated amount, S is the source amount, F is the factor (the percent of the source amount to be allocated, often 100%), D is the driver quantity for a given target, and Sum(D) is the sum of driver quantities across all targets.
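In code, that formula is a one-liner. The sketch below is purely illustrative (the department names and amounts are hypothetical, and HPCM itself performs this calculation inside its engine):

```python
def allocate(source_amount, factor, driver_quantities):
    """A = S x F x D / Sum(D): spread a source amount across targets
    in proportion to each target's driver quantity."""
    total = sum(driver_quantities.values())
    return {target: source_amount * factor * qty / total
            for target, qty in driver_quantities.items()}

# Example: allocate 100% of $500,000 of IT cost to departments by headcount
print(allocate(500_000, 1.0, {"Sales": 40, "Operations": 100, "Finance": 60}))
# {'Sales': 100000.0, 'Operations': 250000.0, 'Finance': 150000.0}
```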

However, this fundamental formula is where similarities end and distinctions begin. The original solution, HPCM-Standard, is well suited for cases where highly complex allocation models are utilized.  It is also well positioned where adherence to a highly-structured framework is sought, and it provides capability for highly detailed graphical tracing of allocations in the user interface.

Alternatively, Detailed Profitability, which can be deemed the “heavy-lifter” of the offerings, requires that users define relatively simple allocation rules through a single allocation stage. However, in exchange for this concession, the solution can apply those rules across a wide range of dimensions and is able to do so at a very granular level of detail.  Also referred to as “Microcosting,” this solution leverages source pools and rates applied to a high volume of transactions or near-transactions.  Firms within industries such as consumer goods, transportation and distribution, retail banking, and healthcare are among those that may want to leverage this capability.  This solution enables capture of variation in cost at the shipment, order, transaction, or encounter level of detail, and then aggregates those values to higher levels such as product, service, or customer for analysis.

The third offering, Management Ledger, combines aspects of both of the other two solutions, such as some of the metadata granularity of Detailed Profitability, along with the logic complexity of Standard. This enables users to define custom models with fewer restrictions on the framework and fewer limits on the level of detail required for reporting.  Management Ledger is also flexible to accommodate future changes through its Rule Set/Rule sequencing construct.  Subsequent allocation logic changes can be of a substantial nature, potentially up to a near redesign.  Also, the rules building process itself is simplified in Management Ledger and it is one that aligns well with the intuition of finance users.  Further, Oracle’s current strategic direction is with Management Ledger, most notably seen in the recent release of PCMCS.

What is the benefit of conversion?

Management Ledger offers several key capabilities that can improve, streamline, or otherwise address existing challenges in a Standard Profitability environment.

  1. Management Ledger does not rely on a back-end staging table paradigm for data loading as does HPCM-Standard. Such reliance requires the availability of resources with the database skills required to support SQL interfaces to automate model processes, as well as to perform maintenance when metadata updates are made. For some user sites, the availability of these skills is limited.
  2. Management Ledger is an ASO (Aggregate Storage Option) application. It is not subject to the metadata restriction faced when deploying the HPCM-Standard calculation cube, which is BSO (Block Storage Option) and is subject to reaching the maximum number of potential blocks due to metadata duplication. Since Management Ledger does not duplicate the dimensions, it makes reporting easier for end-users and can eliminate the need for the “simplified” HPCM reporting application that is often created in an implementation of HPCM-Standard.
  3. Management Ledger does not require the use of pre-defined stages and an associated limit of three dimensions per stage as utilized in HPCM-Standard. This framework drove design decisions and influences future changes at certain user sites.
  4. Management Ledger is flexible to accommodate new methods of allocating data. The presence of a dimension in an application allows for its selection and filtering without the need for re-design.
  5. Management Ledger provides an interface that can be quickly learned by business users. Its set-up and maintenance simply requires the identification of sources, destinations, and the driver bases of allocations. Because it does not rely upon or require use of any specific methodology, existing Planning and HFM users can quickly learn the navigation and logic of Management Ledger. As shown below, the process for rules building in Management Ledger is straightforward.
    1-16-1

    Management Ledger Rules Building Interface

    1-16-2

  6. Management Ledger offers a multitude of standard reports for model documentation, rules validation, rule balance summaries of the results, and graphic traceability. PCMCS adds Business Intelligence visualizations such as scatter plots, cumulative profitability “whale curves,” and KPIs.

 

PCMCS Visuals

1-16-31-16-41-16-5

With all of these potential benefits, there are also offsetting considerations. Management Ledger may require more maintenance than HPCM-Standard due to a higher number of allocation rules, which is required in order to enable parallel processing.  Further, the built-in graphical traceability screen in Management Ledger may be considered by some as less intuitive than the screen provided with HPCM-Standard.  Therefore, even where Management Ledger is seen as a useful fit, the advantages over Standard Profitability will not always be sufficient justification to undertake the time and effort of a conversion.

What are the criteria for undertaking a Management Ledger Conversion?

To help evaluate whether it is worthwhile to pursue migrating a Standard Profitability model to Management Ledger, the following questions can be asked:

  1. Is there a major re-organization pending that is prompting a re-evaluation of the overall stages framework?
  2. Will there be future changes in which new allocation processes are added, such as moving beyond organizational allocations to ones that include other dimensions such as product or customer?
  3. Do changes in allocation methodologies occur often? Will business users be required to make these updates/changes without the support of IT staff?
  4. Are new scenarios such as What-If or Ad-Hoc planned and is there an interest in testing different allocation methodologies versus the existing live production models?
  5. Are the theoretical limits associated with the Block Storage Option (BSO) calculation cube being approached?
  6. Is the process for updating the Standard Profitability staging tables considered to be time consuming and/or is the automation for populating the staging tables viewed as complex or poorly understood?
  7. Are there currently other Management Ledger models in the organization, and is there a need or desire to standardize on a common platform?
  8. Is there an objective to move applications to the Cloud?

 

What are the steps to migrate?

If the answer to any of the above questions is yes, then there is a potential opportunity to convert a Standard Profitability model to Management Ledger. In such a case, a prototype to test the concept should be created.  This prototype should be loaded with a sample of data and rules, typically for at least one POV, and calculated and validated.  Though each situation will have unique requirements, the overall steps are as follows:

Prototype Build -> Rules Creation -> Testing -> Validation -> Adjustment -> Migration

General Steps to Migrating to Management Ledger

  1. Migrate the Standard model to the same environment where the Management Ledger test will be built.
  2. Run a calculation of the Standard model to obtain a benchmark performance time.
  3. Create a new cube and database. A new Master application should be created and the dimensionality copied from the existing Standard Profitability Master application rather than from the calculation cube, in order to avoid duplicate dimensions.
  4. Copy the dimensions from the old cube to the new cube and make cube outline updates:
    • Change the NoMember dimension member in each dimension to NoDimensionName.
    • Determine the dimension for the Drivers, usually the DataType or Account dimension.
    • Add the drivers from the Measures dimension to the Account or a DataType dimension.
    • Delete Measures and AllocationType dimensions (used with Standard model).
    • Add the Rule and Balance dimensions (used with Management Ledger models).
    • Add UDAs for potential rule filtering requirements.
    • Should both Source and Target allocation details be required for reporting, dimensions may need to be duplicated or split, such as in a case with Initial Cost Pool and Final Cost Pool.
  5. Create a new Management Ledger Profitability application that references the new cube.
  6. Deploy the Management Ledger Essbase Calculation engine.
  7. Choose and create a single POV to start.
  8. Import data from the existing cube to the new one utilizing the various methods available such as free form loading without rules, structured loading with rules, spreadsheet add-ins such as SmartView or other tools such as FDM/FDMEE. Note: For PCMCS, flat files of dimensions and data are employed.
  9. Document the allocation rules in a template.
  10. Enter the allocation rules through the ML user interface.
  11. Run Model Validation to check the new Rule Sets and Rules for errors before calculating.
  12. Launch a calculation. Start with running a single rule.
  13. Validate the Results. Progressively select more rules for successive calculation as rules are validated.
  14. Adjust methods iteratively.
  15. Create and update a report to demonstrate the validations to end-users as well as how the results are consumed.
  16. Migrate once validation is complete, including acceptance of both the result values and the processing times.

 

Some thoughts on building allocation rules

Once a Management Ledger outline exists, the allocation rules from Standard should be constructed through the user interface. There should be an association between the Stages in a Standard model and the Rule Sets in Management Ledger.  As a starting point, the Rule Set sequence flow should match the stages, though it may prove necessary to break the stages into multiple rule sets.

1-16.6.png

Once the rule sets are determined, the rules themselves should be documented in a template (Excel, Word, etc.) that is easy to manage and understand. The example that follows shows the dimensionality of the Source, Destination, Driver Basis, and Source Offset.
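As one hypothetical illustration of such a template (the rule names, dimensions, and members below are invented for the example, not taken from any particular model), a single documented rule might capture:

```python
# One row of a hypothetical rules-documentation template; every name below is
# illustrative only, not actual Management Ledger metadata.
rule_template = [{
    "Rule Set":      "Occupancy Allocations",
    "Rule":          "Allocate Facilities Cost",
    "Source":        {"Entity": "Facilities", "Account": "Total Occupancy Expense"},
    "Destination":   {"Entity": "All Operating Departments"},
    "Driver Basis":  "Square Footage",
    "Source Offset": {"Account": "Occupancy Allocation Out"},
}]
```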

This template becomes part of the documentation of the prototype. Upon completion of the template, a user should build the rule sets and rules in the Management Ledger interface.  One of the key benefits of Management Ledger is the ability to reference parent-level values in the assignment rules.  This provides the ability to create many-to-many source-destination associations with few keystrokes.  This not only saves time in initial set-up, but also makes the entire process data driven, such that when new dimension members such as new accounts, cost centers, products, or customers are added, the allocation rules automatically accommodate them without the need for editing or updating.  The ability to select at the parent level also reduces the need for automation routines of the types that are frequently created in Standard Profitability implementations, such as those used to update staging tables (Management Ledger does not have staging tables).

Users should start by referencing the highest-level parents to make the process as automated as possible. If performance becomes an issue, it may be necessary to reference mid- or lower-level parents.  Rules should be tested iteratively, i.e. run individually and then in groups, to validate the answers and to track processing time.

If calculation times exceed requirements or expectations, then start moving references to lower level parents. Avoid going to children as that will increase maintenance in the future.

Validation Concepts

Use the Rule Balancing Report to validate the cost flow and confirm that allocations in and out match expectations. Users should also generate a set of SmartView queries from the control HPCM-Standard model and compare those to a set of SmartView queries from the HPCM-ML prototype.  Input and Stage amounts from HPCM-Standard should tie to Rule Set amounts in HPCM-ML, including checks that rule sets are using drivers correctly.  Calculation time and performance should also be tracked and benchmarked.

1-16-7

Conclusion

The advent of HPCM Management Ledger in both the on premise and cloud-based versions provides organizations with an opportunity to consider their existing solution and whether a migration to Management Ledger is warranted. Multiple considerations must be evaluated in this decision, and a prototype-based assessment is recommended as part of the process.  Edgewater Ranzal provides an Assessment service offering to assist organizations with this evaluation, as well as a subsequent implementation.  With over twenty experienced full-time consultants across the Americas and EMEA, and with more than twenty-five successful HPCM projects delivered since 2009, Edgewater Ranzal is the leading Oracle partner in delivering all versions of HPCM. Its comprehensive multi-product delivery approach can incorporate other tools such as Planning, DRM, FDMEE, & OBIEE.  These qualifications, along with its close relationship with Oracle Development, make Edgewater Ranzal the premier partner for client success.

 

Accelerate Your Ride to the Cloud: Extending ERP with Oracle Profitability & Cost Management Cloud Service (PCMCS) for Standard Cost Rate Development

A common need among manufacturing organizations is improvement in the process of developing annual labor and overhead standards to use as input into standard cost rates for product cost and inventory valuation. In spite of the investments that have been made in ERP solutions, determining the updated direct labor rate and overhead rate components of a product standard cost for an upcoming fiscal year typically requires an offline, Excel-based exercise using historical data taken from the ERP.  The release of Oracle Profitability and Cost Management Cloud Service (PCMCS) in October 2016 provides a unique opportunity for manufacturers to ease, streamline, and document the process of generating the cost-per-direct-labor-hour or cost-per-machine-hour rates that are requisite in standard costing.

Background

Generally accepted accounting principles (GAAP) allow for one of multiple methods for the valuation of inventory to a manufacturer: Last-In, First-Out (LIFO); First-In, First-Out (FIFO); or a Weighted Average.

Because prices for labor and materials fluctuate throughout a year and inventory is built or drawn, it is difficult to track inventory on an on-going basis using these methods. Further, from a management perspective, it is more meaningful to separate the effects of price changes and inventory builds/draws from values associated with normal business.  Pricing decisions, incentive compensation and matching expenses to the physical flow of goods would all be adversely impacted by trying to constantly manage to these methods.

A common approach to achieve meaningful inventory and cost of goods sold values is to establish a “standard cost” for every product and then adjust the value of inventory on a separate line at year-end, to bring it to the GAAP basis.

This standard cost requires direct labor, direct material, and the inclusion of an amount representing the “absorption” of certain plant-related overhead costs into the inventory value.

There are two forms of overhead that must be included in the inventory value from a GAAP perspective: 1) Labor overhead and 2) Manufacturing overhead, sometimes called Indirect Overhead.

  1. Labor overhead represents the costs of direct labor resources above and beyond their direct hourly wage rate. This amount includes payroll taxes, retirement and health care benefits, workers’ compensation, life insurance and other fringe benefits.
  2. Manufacturing overhead includes a grouping of costs that are related to the sustainment of the manufacturing process, but are not directly consumed or incurred with each unit of production. Examples of these costs include:
  • Materials handling
  • Equipment Set-up
  • Inspection and Quality Assurance
  • Production Equipment Maintenance and Repair
  • Depreciation on manufacturing equipment and facilities
  • Insurance and property taxes on manufacturing facilities
  • Utilities such as electricity, natural gas, water, and sewer required for operating the manufacturing facilities
  • The factory management team

The most common first step for determining the value of overheads in inventory is to use a predetermined rate that represents a cost charge per direct labor hour or cost per machine hour. From product bills of material and routings, the total number of hours of labor or machine usage for a unit volume of production is known. Multiplying the overhead cost rate per direct labor hour (or machine hour) by the number of hours required per unit of production yields the overhead cost per unit. In the example below, the ERP will calculate the cost per work center, but it is reliant on the Direct Labor and Overhead Rates to complete this process.

dp-image-1jpg

The challenge comes when calculating the applicable predetermined overhead rate per direct labor hour or machine hour for each applicable cost center or work center. PCMCS can assist with automating and updating this process.
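A minimal worked example of that arithmetic, with hypothetical figures for a single work center (the cost pool, hours, and routing values are illustrative only):

```python
# Hypothetical work center figures (illustrative only)
overhead_cost_pool = 1_200_000   # annual overhead assigned to the work center ($)
direct_labor_hours = 60_000      # planned annual direct labor hours

overhead_rate = overhead_cost_pool / direct_labor_hours   # $20.00 per direct labor hour

hours_per_unit = 0.5             # direct labor hours per unit, from the routing
overhead_per_unit = overhead_rate * hours_per_unit        # $10.00 of overhead per unit
print(overhead_rate, overhead_per_unit)
```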

A Better Solution: The Ranzal PCMCS Standard Cost Solution

PCMCS provides the ability to quickly and flexibly put the creation of multi-step allocation processes into the hands of business users. It also provides for the management of hierarchies without the need for external dimension management applications as well as standard file templates for data upload.  Further, a series of standard dashboard and report visuals augment the viewing and monitoring of results.  These capabilities allow organizations to quickly load and allocate expenses to applicable overhead cost pools and then merge those cost pools with applicable labor or machine hour values to obtain the relevant overhead rates.

PCMCS allows users to quickly select the cost centers or work centers that are applicable as sources to be included in the overhead rate:

dp-image-2jpg

Users then can easily select the targets for collecting these costs into relevant pools,

dp-image-3

as well as the operational metric to use to assign these overhead costs to their applicable pools.

dp-image-4


dp-image-5

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). Following the release of PCMCS, Ranzal will be announcing a Cloud service offering that will leverage the power of the Cloud to provide an accelerated method of producing the required inputs for overhead allocation in standard costing.

More than just Standard Costing

Additionally, while PCMCS provides an excellent way to develop overhead rates for standard costing, it can simultaneously be utilized to determine allocations and costing valuations that leverage other methodologies for product and customer costing and profitability. Much has been written about the potential for inaccuracies if the standard cost basis of overhead allocation in product costing were to be used universally or exclusively for management analysis.  Overhead has become such a large portion of the total cost that, in many cases, overhead rates can be three or four times higher than their respective direct labor rates.  This suggests a general lack of causality between overhead and direct labor hours in many cases, and it has led to the evolution of other methods for costing.  Activity Based Costing is one such example, while simply allocating manufacturing variances to product lines is another.

PCMCS can be used to meet the requirements for both the externally reported methods and the management methods of product costing.

All of the Results in One Place

Determining how overhead should be captured in the cost of the different products in inventory is an important process because it moves a large amount of dollars from expense to asset, usually temporarily but sometimes permanently, and this can impact profitability and share price.

For the purpose of valuing inventory for statutory reporting, the overhead rate method is considered acceptable and is widely used. It is therefore important that organizations find a way to develop and manage these cost valuations in a manner that is well documented, has a transparent methodology, and reduces the amount of time spent on the process.  However, it is not the only method that should be used for considering overhead in product and customer costing and profitability analysis.  Further, selling, general and administrative expense (SG&A) represents another layer of cost that, while not part of standard inventory cost, should be considered in overall product costs from a management perspective.

To this end, the Edgewater Ranzal PCMCS Standard Cost solution will provide an opportunity to fulfill multiple needs in costing and profitability and will do so in a manner that will be faster and more user-friendly than what has previously been experienced.

Oracle Business Intelligence Cloud Service (BICS) September Update

The latest upgrade for BICS happened last week and, while there are no new end-user features, it is now easier to integrate data. New to this version is the ability to connect to JDBC data sources through the Data Sync tool.  This allows customers to set up automated data pulls from Salesforce, Redshift, and Hive, among others.  In addition to these connections, Oracle RightNow CRM customers have the ability to pull directly from RightNow reports using Oracle Data Sync.  Finally, connections between on-premise databases and BICS can be secured using Secure Sockets Layer (SSL) certificates.

After developing a custom script using API calls to pull data from Salesforce, I am excited about the ability to connect directly to Salesforce with Data Sync. A direct connection to the Salesforce database allows you to search and browse for relevant tables and import the definitions with ease:

blog

Once the definitions have been imported, standard querying clauses can be used to include only relevant data, perform incremental ETLs, and further manipulate the data.

While there are no new features for end users, this is a powerful update when it comes to data integration. Using APIs to extract data from Salesforce meant that each extraction query had to be written by hand, which was time consuming and prone to error.  With these new data extraction processes, BICS implementations and data integration become much faster, furthering the promise of Oracle Cloud technologies.