Process Simplification – Migrating from HPCM Standard Profitability to Management Ledger

With the introduction of Hyperion Profitability and Cost Management (HPCM), many organizations recognized the power of the solution to build sophisticated cost models. HPCM has now been in successful use for several years, and in numerous cases its use has expanded.

Since the initial release of HPCM, Oracle has developed additional variations of the product to provide a full suite of costing and profitability capabilities, each aimed at providing the right tool for the right job. These additional offerings include HPCM-Detailed Profitability and HPCM-Management Ledger, the latter of which is available either on-premises (HPCM-ML) or in the cloud as Profitability and Cost Management Cloud Service (PCMCS). The original solution is now referred to as HPCM-Standard Profitability (HPCM-Standard).

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). That experience suggests that, with multiple offerings now available, it is worthwhile to evaluate how well the newer solutions fit an organization's existing use cases and to consider a change where appropriate. In particular, Management Ledger offers enough flexibility and process simplification to warrant considering the conversion of an HPCM-Standard model to HPCM-ML or PCMCS. This article discusses that process.

Background

Since HPCM's introduction, it has become clear that there is no one-size-fits-all solution for cost allocation and profitability needs. All allocations fundamentally follow the same basic formula, A = S x F x D / Sum(D), where A is the allocated amount on the target, S is the source amount, F is the factor (the percentage of the source amount to be allocated, often 100%), D is the driver quantity of the target, and Sum(D) is the sum of driver quantities across all targets.
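To make the formula concrete, the following SQL sketch spreads a single source amount across targets in proportion to their driver quantities. The SOURCE_POOL and TARGET_DRIVERS tables and their columns are hypothetical, used only to illustrate the arithmetic, not any actual HPCM structure:

  -- Illustrative only: A = S x F x D / Sum(D) for one source pool
  -- SOURCE_POOL holds one row with the source amount (S) and factor (F, as a fraction);
  -- TARGET_DRIVERS holds one row per target with its driver quantity (D).
  SELECT t.target_id,
         s.source_amount * s.factor_pct
           * t.driver_qty / SUM(t.driver_qty) OVER () AS allocated_amount
  FROM   source_pool s
  CROSS JOIN target_drivers t;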

However, this fundamental formula is where the similarities end and the distinctions begin. The original solution, HPCM-Standard, is well suited to highly complex allocation models. It is also a good fit where adherence to a highly structured framework is sought, and it provides detailed graphical tracing of allocations in the user interface.

Alternatively, Detailed Profitability, which can be deemed as the “heavy-lifter” of the offerings, requires that users define relatively simple allocation rules through a single allocation stage. However, in exchange for this concession, the solution can apply those rules across a wide range of dimensions and is able to do so at a very granular level of detail.  Also referred to as “Microcosting,” this solution leverages source pools and rates applied to a high volume of transactions or near-transactions.  Firms within industries such as consumer goods, transportation and distribution, retail banking, and healthcare are among those that may want to leverage this capability.  This solution enables capture of variation in cost at the shipment, order, transaction, or encounter level of detail, and then aggregates those values to higher levels such as product, service, or customer for analysis.

The third offering, Management Ledger, combines aspects of the other two solutions: some of the metadata granularity of Detailed Profitability along with the logic complexity of Standard. This enables users to define custom models with fewer restrictions on the framework and fewer limits on the level of detail available for reporting. Management Ledger is also flexible enough to accommodate future changes through its Rule Set/Rule sequencing construct; subsequent allocation logic changes can be substantial, up to a near redesign. In addition, the rule-building process itself is simplified in Management Ledger and aligns well with the intuition of finance users. Finally, Oracle's current strategic direction is toward Management Ledger, most notably seen in the recent release of PCMCS.

What is the benefit of conversion?

Management Ledger offers several key capabilities that can improve, streamline, or otherwise address existing challenges in a Standard Profitability environment.

  1. Management Ledger does not rely on a back-end staging table paradigm for data loading as HPCM-Standard does. That reliance demands resources with the database skills needed to support SQL interfaces that automate model processes and to perform maintenance when metadata updates are made. At some user sites, the availability of these skills is limited.
  2. Management Ledger is an ASO application. It is not subject to the metadata restriction faced when deploying the HPCM-Standard calculation cube, which is BSO and can approach the maximum number of potential blocks because of metadata duplication. Since Management Ledger does not duplicate dimensions, reporting is easier for end users, and it can eliminate the need for the “simplified” HPCM reporting application that is often created in an HPCM-Standard implementation.
  3. Management Ledger does not require the use of pre-defined stages with the associated limit of three dimensions per stage as in HPCM-Standard. That framework has driven design decisions and constrains future changes at certain user sites.
  4. Management Ledger is flexible to accommodate new methods of allocating data. The presence of a dimension in an application allows for its selection and filtering without the need for re-design.
  5. Management Ledger provides an interface that can be quickly learned by business users. Set-up and maintenance simply require identifying the sources, destinations, and driver bases of allocations. Because it does not rely on any specific methodology, existing Planning and HFM users can quickly learn the navigation and logic of Management Ledger. As shown below, the process for building rules in Management Ledger is straightforward.
    [Figure: Management Ledger rules building process]

    [Figure: Management Ledger Rules Building Interface]

  6. Management Ledger offers a multitude of standard reports for model documentation, rules validation, rule balance summaries of the results, and graphic traceability. PCMCS adds Business Intelligence visualizations such as scatter plots, cumulative profitability “whale curves,” and KPIs.

 

PCMCS Visuals


Against these potential benefits, there are also offsetting considerations. Management Ledger may require more maintenance than HPCM-Standard because a larger number of allocation rules is needed to enable parallel processing. Further, the built-in graphical traceability screen in Management Ledger may be considered by some to be less intuitive than the screen provided with HPCM-Standard. Therefore, even where Management Ledger is a useful fit, the advantages over Standard Profitability will not always be sufficient to justify the time and effort of a conversion.

What are the criteria for undertaking a Management Ledger Conversion?

To help evaluate whether it is worthwhile to pursue migrating a Standard Profitability model to Management Ledger, the following questions can be asked:

  1. Is there a major re-organization pending that is prompting a re-evaluation of the overall stages framework?
  2. Will there be future changes in which new allocation processes are added, such as moving beyond organizational allocations to ones that include other dimensions such as product or customer?
  3. Do changes in allocation methodologies occur often? Will business users be required to make these updates without the support of IT staff?
  4. Are new scenarios such as What-If or Ad-Hoc planned and is there an interest in testing different allocation methodologies versus the existing live production models?
  5. Are the theoretical limits associated with the Block Storage Option (BSO) outline being approached?
  6. Is the process for updating the Standard Profitability staging tables considered to be time consuming and/or is the automation for populating the staging tables viewed as complex or poorly understood?
  7. Are there currently other Management Ledger models in the organization, and is there a need or desire to standardize on a common platform?
  8. Is there an objective to move applications to the Cloud?

 

What are the steps to migrate?

If the answer to any of the above questions is yes, then there is a potential opportunity to convert a Standard Profitability model to Management Ledger. In such a case, a prototype to test the concept should be created.  This prototype should be loaded with a sample of data and rules, typically for at least one POV, and calculated and validated.  Though each situation will have unique requirements, the overall steps are as follows:

Prototype Build -> Rules Creation -> Testing -> Validation -> Adjustment -> Migration

General Steps to Migrating to Management Ledger

  1. Migrate the Standard model to the same environment where the Management Ledger test will be built.
  2. Run a calculation of the Standard model to obtain a benchmark performance time.
  3. Create a new cube and database. Create a new Master application and copy the dimensionality from the existing Standard Profitability Master application, rather than from the calculation cube, so that duplicated dimensions are not carried over.
  4. Copy the dimensions from the old to the new cube. Make Cube Outline Updates.
    • Change the NoMember dimension member in each dimension to NoDimensionName.
    • Determine the dimension for the Drivers, usually the DataType or Account dimension.
    • Add the drivers from the Measures dimension to the Account or a DataType dimension.
    • Delete Measures and AllocationType dimensions (used with Standard model).
    • Add the Rule and Balance dimensions (used with Management Ledger models).
    • Add UDAs for potential rule filtering requirements.
    • Should both Source and Target allocation details be required for reporting, dimensions may need to be duplicated or split, such as in a case with Initial Cost Pool and Final Cost Pool.
  5. Create a new Management Ledger Profitability application that references the new cube.
  6. Deploy the Management Ledger Essbase Calculation engine.
  7. Choose and create a single POV to start.
  8. Import data from the existing cube to the new one utilizing the various methods available such as free form loading without rules, structured loading with rules, spreadsheet add-ins such as SmartView or other tools such as FDM/FDMEE. Note: For PCMCS, flat files of dimensions and data are employed.
  9. Document the allocation rules in a template.
  10. Enter the allocation rules through the ML user interface.
  11. Run Model Validation to check the new Rule Sets and Rules for errors before calculating.
  12. Launch a calculation. Start with running a single rule.
  13. Validate the Results. Progressively select more rules for successive calculation as rules are validated.
  14. Adjust methods iteratively.
  15. Create and update a report to demonstrate the validations to end-users as well as how the results are consumed.
  16. Migrate, once validation is complete including acceptability of both the results values and the processing times.

 

Some thoughts on building allocation rules

With a Management Ledger outline in place, the allocation rules from the Standard model should be constructed through the user interface. There should be a clear association between the Stages in a Standard model and the Rule Sets in Management Ledger. As a starting point, the Rule Set sequence should match the flow of the stages, though it may prove necessary to break some stages into multiple rule sets.


Once the rule sets are determined, the rules themselves should be documented in a template (Excel, Word, etc.) that is easy to manage and understand. The example that follows shows the dimensionality of the Source, Destination, Driver Basis, and Source Offset.

This template becomes part of the documentation of the prototype. Upon completion of the template, a user should build the rule sets and rules in the Management Ledger interface. One of the key benefits of Management Ledger is the ability to reference parent-level values in the assignment rules. This provides the ability to create many-to-many source-destination associations with few keystrokes. This not only saves time in initial set-up, but also makes the entire process data driven: when new dimension members such as accounts, cost centers, products, or customers are added, the allocation rules automatically accommodate them without editing or updating. The ability to select at the parent level also reduces the need for the automation routines frequently created in Standard Profitability implementations, such as those used to update staging tables (Management Ledger has no staging tables).

Users should start by referencing the highest-level parents to make the process as automated as possible. If performance becomes an issue, it may be necessary to reference mid- or lower-level parents. Rules should be tested iteratively, i.e. run individually and then in groups, to validate the answers and to track processing time.

If calculation times exceed requirements or expectations, then start moving references to lower level parents. Avoid going to children as that will increase maintenance in the future.

Validation Concepts

Use the Rule Balancing Report to validate the cost flow and confirm that allocations in and out match expectations. Users should also generate a set of SmartView queries from the control HPCM-Standard Model and compare those to a set of SmartView queries from the HPCM-ML prototype.  Input and Stage amounts from HPCM-Standard should compare to Rule Set amounts in HPCM-ML, including checks that rule sets are using drivers correctly.  Calculation time and performance should also be tracked and benchmarked.


Conclusion

The advent of HPCM Management Ledger in both on-premises and cloud-based versions gives organizations an opportunity to reconsider their existing solution and whether a migration to Management Ledger is warranted. Multiple considerations must be evaluated in this decision, and a prototype-based assessment is recommended as part of the process. Edgewater Ranzal provides an Assessment service offering to assist organizations with this evaluation, as well as with a subsequent implementation. With over twenty experienced full-time consultants across the Americas and EMEA, and more than twenty-five successful HPCM projects delivered since 2009, Edgewater Ranzal is the leading Oracle partner in delivering all versions of HPCM. Its comprehensive multi-product delivery approach can incorporate other tools such as Planning, DRM, FDMEE, and OBIEE. These qualifications, along with its close relationship with Oracle Development, make Edgewater Ranzal the premier partner for client success.

 

Accelerate Your Ride to the Cloud: Extending ERP with Oracle Profitability & Cost Management Cloud Service (PCMCS) for Standard Cost Rate Development

A common need among manufacturing organizations is to improve the process of developing annual labor and overhead standards used as inputs to standard cost rates for product cost and inventory valuation. In spite of the investments made in ERP solutions, an offline Excel-based exercise is typically required to take historical data from the ERP and determine the updated direct labor rate and overhead rate components of a product standard cost for an upcoming fiscal year. The release of Oracle Profitability and Cost Management Cloud Service (PCMCS) in October 2016 gives manufacturers a unique opportunity to ease, streamline, and document the process of generating the cost-per-direct-labor-hour or cost-per-machine-hour rates required in standard costing.

Background

Generally accepted accounting principles (GAAP) allow a manufacturer to value inventory using one of several methods: Last-In, First-Out (LIFO); First-In, First-Out (FIFO); or Weighted Average.

Because prices for labor and materials fluctuate throughout a year and inventory is built or drawn, it is difficult to track inventory on an on-going basis using these methods. Further, from a management perspective, it is more meaningful to separate the effects of price changes and inventory builds/draws from values associated with normal business.  Pricing decisions, incentive compensation and matching expenses to the physical flow of goods would all be adversely impacted by trying to constantly manage to these methods.

A common approach to achieve meaningful inventory and cost of goods sold values is to establish a “standard cost” for every product and then adjust the value of inventory on a separate line at year-end, to bring it to the GAAP basis.

This standard cost comprises direct labor, direct material, and an amount representing the “absorption” of certain plant-related overhead costs into the inventory value.

There are two forms of overhead that must be included in the inventory value from a GAAP perspective: 1) Labor overhead and 2) Manufacturing overhead, sometimes called Indirect Overhead.

  1. Labor overhead represents the costs of direct labor resources above and beyond their direct hourly wage rate. This amount includes payroll taxes, retirement and health care benefits, workers’ compensation, life insurance and other fringe benefits.
  2. Manufacturing overhead includes a grouping of costs that are related to the sustainment of the manufacturing process, but are not directly consumed or incurred with each unit of production. Examples of these costs include:
  • Materials handling
  • Equipment Set-up
  • Inspection and Quality Assurance
  • Production Equipment Maintenance and Repair
  • Depreciation on manufacturing equipment and facilities
  • Insurance and property taxes on manufacturing facilities
  • Utilities such as electricity, natural gas, water, and sewer required for operating the manufacturing facilities
  • The factory management team

The most common first step for determining the value of overheads in inventory is to use a predetermined rate representing a cost charge per direct labor hour or per machine hour. From product bills of material and routings, the total number of hours of labor or machine usage for a unit volume of production is known. Multiplying the overhead cost rate per direct labor hour (or machine hour) by the number of hours required per unit of production yields the overhead cost per unit. In the example below, the ERP calculates the cost per work center, but it relies on the Direct Labor and Overhead Rates to complete this process.

[Figure: ERP calculation of cost per work center, which requires the Direct Labor and Overhead Rates as inputs]

The challenge comes in calculating the predetermined overhead rate per direct labor hour or machine hour for each applicable cost center or work center. PCMCS can assist with automating and updating this process.
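As a simple illustration of the arithmetic involved (the table and column names below are hypothetical and not part of any ERP or PCMCS schema), the predetermined rate for a work center is just its collected overhead cost pool divided by its budgeted direct labor (or machine) hours:

  -- Hypothetical sketch: predetermined overhead rate per direct labor hour
  -- OVERHEAD_POOL: one row per work center with its collected overhead cost
  -- LABOR_HOURS:   one row per work center with its budgeted direct labor hours
  SELECT o.work_center_id,
         o.overhead_cost / NULLIF(h.direct_labor_hours, 0) AS overhead_rate_per_dlh
  FROM   overhead_pool o
  JOIN   labor_hours   h ON h.work_center_id = o.work_center_id;

PCMCS performs the equivalent collection and division through its allocation rules rather than hand-written SQL; the sketch is only meant to show what the rate represents.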

A Better Solution: The Ranzal PCMCS Standard Cost Solution

PCMCS provides the ability to quickly and flexibly put the creation of multi-step allocation processes into the hands of business users. It also provides for the management of hierarchies without the need for external dimension management applications as well as standard file templates for data upload.  Further, a series of standard dashboard and report visuals augment the viewing and monitoring of results.  These capabilities allow organizations to quickly load and allocate expenses to applicable overhead cost pools and then merge those cost pools with applicable labor or machine hour values to obtain the relevant overhead rates.

PCMCS allows users to quickly select the cost centers or work centers that are applicable as sources to be included in the overhead rate:

[Screenshot: selecting source cost centers or work centers in PCMCS]

Users then can easily select the targets for collecting these costs into relevant pools, as well as the operational metric used to assign these overhead costs to their applicable pools.

[Screenshots: selecting target cost pools and the assignment driver metric in PCMCS]

Edgewater Ranzal is the leading implementation services provider of Oracle and Hyperion EPM solutions and has extensive experience with Hyperion Profitability and Cost Management (HPCM). Following the release of PCMCS, Ranzal will be announcing a Cloud service offering that leverages the power of the Cloud to provide an accelerated method of producing the required inputs for overhead allocation in standard costing.

More than just Standard Costing

Additionally, while PCMCS provides an excellent way to develop overhead rates for standard costing, it can simultaneously be used to determine allocations and costing valuations that leverage other methodologies for product and customer costing and profitability. Much has been written about the potential for inaccuracies if the standard cost basis of overhead allocation in product costing were used universally or exclusively for management analysis. Overhead has become such a large portion of total cost that, in many cases, overhead rates can be three or four times higher than their respective direct labor rates. This suggests a general lack of causality between overhead and direct labor hours, and it has led to the evolution of other costing methods. Activity Based Costing is one such example; simply allocating manufacturing variances to product lines is another.

PCMCS can be used to meet the requirements for both the externally reported methods and the management methods of product costing.

All of the Results in One Place

Determining how overhead is captured in the cost of the different products in inventory is an important process: it moves a large amount of cost from expense to asset, usually temporarily but sometimes permanently, and this can affect reported profitability and share price.

For the purpose of valuing inventory for statutory reporting, the overhead rate method is considered acceptable and is widely used. It is therefore important that organizations find a way to develop and manage these cost valuations in a manner that is well documented, has a transparent methodology, and reduces the amount of time spent on the process. However, it is not the only method that should be used for considering overhead in product and customer costing and profitability analysis. Further, selling, general and administrative expenses (SG&A) represent another layer of cost that, while not part of standard inventory cost, should be considered in overall product costs from a management perspective.

To this end, the Edgewater Ranzal PCMCS Standard Cost solution will provide an opportunity to fulfill multiple needs in costing and profitability and will do so in a manner that will be faster and more user-friendly than what has previously been experienced.

Oracle Business Intelligence Cloud Service (BICS) September Update

The latest upgrade for BICS arrived last week and, while there are no new end-user features, it is now easier to integrate data. New to this version is the ability to connect to JDBC data sources through the Data Sync tool. This allows customers to set up automated data pulls from Salesforce, Redshift, and Hive, among others. In addition to these connections, Oracle RightNow CRM customers can pull directly from RightNow reports using Oracle Data Sync. Finally, connections between on-premises databases and BICS can be secured using Secure Sockets Layer (SSL) certificates.

After developing a custom script using API calls to pull data from Salesforce, I am excited about the ability to connect directly to Salesforce with Data Sync. Direct connections to the Salesforce database allow you to search and browse for relevant tables and import the definitions with ease:

[Screenshot: browsing Salesforce tables and importing definitions in Data Sync]

Once the definitions have been imported, standard query clauses can be used to include only relevant data, perform incremental ETL loads, and further shape the data.
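For example, an incremental pull might restrict the query to rows changed since the last load. The object and field names below are standard Salesforce ones, but the exact clause and bind mechanism depend on how the Data Sync task is configured, so treat this as a sketch rather than a recipe:

  -- Hypothetical incremental extraction query for a Data Sync task
  SELECT Id,
         Name,
         Amount,
         StageName,
         LastModifiedDate
  FROM   Opportunity
  WHERE  LastModifiedDate > :LAST_REFRESH_DATE  -- bind supplied by the load process
    AND  StageName <> 'Closed Lost'             -- include only relevant data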

While there are no new features for end users, this is a powerful update when it comes to data integration. Using APIs to extract data from Salesforce meant that each extraction query had to be written by hand, which was time consuming and prone to error. With these new data extraction capabilities, data integration for BICS implementations becomes much faster, furthering the promise of Oracle Cloud technologies.

A Comparison of Oracle Business Intelligence, Data Visualization, and Visual Analyzer

We recently authored The Role of Oracle Data Visualizer in the Modern Enterprise, in which we referred to both Data Visualization (DV) and Visual Analyzer (VA) as Data Visualizer. This post addresses readers’ inquiries about the differences between DV and VA as well as how they compare to Oracle Business Intelligence (OBI). The following sections provide details of the OBI and DV/VA products as well as a matrix comparing each solution’s capabilities. Finally, some use cases for DV/VA projects versus OBI are outlined.

For the purposes of this post, OBI will be considered the parent solution for the on-premises Oracle Business Intelligence solutions (including Enterprise Edition (OBIEE), Foundation Services (BIFS), and Standard Edition (OBSE)) as well as Business Intelligence Cloud Service (BICS). OBI is the platform thousands of Oracle customers have become familiar with for robust visualizations and dashboard solutions from nearly any data source. While the on-premises solutions are currently the most mature products, BICS is expected to become Oracle’s flagship product at some point in the future, at which time all features are expected to be available.

Likewise, DV/VA will be used to refer collectively to Visual Analyzer packaged with BICS (VA BICS), Visual Analyzer packaged with OBI 12c (VA 12c), Data Visualization Desktop (DVD), and Data Visualization Cloud Service (DVCS). VA was initially introduced as part of the BICS package but has since become available as part of OBIEE 12c (the latest on-premises version). DVD was released early in 2016 as a stand-alone product that can be downloaded and installed on a local machine. Recently, DVCS was released as the cloud-based version of DVD. All of these products offer data visualization capabilities similar to OBI but significantly enhance the way users interact with their data. Compared to OBI, the interface is even more simplified and intuitive, which is an accomplishment for Oracle considering how easy OBI is to use. Reusable and business process-centric dashboards are available in DV/VA but are referred to as DV or VA Projects. Perhaps the most powerful feature is the ability for users to mash up data from different sources (including Excel) to quickly gain insight they might otherwise have spent days or weeks manually assembling in Excel or Access. These mashups can be used to create reusable DV/VA Projects that can be refreshed through new data loads in the source system and by uploading updated Excel spreadsheets into DV/VA.

While the six products mentioned can be grouped nicely into two categories, the following matrix outlines the differences between each product. The sections that follow provide commentary on some of the features.


Table 1:  Product Capability Matrix

Advanced Analytics provides integrated statistical capabilities based on the R programming language and includes the following functions:

  • Trendline – This function provides a linear or exponential plot through noisy data to indicate a general pattern or direction for time series data. For instance, while there is a noisy fluctuation of revenue over these three years, a slowly increasing general trend can be detected by the Trendline plot:

Figure 1:  Trendline Analysis

 

  • Clusters – This function attempts to classify scattered data into related groups. Users are able to determine the number of clusters and other grouping attributes. For instance, these clusters were generated using Revenue versus Billed Quantity by Month:

Figure 2:  Cluster Analysis

 

  • Outliers – This function detects exceptions in the sample data. For instance, given the previous scatter plot, four outliers can be detected:

Figure 3:  Outlier Analysis

 

  • Regression – This function is similar to the Trendline function but correlates relationships between two measures and does not require a time series. This is often used to help create or determine forecasts. Using the previous Revenue versus Billed Quantity, the following Regression series can be detected:

Figure 4:  Regression Analysis

 

Insights provide users the ability to embed commentary within DV/VA projects (except for VA 12c). Users take a “snapshot” of their data at a certain intersection and make an Insight comment.  These Insights can then be associated with each other to tell a story about the data and then shared with others or assembled into a presentation.  For those readers familiar with the Hyperion Planning capabilities, Insights are analogous to Cell Comments.  OBI 12c (as well as 11g) offers the ability to write comments back to a relational table; however, this capability is not as flexible or robust as Insights and requires intervention by the BI support team to implement.


Figure 5:  Insights Assembled into a Story

 

Direct connections to a Relational Database Management System (RDBMS) such as an enterprise data warehouse are now possible using some of the DV/VA products. (For the purpose of this post, inserting a semantic or logical layer between the database and user is not considered a direct connection.) For the cloud-based versions (VA BICS and DVCS), only connections to other cloud databases are available, while DVD allows users to connect to an on-premises or cloud database. This capability will typically be created and configured either by the IT support team or by analysts familiar with the data model of the target data source as well as SQL concepts such as creating joins between relational tables. (Direct connections using OBI are technically possible; however, they require users to manually write the SQL to extract the data for their analysis.) Once these connections are created and the correct joins are configured between tables, users can further augment their data with data mashups. VA 12c currently requires a Subject Area connected to an RDBMS to create projects.
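As a simple, hypothetical illustration of what “manually write the SQL” means in the OBI case, an analyst connecting directly to a warehouse would need to supply a query such as the following (all table and column names are invented for this example):

  -- Hypothetical direct-database query an analyst might hand-write in OBI
  SELECT d.region_name,
         c.calendar_year,
         SUM(f.revenue) AS total_revenue
  FROM   fact_sales    f
  JOIN   dim_geography d ON d.geography_key = f.geography_key
  JOIN   dim_calendar  c ON c.date_key      = f.date_key
  GROUP BY d.region_name, c.calendar_year;

With the DV/VA direct connections, by contrast, the joins are configured once in the tool and the query generation is handled automatically.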

Leveraging OLAP data sources such as Essbase is currently only available in OBI 12c (as well as 11g) and VA 12c. These data sources require that the OLAP cube be exposed as a Subject Area in the Presentation layer (in other words, no direct connection to OLAP data sources).  OBI is considered very mature and offers robust mechanisms for interacting with the cube, including the ability to use drillable hierarchical columns in Analysis.  VA 12c currently exposes a flattened list of hierarchical columns without a drillable hierarchical column.  As with direct connections, users are able to mashup their data with the cubes to create custom data models.

While the capabilities of the DV/VA product set are impressive, the solution currently lacks some key capabilities of OBI Analysis and Dashboards. A few of the most noticeable gaps between the capabilities of DV/VA and OBI Dashboards are the inability to:

  • Create the functional equivalent of Action Links, which allow users to drill down or across from an Analysis
  • Schedule and/or deliver reports
  • Customize graphs, charts, and other data visualizations to the extent offered by OBI
  • Create Alerts which can perform conditionally-based actions such as pushing information to users
  • Use drillable hierarchical columns

At this time, OBI should continue to be used as the centerpiece for enterprise-wide analytical solutions that require complex dashboards and other capabilities. DV/VA will be more suited for analysts who need to unify discrete data sources in a repeatable and presentation-friendly format using DV/VA Projects.  As mentioned, DV/VA is even easier to use than OBI which makes it ideal for users who wish to have an analytics tool that rapidly allows them to pull together ad hoc analysis.  As was discussed in The Role of Oracle Data Visualizer in the Modern Enterprise, enterprises that are reaching for new game-changing analytic capabilities should give the DV/VA product set a thorough evaluation.  Oracle releases regular upgrades to the entire DV/VA product set, and we anticipate many of the noted gaps will be closed at some point in the future.

Oracle Business Intelligence – Synchronizing Hierarchical Structures to Enable Federation

More and more Oracle customers are finding value in federating their EPM cubes with existing relational data stores such as data marts and data warehouses (for brevity, data warehouse will refer to all relational data stores). This post explains the concept of federation, explores the consequences of allowing hierarchical structures to get out of synchronization, and shares options to enable this synchronization.

In OBI, federation is the integration of distinct data sources to allow end users to perform analytical tasks without having to consider where the data is coming from. There are two types of federation to consider when using EPM and data warehouse sources:  vertical and horizontal.  Vertical federation allows users to drill down a hierarchy and switch data sources when moving from an aggregate data source to a more detailed one.  Most often, this occurs in the Time dimension whereby the EPM cube stores data for year, quarter, and month, and the relational data sources have details on daily transactions.  Horizontal federation allows users to combine different measures from the distinct data sources naturally in an OBI analysis, rather than extracting the data and building a unified report in another tool.

Federation makes it imperative that the common hierarchical structures are kept in sync. To demonstrate issues that can occur during vertical federation when the data sources are not synchronized, take the following hierarchies in an EPM application and a data warehouse:

Figure 1: Unsynchronized Hierarchies


Notice that Colorado falls under the Western region in the EPM application, but under the Southwestern region in the data warehouse. Also notice that the data warehouse contains an additional level (or granularity) in the form of cities for each region.  Assume that both data sources contain revenue data.  An OBI analysis such as this would route the query to the EPM cube and return these results:

Figure 2: EPM Analysis – Vertical Federation


However, if the user were to expand the state of Washington to see the results for each city, OBI would route the query to the data warehouse. When the results return, the user would be confronted with different revenue figures for the Southwest and West regions:

Figure 3: Data Warehouse – Vertical Federation


When the hierarchical structures are not aligned between the two data sources, irreconcilable differences can occur when switching between the sources. Many times, end users are not aware that they are switching between EPM and a data warehouse, and will simply experience a confusing reorganization in their analysis.

To demonstrate issues that occur in horizontal federation, assume the same hierarchies as in Figure 1 above, but the EPM application contains data on budget revenue while the data warehouse contains details on actual revenue. An analysis such as this could be created to query each source simultaneously and combine the budget and actual data along the common dimension:

Figure 4: Horizontal Federation


However, drilling into the West and Southwest regions will result in Colorado becoming an erroneously “shared” member:

Figure 5: Colorado as a “Shared” Member


In actuality, the mocked up analysis above would more than likely result in an error since OBI would not be able to match the hierarchical structures during query generation.

There are a number of options for synchronizing hierarchical structures across EPM applications and data warehouses. Many organizations manually maintain their hierarchical structures in spreadsheets and text files, often located on an individual’s desktop. It is possible to continue this manual maintenance; however, these dispersed files should be centralized, a governance process defined, and the EPM metadata management and data warehouse ETL processes redesigned to pick up the centralized files. This method is still subject to errors and is inherently difficult to properly govern and audit. For organizations that are already using Enterprise Performance Management Architect (EPMA), a scripting process can be implemented that extracts the hierarchical structures to flat files. A follow-on ETL process to move these hierarchies into the data warehouse will also have to be implemented.
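As a sketch of what that follow-on ETL step could look like (the staging and dimension table names here are hypothetical), the extracted hierarchy file might be loaded into a staging table and then merged into the warehouse dimension:

  -- Hypothetical merge of an extracted hierarchy into a warehouse dimension table
  MERGE INTO dim_geography d
  USING stg_geography_hierarchy s
     ON (d.member_name = s.member_name)
  WHEN MATCHED THEN
    UPDATE SET d.parent_name = s.parent_name,
               d.region_name = s.region_name
  WHEN NOT MATCHED THEN
    INSERT (d.member_name, d.parent_name, d.region_name)
    VALUES (s.member_name, s.parent_name, s.region_name);

Whatever the mechanics, the point is that the same governed structure feeds both the EPM metadata and the warehouse dimension, which is what prevents the mismatches shown above.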

The best practices solution is to use Hyperion Data Relationship Management (DRM) to manage these hierarchical structures. DRM boasts robust metadata management capabilities coupled with a system-agnostic approach to exporting this metadata.  DRM’s most valuable export method allows pushing directly to a relational database.  If a data warehouse is built in tandem with an EPM application, DRM can push directly to a dimensional table that can then be accessed by OBI.  If there is a data warehouse already in place, existing ETL processes may have to be modified or a dimensional table devoted to the dimension hierarchy created.  Ranzal has a DRM accelerator package to enable the synchronization of hierarchical structures between EPM and data warehouses that is designed to work with our existing EPM application DRM implementation accelerators.  Using these accelerators, Ranzal can perform an implementation in as little as six weeks that provides metadata management for the EPM application, establishes a process for maintaining hierarchical structure synchronization between EPM and the data warehouse, and federation of the data source.

While the federation of EPM and data warehouse sources has been the primary focus, it is worth noting that two EPM cubes or two data warehouses could also be federated in OBI. For many of the reasons discussed previously, data synchronization processes will have to be in place to enable this, and the solutions described above for maintaining metadata synchronization can likely be adapted for it.

The federation of EPM and data warehouse sources allows an enterprise to create a more tightly integrated analytical solution. This tight integration allows users to traverse the organization’s data, gain insight, and answer business-essential questions at the speed of thought. As demonstrated, mismanaging hierarchical structures can result in an analytical solution that produces unexpected results and harms user confidence. Enterprise solutions often need enterprise approaches to governance; therefore, it is imperative to understand and address shortcomings in hierarchical structure management. Ranzal has deep knowledge of EPM, DRM, and OBIEE, and of how these systems can be implemented to work tightly together to address an organization’s analytical and reporting needs.

Undocumented Data Export Feature in Oracle Hyperion PBCS (Planning and Budgeting Cloud Service)

In response to companies looking for more decentralized services with less IT overhead, Oracle has launched the Planning and Budgeting Cloud Service (PBCS). PBCS is a hosted version of the Oracle Hyperion Planning and Data Management/Integration (FDMEE) tools with a particular focus on a completely online interface.

From a functional perspective, this is an ideal situation: to have near-full capabilities of an on-premise solution without the infrastructure maintenance concerns. Practically, though, there are some holes to fill as Oracle perfects and grows the solution.

One of the main areas of concern has been the integration of data into and out of PBCS. Data Management (a version of FDMEE) is the recommended tool for loading flat file data into the system, and it is also possible to load directly to Essbase with perfectly formatted files. Getting files out of the system, on the other hand, has not been so straightforward. Without access to the Essbase server, exporting files proves impractical. Companies often need data exports from Essbase for backups, integrations with other systems, or for review, yet PBCS does not appear to have a native method of extracting Level Zero (Lv0) data on a regular basis that could easily be copied out of the system and used elsewhere.

Despite this, the DATAEXPORT command still exists in the PBCS world. How, then, could it be used to get a needed file?

It actually begins, as with a normal on-premise application, by creating a Business Rule to perform the data export. This can be done manually, but it is recommended to use the System Template to make sure everything is set up correctly.


When setting up the location to export the file to, it should be set up as:

 “/u03/lcm/[File_Name.txt]”


When this is done, a user can then navigate over to the Inbox/Outbox Explorer and see the file in there:

[Screenshots: Inbox/Outbox Explorer showing the exported file]

And that is really all there is to it! With a business rule in place, the entire process can be automated using EPMAutomate (EPMAutomate and recommendations for an automation engine/methodology will be discussed in a later post) and a batch scripting client to do a process that:

  • Deletes the old file
  • Runs the business rule to perform the data export
  • Copies the file off of PBCS to a local location
  • Pushes the file to any other needed location

The one important thing to note is that as of PBCS 11.1.2.3.606 (April 2015 patch), all files in the Inbox/Outbox Explorer — along with any files in Application Management (LCM) — that are older than two months will be automatically deleted. As such, if these files are being kept for archive purposes, they must be backed up offline in order to be preserved.

Default and User Friendly Prompting With BI Publisher

As mentioned in the previous post, Dynamic Report Grouping with Oracle BI Publisher, Edgewater Ranzal is working with a client to convert XML Publisher reports to BI Publisher reports. As part of Ranzal’s initiative, we began looking for opportunities to improve the user interface as well as create a standard methodology that report developers could use in the future. One of the initial areas we focused on was improving the prompting feature. To this end, we concentrated on:

  • Presenting prompts to the user within the BI Publisher tool
  • Displaying user-entered prompt values within the report
  • Creating a methodology of implementation for report developers.

As expected, many of the reports had time prompts (date, period, or year), but the existing reports did not have default prompt values.  Although it is not published in any Oracle documentation we have seen, Oracle offers five functions that can be inserted into the Default Value option of the parameter:

{$SYSDATE()$}
{$FIRST_DAY_OF_MONTH()$}
{$LAST_DAY_OF_MONTH()$}
{$FIRST_DAY_OF_YEAR()$}
{$LAST_DAY_OF_YEAR()$}

*Note that you also have to set the Data Type to Date for these parameters. 

Simple numeric mathematical calculations can be performed with these functions to add some flexibility.  For instance, the previous day’s date would be displayed as

{$SYSDATE() - 1$}

By using these functions in conjunction with the Date String Format in the parameter options section, a variety of date value defaults can be displayed in the prompting section of the report. The following table is a sample of the prompts, Default Value, and Date Format Strings that were deployed at the client:

[Table: sample prompts, Default Values, and Date Format Strings deployed at the client]

It is very important to understand that, regardless of the Date Format String settings, the actual value used in the date functions is the full date string, optionally adjusted by a number that represents days. For instance, if the Default Value is set to {$FIRST_DAY_OF_YEAR() + 1$} (first day of year plus one) and the Date Format String is set to MM, the user would still see 01 as the default value because the actual value generated (and then converted to the month number) is 20XX-01-02T00:00:00.000+HH:00 (Jan 2, 20XX).

Because the optional numeric value used in the function refers only to days, and no logic can be written into the Default Value function, there is a natural limitation that prohibits generating anything much beyond a period and/or year plus or minus one. For instance, if a client wants a prompt default value for two years ago, logic cannot be written to determine whether a leap year falls within the range and therefore whether to subtract 730 or 731 days from the first day of year function (or system date function, depending on your preference).

Understandably, this problem would only occur two days every four years (December 31st of both a leap year and the year following a leap year); however, extrapolating from this logic shows the difficulty of going back two or more months from any date function because of the variable number of days in a month. We observed an even more complicated version of this issue when the client wanted the default values for a period range to equal the previous quarter (i.e. during Q3, From Period defaults to 04 and To Period defaults to 06). Depending on the current period, the From Period needs to default from three to five periods ago and the To Period from one to three periods ago. Further exacerbating this problem was the year prompt that, during Q1, needs to default to the previous year.

The final piece of the puzzle when using any parameters with the date data type is realizing that the bind value passed to your data model is the full date/time string. Our client exclusively used SQL in their data models; therefore, it was only a matter of using Oracle SQL’s native TO_CHAR function to convert the date/time string to a relationally comparable value as such:

[Figure: data model SQL using TO_CHAR to convert the date/time bind value]
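While the client’s actual statement is shown above, the general pattern (with hypothetical column and parameter names) is simply to wrap the date bind in TO_CHAR so it can be compared to the character-based year and period columns in the data model:

  -- Hypothetical example of converting a date-typed BI Publisher parameter
  -- for comparison against character-based year and period columns
  SELECT gl.account, gl.fiscal_period, gl.amount
  FROM   gl_balances gl
  WHERE  gl.fiscal_year   = TO_CHAR(:PRM_FROM_DATE, 'YYYY')
    AND  gl.fiscal_period = TO_CHAR(:PRM_FROM_DATE, 'MM');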

The Ranzal team then looked to streamline and simplify the interaction between parameters, the evaluation of parameter input requirements, and the RTF templates. The client’s reports had up to twelve parameters requiring user input, and they used XSLT logic to evaluate whether or not users had supplied values. As mentioned in previous posts, XSLT is not a robust language when it comes to logical evaluations; after all, XSLT was designed to consume XML documents and output new documents (in this case, RTF-based reports). Because of these limitations, the initial RTF templates used the following logic (white space added for clarity):

[Figure: original XSLT template logic evaluating each required parameter]

Using this method, each parameter is evaluated until a null value is found, and then the remaining parameters are evaluated for null values. When the XSLT consumes the XML, each required parameter for which the user has not entered a value results in an additional warning line message. From a developer’s point of view, each additional required parameter requires additional lines of code. While the example above has only four required parameters, reports with many required parameters become quite convoluted and difficult to maintain.

Ranzal again turned to the logic processing capabilities of Oracle SQL. Within the data model, we created a new data set that generates a parameter status value (named PARAM_STAT) from the bind values passed by the BI Publisher parameters. We came up with the following SQL template to generate a more succinct warning message within the column value PARAM_STAT (note that n denotes the number of required report parameters):

[Figure: reusable SQL template for generating PARAM_STAT]

There is an argument for creating a SQL statement that concatenates all missing parameter names with a comma and then uses logic to correct the punctuation; however, we felt that from a reusability standpoint, it would be best to compartmentalize the statement using the WITH TABLE1 statement. Using the above SQL template, report developers merely have to update the following lines:

  • 4 – 7:  Data model parameter names (i.e. :PRMBU) and report parameter names (i.e. Business Unit)
  • 10:  Data model parameter names (i.e. :PRMBU)
  • 15 – 20:  Replace the PARAM_COUNT comparison values (n, n – 1, and n – 2)

Using the example above with the required parameters for year, period, business unit, and ledger, the following SQL statement was generated:

[Figure: generated SQL statement for the year, period, business unit, and ledger parameters]
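The screenshot above shows the client’s exact template. As a rough, simplified stand-in (the bind names other than :PRMBU are hypothetical, and the message is built with LISTAGG rather than the PARAM_COUNT comparisons described above), the underlying idea looks like this:

  -- Simplified, hypothetical illustration of a PARAM_STAT data set
  WITH table1 AS (
    SELECT 'Year'          AS param_name FROM dual WHERE :PRMYEAR   IS NULL UNION ALL
    SELECT 'Period'        AS param_name FROM dual WHERE :PRMPERIOD IS NULL UNION ALL
    SELECT 'Business Unit' AS param_name FROM dual WHERE :PRMBU     IS NULL UNION ALL
    SELECT 'Ledger'        AS param_name FROM dual WHERE :PRMLEDGER IS NULL
  )
  SELECT CASE
           WHEN COUNT(*) = 0 THEN NULL
           ELSE 'Please enter values for the following required parameters: '
                || LISTAGG(param_name, ', ') WITHIN GROUP (ORDER BY param_name)
         END AS param_stat
  FROM   table1;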

Using this parameter status value results in a much more succinct XSLT template that needs only to evaluate whether PARAM_STAT has a value (white space added for clarity):

[Figure: simplified XSLT template evaluating PARAM_STAT]

The client has hundreds of BI Publisher reports and plans to continue to develop additional reports as their Oracle Business Intelligence platform becomes the standard reporting tool. By using the SQL template along with the simplified RTF template, the real work becomes creating the table, pivot table, or chart within the RTF template.  Fortunately, the Ranzal team was able to create an Excel-based VBA macro that automates the generation of the majority of the client’s templates. We will discuss this tool in a later post.

These two examples demonstrate the Ranzal team’s commitment to taking a proactive stance to examining current processes and looking for opportunities for improvement.  As we worked through the technical details of this implementation, we carefully balanced the idea of a user-centered experience against the often competing need for a simplified methodology and process for report developers. To accomplish the latter, we went through several phases of technical refinement, demonstrated the process to developers, and provided thorough documentation. This ensures that when the time comes to turn the maintenance of these reports over to the client, there is a complete knowledge transfer as well.