Default and User Friendly Prompting With BI Publisher

As mentioned in the previous post, Dynamic Report Grouping with Oracle BI Publisher, Edgewater Ranzal is working with a client to convert XML Publisher reports to BI Publisher reports. As part of Ranzal’s initiative, we began looking for opportunities to improve the user interface as well as create a standard methodology that report developers could utilize in the future. One of the initial areas we focused on was improving the prompting feature. To that end, we concentrated on:

  • Presenting prompts to the user within the BI Publisher tool
  • Displaying user-entered prompt values within the report
  • Creating a methodology of implementation for report developers

As expected, many of the reports had time prompts (date, period, or year), but the existing reports did not have default prompt values.  Although it is not published in any Oracle documentation we have seen, Oracle offers five functions that can be inserted into the Default Value option of the parameter:

{$SYSDATE()$}
{$FIRST_DAY_OF_MONTH()$}
{$LAST_DAY_OF_MONTH()$}
{$FIRST_DAY_OF_YEAR()$}
{$LAST_DAY_OF_YEAR()$}

*Note that you also have to set the Data Type to Date for these parameters. 

Simple numeric mathematical calculations can be performed with these functions to add some flexibility.  For instance, the previous day’s date would be displayed as

{$SYSDATE() - 1$}

By using these functions in conjunction with the Date String Format in the parameter options section, a variety of date value defaults can be displayed in the prompting section of the report. The following table is a sample of the prompts, Default Value, and Date Format Strings that were deployed at the client:

[Table: sample prompts with their Default Values and Date Format Strings]
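
For illustration, pairings along these lines are consistent with the functions above (the format masks here are assumptions, not the client’s exact settings):

  • Current year prompt: Default Value {$SYSDATE()$} with Date Format String yyyy
  • Current period prompt: Default Value {$SYSDATE()$} with Date Format String MM
  • Prior day prompt: Default Value {$SYSDATE() - 1$} with Date Format String yyyy-MM-dd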

It is very important to understand that, regardless of the Date Format String settings, the actual value used in the date functions is the full date string plus an optional number of days added or subtracted. For instance, if the Default Value is set as {$FIRST_DAY_OF_YEAR() + 1$} (first day of year plus one) and the Date Format String is set to MM, the user would still see 01 as the default value because the actual value generated (and then converted to the month number) is 20XX-01-02T00:00:00.000+HH:00 (Jan 2, 20XX).

Because the optional numeric value used in the function refers only to days, and no logic can be written into the Default Value function, there is a natural limitation that prohibits generating anything beyond a period and/or year plus or minus one. For instance, if a client wants a prompt default value for two years ago, logic cannot be written to determine if the current year or previous year was a leap year and conclude whether to subtract 365 x 2 = 730 or 366 x 2 = 732 from the first day of year function (or system date function, depending on your preference).

Understandably, this problem would only occur two days every four years (December 31st of both a leap year and the year following a leap year); however, the same reasoning illustrates the difficulty of going back two or more months from any date function because of the variable number of days in a month. We observed an even more complicated version of this issue when the client wanted the default values for a period range to cover the previous quarter (i.e. during Q3, From Period defaults to 04 and To Period defaults to 06). Depending on the current period, the From Period needs to default from three to five periods ago and the To Period needs to default from one to three periods ago. Further exacerbating this problem was the year prompt that, during Q1, needs a default value of the previous year.

The final piece of the puzzle when using any parameters with the date data type is realizing that the bind value passed to your data model is the full date/time string. Our client exclusively used SQL in their data models; therefore, it was only a matter of using Oracle SQL’s native TO_CHAR function to convert the date/time string to a relationally comparable value, as follows:

[SQL: TO_CHAR conversion of the date/time bind values]
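
As a hedged illustration of the idea (the table, column, and parameter names below are assumptions, not the client’s schema):

-- Reduce the full date/time bind values to relationally comparable strings
SELECT  LEDGER, ACCOUNT, SUM(POSTED_TOTAL)
FROM    GL_BALANCES
WHERE   FISCAL_YEAR = TO_CHAR(:PRMYEAR, 'YYYY')    -- year portion of the date/time bind
AND     PERIOD_NUM  = TO_CHAR(:PRMPERIOD, 'MM')    -- period portion of the date/time bind
GROUP BY LEDGER, ACCOUNT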

The Ranzal team then looked to streamline and simplify the interaction between parameters, parameter input requirement evaluation, and the RTF templates. The client’s reports had up to twelve parameters that required user input, and they used XSLT logic to evaluate whether or not users had supplied values. As mentioned in previous posts, XSLT is not a robust language as it relates to logical evaluations; after all, XSLT was designed to consume XML documents and output new documents (in this case, RTF based reports). Because of these limitations, the initial RTF templates used the following logic (white space added for clarity):

[RTF template: one nested XSLT null check per required parameter]
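
In spirit, the pattern was a chain of per-parameter null checks along these lines, producing one warning line per missing value (a simplified sketch – the parameter names are assumptions, and the actual template nested the checks):

<?if:PRMYEAR=''?>Warning: a value is required for Year.<?end if?>
<?if:PRMPERIOD=''?>Warning: a value is required for Period.<?end if?>
<?if:PRMBU=''?>Warning: a value is required for Business Unit.<?end if?>
<?if:PRMLEDGER=''?>Warning: a value is required for Ledger.<?end if?>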

Using this method, each parameter is evaluated until a null value is found, and then the remaining parameters are evaluated for a null value. When the XSLT consumes the XML, each required parameter that the user has not entered a value for results in an additional warning line message. From a developer point of view, each additional required parameter requires additional lines of code. While the example above only has four required parameters, reports with many required parameters become quite convoluted and difficult to maintain.

Ranzal again turned to the logic processing capabilities of Oracle SQL. Within the data model, we created a new data set that generates a parameter status (named PARAM_STAT) by examining the bind values passed by the BI Publisher parameters. We came up with the following SQL template to generate a more succinct warning message within the column value PARAM_STAT (note that n denotes the number of required report parameters):

[SQL: PARAM_STAT data set template]
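
A deliberately simplified sketch of the idea follows (not the template’s exact layout or line numbering – the client’s actual template compared PARAM_COUNT against n, n - 1, and n - 2 to build grammatically correct messages, and all parameter names here are assumptions):

WITH TABLE1 AS (
    SELECT CASE WHEN :PRMYEAR   IS NULL THEN 1 ELSE 0 END +
           CASE WHEN :PRMPERIOD IS NULL THEN 1 ELSE 0 END +
           CASE WHEN :PRMBU     IS NULL THEN 1 ELSE 0 END +
           CASE WHEN :PRMLEDGER IS NULL THEN 1 ELSE 0 END AS PARAM_COUNT,
           CASE WHEN :PRMYEAR   IS NULL THEN 'Year, '          END ||
           CASE WHEN :PRMPERIOD IS NULL THEN 'Period, '        END ||
           CASE WHEN :PRMBU     IS NULL THEN 'Business Unit, ' END ||
           CASE WHEN :PRMLEDGER IS NULL THEN 'Ledger, '        END AS MISSING_PARAMS
    FROM   DUAL
)
SELECT CASE
           WHEN PARAM_COUNT = 0 THEN NULL   -- all required values supplied
           ELSE 'Please enter a value for the following required parameter(s): '
                || RTRIM(MISSING_PARAMS, ', ')
       END AS PARAM_STAT
FROM   TABLE1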

There is an argument for creating a SQL statement that concatenates all missing parameter names with a comma and then uses logic to correct the punctuation; however, we felt that from a reusability standpoint, it would be best to compartmentalize the statement using the WITH TABLE1 statement. Using the above SQL template, report developers merely have to update the following lines:

  • 4 – 7:  Data model parameter names (i.e. :PRMBU) and report parameter names (i.e. Business Unit)
  • 10:  Data model parameter names (i.e. :PRMBU)
  • 15 – 20:  Replace the PARAM_COUNT comparison values (n, n – 1, and n – 2)

Using the example above with the required parameters for year, period, business unit, and ledger, the following SQL statement was generated:

[SQL: generated PARAM_STAT statement for the year, period, business unit, and ledger parameters]

Using this parameter status value results in a much more succinct XSLT template that needs only to evaluate whether PARAM_STAT has a value (white space added for clarity):

[RTF template: single XSLT check on PARAM_STAT]
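
A minimal sketch of that single check, assuming the data set exposes PARAM_STAT as an element:

<?if:PARAM_STAT!=''?><?PARAM_STAT?><?end if?>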

The client has hundreds of BI Publisher reports and plans to continue to develop additional reports as their Oracle Business Intelligence platform becomes the standard reporting tool. By using the SQL template along with the simplified RTF template, the real work becomes creating the table, pivot table, or chart within the RTF template.  Fortunately, the Ranzal team was able to create an Excel-based VBA macro that automates the generation of the majority of the client’s templates. We will discuss this tool in a later post.

These two examples demonstrate the Ranzal team’s commitment to taking a proactive stance to examining current processes and looking for opportunities for improvement.  As we worked through the technical details of this implementation, we carefully balanced the idea of a user-centered experience against the often competing need for a simplified methodology and process for report developers. To accomplish the latter, we went through several phases of technical refinement, demonstrated the process to developers, and provided thorough documentation. This ensures that when the time comes to turn the maintenance of these reports over to the client, there is a complete knowledge transfer as well.

Ushering in the New Era of Hyperion Strategic Finance

Welcome to the first installment of our new Hyperion Strategic Finance (HSF) blog series. Edgewater Ranzal’s HSF team has been working closely with Ranzal’s other Hyperion practices (HFM, FDM, Planning/Essbase, etc.) to home in on how HSF can be utilized to its full potential alongside the other product offerings. As part of that process, we felt it was important to start a dialogue (blog) to share some of our insights on various topics ranging from new product release info to cutting edge integration best practices. We hope this series will be a good resource for you and your organization on your HSF journey.

Given the major changes to HSF in the 11.1.2.2 release, and the exciting product roadmap ahead of us, I thought it would be good to start the series by discussing “The New Era of Hyperion Strategic Finance.”

So…What’s new?

With the 11.1.2.2 release comes probably the most significant change to the product since it was acquired by Hyperion. At the core of the change is a shift in the user interface from a traditional thick client to an Excel-based Smart View add-in. The enhancement enables the end user to perform the majority of HSF modeling activities directly in an Excel workbook. With this change, the legacy reporting in the HSF client has been REPLACED with Excel reporting via Smart View. This means that those who choose to implement 11.1.2.2 will be required to use the Smart View based reporting and/or export the data to an external database (i.e. Essbase) to utilize other reporting tools (i.e. Financial Reports, OBIEE, etc.). For any current HSF users looking to upgrade, it is important to note that the process of upgrading to 11.1.2.2 automatically converts existing HSF reports into the Smart View format; however, existing charts/graphs will need to be rebuilt, and some formatting issues have been identified which may require some re-work. While this does introduce a big change for end users, it also presents a great opportunity by opening up native Excel functionality such as Excel graphing, conditional formatting, highlight sums, and the group/ungroup data feature. To view the full list of new features, you can look through the 11.1.2.2 readme, available here.


Why make the change?

Some existing clients have asked me why this change was made in the first place.  The answer is really two-fold:

1. Tighter Integration with Hyperion Planning: If there has been one consistent theme throughout all of my meetings/conversations with Oracle’s HSF/Planning development team, it has been a desire to continuously improve the integration between Hyperion Planning and HSF. The integration I’m referring to doesn’t stop at data, but also includes seamlessly integrating the end user experience. Everything from selling the tools as a combined solution (i.e. bundled pricing) to having both HSF and Planning users interact with the tool in a similar manner (i.e. Smart View) has been or will be addressed. Below, I will outline some additional planned roadmap items that continue this theme.

2. Reporting: All legacy HSF users that have spent time creating, formatting, and modifying HSF reports are well aware that reporting has NOT been an area of strength previous to 11.1.2.2.  Both Oracle and our implementation team consistently receive requests to have HSF reporting operate “more like Excel.”  This move is a direct response to those requests, and it is definitely a big step in the right direction.

Current Challenges:

As with any major change or new release, there are going to be some growing pains, and the 11.1.2.2 release of HSF is no different. We have been working closely with Oracle over the past couple of months to identify any issues, make recommendations, and test fixes that have been applied. The main point I want to make clear is that, from a functional perspective, the tool has the EXACT same capabilities. You are simply changing the way you do things, not what can be done. There is a bit of a learning curve to understand the new menu bar, shortcut keys, etc., but in general it still functions much like its predecessor, with the added benefit of the Excel look and feel.

A couple of things to look out for if you will be implementing AND going live prior to the 11.1.2.3 release:

1. Issues with large numbers of active reports: We have experienced some performance issues when working with a file that has numerous reports (standard or freestyle) open at once. This includes reduced speed of the check-out/check-in process, flickering upon calc/refresh, and occasional freezing of the application. Currently, the product seems to work more seamlessly when working with just the accounts tab or 1-2 reports; however, fixing this is at the top of Oracle’s priority list and we expect it to be addressed sooner rather than later.

2. Renaming of Time Periods: You want to avoid renaming the default time periods in the entities (i.e. changing 2013 to FY13). In the current release this can cause some calculation issues in the system regarding the funding routine. Again, this has been identified by Oracle and is expected to be resolved in the next patch set or release.

3. Smart View Parity: There are some features that are not currently in the Smart View interface, which may require users to revert to the old HSF client. Some of these features include the debt/depreciation schedulers, ECM/ACM, etc. This means that the end user may have to jump between interfaces in certain instances. The good news is that these features are expected to be added to the Smart View functionality in the next release (11.1.2.3) and can still be utilized in the traditional client if need be. To see a full list of these items, look to the 11.1.2.3 roadmap column below.

4. With the change in technology come some changes to the infrastructure components, so it is important to discuss these requirements with your technical team to diagnose hardware and software needs before moving forward. It is also important to note that the upcoming 11.1.2.3 release will NOT be backward compatible with 11.1.2.2, so if you have a multiproduct implementation with integration components, you would need to upgrade the entire suite.

Product Direction / Road Map

[Image: HSF product roadmap]

The above picture is a snapshot of the HSF roadmap given to me by the Oracle product team. You can see the current release (as of this blog post) on the far left, calendar year 2013’s scheduled release (11.1.2.3) in the middle, and the subsequent releases on the far right. The focus this year is to truly stabilize the Smart View integration and incorporate all of the standard features of the thick client into the Smart View interface. The exciting part is how the changes in 11.1.2.2 have set up the product for even more advancement in the future. Post 11.1.2.3, a lot of the heavy lifting for HSF’s major changes will be complete. This will allow the team to focus on true feature enhancements, like adding a monthly depreciation scheduler or addressing the concept of parent level scenario modeling (possibly an idea for a future blog post!). In addition, if you look at the future direction, in the Enterprise Readiness section, you will see items such as LRP Integration to Planning, Automated Data Loads, DRM support, etc. This represents a future state in which a user can manage both data and metadata in a consistent manner across HSF, Planning, and Essbase. Imagine a world where you can have one Excel worksheet open with your HSF model and another with a LIVE connection to an HSF reporting cube (via Essbase) for ad hoc purposes! Not only that, but a world where the Essbase cube is automatically updated as you make changes to the HSF model, meaning no manual metadata management between applications or tedious mappings that need to be maintained. This type of enhancement truly empowers the user to focus on the modeling aspects of HSF while allowing Essbase to shine for management reporting – truly using the right tool for the right job.

Our Recommendation

Overall, we are very excited about the direction Oracle is heading with HSF. The Smart View capability in 11.1.2.2 is really just the first step in what we see as a continued effort to make the product better for its users. With that in mind, we have begun multiple 11.1.2.2 implementations; however, it is understood in all cases that there will be some issues to work through, and there are no pending deadlines where these issues could put project success at risk. In fact, the go-live dates are not expected to occur until a time when we believe 11.1.2.3 and/or an 11.1.2.2 patch set will be available. Given that, we do recommend that all new HSF clients seriously consider starting with v11.1.2.2 while taking note of the challenges mentioned above. Initially, this approach eliminates the need for your end users to learn two different user interfaces. Additionally, if you are willing to give feedback, as an 11.1.2.2 client you will definitely have the ear of the Oracle team as it pertains to resolving any existing product issues, as well as requesting new enhancements for the future. Oracle is very eager to make this release a success, and they truly value any input early adopters can provide. So if you have the patience to work through some bumps in the road, and the time to resolve the issues you may encounter, I would definitely encourage giving the 11.1.2.2 release serious consideration.

I hope you found this information helpful. We look forward to coming out with many more posts in the future. In that vein, if you have any ideas or requests for blog topics, please feel free to leave them in the comments section or reach out to me directly at rmeester@ranzal.com and we will make sure to address them in future posts.

About the Author

Ryan Meester is a Practice Director for the Strategic Planning Practice at Edgewater Ranzal.  His first encounter with HSF dates back to 2004 as a Consultant with Hyperion in the HSF practice.  After three years in that capacity, leading projects and assisting with business development efforts, Ryan co-founded Meridian Consulting International with two of his Hyperion colleagues, Andrew Starks and Ricardo Rasche.  At Meridian, Ryan, Andrew, and Ricardo focused exclusively on HSF implementation services until Meridian was acquired by Edgewater Ranzal in May of 2010.  This was a strategic acquisition for both Meridian and Ranzal. Both organizations were seeing more and more multiproduct implementations which required a broader EPM focus. The acquisition effectively rounded out Ranzal’s EPM service offering by adding HSF expertise to their repertoire.

Ranzal UK are presenting at UKOUG 2012

I’m pleased to announce that Ranzal UK consultants are presenting papers on both days of the UKOUG Hyperion conference this year. The conference takes place on October 23rd & 24th at the Park Plaza hotel in central London.

http://hyperion.ukoug.org/

On the Tuesday my colleague Mark Drayton will be talking about a recent implementation of Hyperion Planning and HPCM in the session ‘HPCM and Hyperion Planning: A Cost Allocation Process’. This will be especially interesting to those of you who want to see how Mark made use of batch updates to HPCM Stage 1 Drivers and Assignments.

On Wednesday, our Senior Consultant Alecs Mlynarzek will be delivering a presentation titled ‘Planning Integration with Task Lists’, which describes techniques we have used to enable users to launch integration tasks from the Task List UI within Hyperion Planning, a feature not available out of the box.

Also on the Wednesday, Ranzal’s UK Infrastructure Lead Dave Hogg and I will be presenting various innovative techniques that we have implemented for clients in order to meet technical challenges – this session is called ‘Guide to Using EAS to Maintain Batch Environment Variables’.

We really hope that you can come and attend one of our sessions – please come and say hello and we can tell you more about how we are growing, and the exciting projects that we are currently delivering.

Techniques for Creating, Loading, and Optimizing a Simple Essbase ASO Application

A couple of recent projects have required us to build an Essbase database to provide a subset of upstream system data for downstream consumer systems such as Hyperion Profitability and Cost Management (HPCM). The process included dimension updates, data loads, and custom calculations. Essbase Aggregate Storage Option (ASO) was the chosen Essbase technology because we were potentially dealing with large data volumes, relatively simple hierarchy structures, and only a small number of custom calculations that could be easily modeled in MDX with minimal performance impact.

The principle was that an overnight batch would be used to completely rebuild the ASO cube each night, including any metadata restructures that were necessary, followed by a full reload of data.

The high-level process is as follows:

The starting point was to use a ‘stub’ application as a template for the metadata rebuild. This is an ASO Essbase application with all dimension headers present, all POV dimensions present (Years, Periods, Scenarios, etc.), and all volatile hierarchies represented by the hierarchy headers only. This ASO application serves as a “poor man’s MDM” which allows us to have application, dimension, and hierarchy properties all pre-set. The main advantage of the stub outline is that it creates a natural defragmentation of the target ASO application, which improves query performance and reduces dimension build times to a minimum. This is analogous to a relational database where you want to ‘truncate’ tables and/or compress, as opposed to deleting and re-inserting all the time, which leads to gradual growth. A good tip is to build dimensions in order from smallest to largest in terms of volumes.

A sample ‘Stub.otl’ outline looks something like the following. In this case, the stub outline is modeled after the new embedded Fusion G/L Essbase cube:

[Outline: Stub.otl, with volatile dimensions reduced to hierarchy headers]

As can be seen, the volatile dimensions (Budget Centre, Balancing Entity, Accounts, etc.) are each populated with a single hierarchy header (e.g. BE_dummy), whereas the static dimensions (AccountingPeriod, Balance Amount, etc.) are complete and will not be the subject of a dimension load in the MaxL. Static dimensions which contain members with MDX member formulas will persist (although the formulas will not necessarily validate at this stage, as they may depend on members that have not yet been rebuilt).

The first part of the batch process is to use this Stub outline to replace the outline in the ‘user’ ASO cube (i.e. the cube that will be restructured and loaded with data). The MaxL will clear data and replace the .otl file in the user application with the .otl file from the ‘stub’ application.

A simplified version of the MaxL is as follows (normally passwords would be encrypted):
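
A minimal sketch of the pattern, with application, database, path names, and credentials assumed:

login 'admin' 'password' on 'localhost';

/* Clear all data from the user cube */
alter database 'UserApp'.'UserDb' reset;

/* Stop the application so its outline file can be swapped */
alter system unload application 'UserApp';

/* OS-level copy of the stub outline over the user outline,
   via the MaxL Shell's shell command */
shell 'cp $ARBORPATH/app/Stub/Stub/Stub.otl $ARBORPATH/app/UserApp/UserDb/UserDb.otl';

/* Restart the application; the new outline is picked up on reload */
alter system load application 'UserApp';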

This simply copies the Stub.otl file into the ‘user’ ASO cube database folder and names it with the target database name – it will be available as soon as the application is reloaded.

The next section in the MaxL would be a standard dimension build of those volatile dimensions – the primary consideration when building the hierarchies is that the ASO restrictions on hierarchies are met; otherwise, the outline will not verify. This is not covered here – we assume that incoming master data is pre-validated to meet these requirements, but a summary of the dimension rules for ASO follows:

  • ASO dimensions can contain hierarchies of two types – ‘Stored’ or ‘Dynamic’
  • A dimension must be tagged as ‘Multiple Hierarchies Enabled’ or ‘Dynamic’ if it contains two or more hierarchies
  • The first hierarchy in a dimension where ‘Multiple Hierarchies Enabled’ is specified must be defined as a ‘Stored’ hierarchy
  • Stored hierarchies are generally only additive, as they only allow the + or ~ consolidation operators
  • Dynamic hierarchies can contain any consolidation operators, and their members can contain formulas
  • For alternate hierarchies, where shared members may be required, a Stored hierarchy can only contain one instance of a member (to avoid double counting), but subsequent Stored hierarchies can contain members previously defined in earlier Stored hierarchies

Once metadata has been loaded, the data load can be carried out.
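
A hedged sketch of both steps in MaxL (the data file and rules file names are assumptions; in practice there would be one dimension build statement per volatile dimension, run smallest to largest):

/* Dimension build for one volatile dimension */
import database 'UserApp'.'UserDb' dimensions
    from server text data_file 'budcntr.txt'
    using server rules_file 'BudCntr'
    on error write to 'dimbuild.err';

/* Full data load */
import database 'UserApp'.'UserDb' data
    from server text data_file 'factdata.txt'
    using server rules_file 'DataLd'
    on error write to 'dataload.err';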

Once this is complete, we have a fully loaded ASO cube, which we can retrieve data against using either SmartView or an Essbase report script (for example, when we are supplying filtered data to our downstream systems).

The example Smart View retrieve template below is a straightforward report with periods as columns and 550 rows of level 0 Budget Centres, with all other dimensions set as filters.

The Essbase application log shows that the above SmartView query took over 16s to execute.  This report layout may or may not be representative of real world queries / reports but the object of the exercise here is to speed this up for in-day usage.

ASO databases do not use calculation scripts to consolidate the data so the traditional BSO approach to consolidation cannot be used.  Instead, ASO will attempt to dynamically calculate upper level intersections, which, while resulting in much faster batch processing times, may result in longer than necessary retrieval times.

What we can do to improve this situation is use the ‘Query Tracking’ facility in ASO to capture the nature of queries run against the ASO cube, and build retrieval statistics against it. These statistics can then be used to build aggregation views tailored to retrieval patterns in the business.

This relies on us having some predefined definitions of the kinds of queries that are likely to be run – SmartView report templates, Web Analysis pages & Financial Reports definitions will all be suitable.

In this example, we use the above SmartView template as a basis for creating an Essbase Report script as follows:
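
A minimal sketch of such a report script (dimension names are taken from the stub outline above and may differ; dimensions not specified default to their top members, mirroring the filters):

<COLUMN ("AccountingPeriod")
<ICHILD "AccountingPeriod"
<ROW ("Budget Centre")
<DIMBOTTOM "Budget Centre"
!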

This report mimics the SmartView template, and we use it during the overnight batch to capture the query characteristics using Query Tracking. One of the reasons to use report scripts is that the query designer (or the Spreadsheet Retrieval Wizard, if you are using a REALLY old version of the Excel Add-In) can save its output as a report script. MDX queries will have a similar effect.

The sequence of MaxL steps is as follows:

  • Switch on Query Tracking
  • Run one or more Essbase Report Script(s)
  • Run ‘execute aggregate process’ command to create aggregate views

The MaxL to accomplish this is as follows:
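
A hedged sketch of the sequence (the names and the total_size threshold are assumptions):

/* 1. Switch on Query Tracking */
alter database 'UserApp'.'UserDb' enable query_tracking;

/* 2. Run the representative report(s) so the query patterns are recorded */
export database 'UserApp'.'UserDb' using server report_file 'QryRpt'
    to data_file 'qryrpt.txt';

/* 3. Build aggregate views based on the tracked query data */
execute aggregate process on database 'UserApp'.'UserDb'
    stopping when total_size exceeds 1.5
    based on query_data;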

The ‘execute aggregate process’ command is issued with the ‘based on query_data’ option to tell Essbase to use query patterns picked up by Query Tracking to build the aggregation views. Essbase will build as many views as necessary until the ‘total_size’ limit is reached. This limit may need tweaking so as to give the desired improvement in performance whilst also conserving disk space (which may get swallowed up with larger ASO cubes). This particular example also runs in a matter of seconds, but the addition of more sample reports needs to be managed to ensure that the batch run time does not exceed its window. It should be noted that aggregate views can be built without query tracking, but there are restrictions on which alternate hierarchies get processed, and query tracking is a very good technique when you are trying to improve performance on “alternate rollups”.

When this has been executed, users should see an improvement on query performance.

Our SmartView query was rerun, and the log file demonstrates the reduction in query time to less than 1 second:

This approach lends itself to situations where the ASO outline is likely to change frequently. Changes in metadata mean that aggregation views created and saved in EAS cannot necessarily be reused – new level 0 members will not necessarily invalidate the aggregate views, but new upper level members, or restructured hierarchies, definitely will. The rationale is that the ASO aggregation engine constructs multiple “jump” points based on the hierarchy levels – to oversimplify in BSO terms, imagine level zero stored, level one as dynamic calc, level two stored, level three as dynamic calc, and level four stored. In any instance, there would never be more than one level of dynamic calc. I don’t know if this is still the case, but this may be why ASO cubes seem to handle symmetrical hierarchies a bit more easily than ragged ones – it makes the derivation of what should be pre-calculated vs. dynamic easier.

Come See Edgewater Ranzal at Kscope11

ODTUG Kscope11 is right around the corner. Kscope11 offers the chance for a full-day EPM Symposium on Sunday, plus the opportunity to learn from experts in the EPM and BI fields on a wide range of topics.

Edgewater Ranzal will be well represented at the conference, with our associates delivering ten presentations covering Planning, DRM, EPMA, HFM, and FDM. The sessions that we will be presenting at Kscope11 are summarized below. Each title links to an abstract for the presentation, providing additional details.

Session No. | Date | Time | Room | Presenter | Title
1 | 6/27/11 | 11:15 – 12:15 | 102C | Jeff Richardson | Calculation Manager: The New and Improved Application to Create Planning Business Rules
7 | 6/28/11 | 11:15 – 12:15 | 103C | Tony Scalese | Planning (or Essbase) and FDM and ERPi Equals Success!
10 | 6/28/11 | 4:30 – 5:30 | 101B | Chris Barbieri | Security and Auditing in HFM
11 | 6/29/11 | 8:30 – 9:30 | 103A | Patrick Lehner | Best Practices for Using DRM with EPMA
11 | 6/29/11 | 8:30 – 9:30 | 101B | Chris Barbieri | Getting Started with Calc Manager for HFM
12 | 6/29/11 | 9:45 – 10:45 | 101B | Chris Barbieri | Advanced Topics in Calc Manager for HFM
12 | 6/29/11 | 9:45 – 10:45 | 102C | John Martin | Have it Your Way: Building Planning Hierarchies with EPMA or Outline Load Utility
13 | 6/29/11 | 11:15 – 12:15 | 101B | Tony Scalese | Maximizing the Value of an EPM Investment with ERPi, FDM, & EPMA
17 | 6/30/11 | 8:30 – 9:30 | 101B | Tony Scalese | Taking Your FDM Application to the Next Level with Advanced Scripting
18 | 6/30/11 | 10:30 – 11:30 | 101B | Peter Fugere | IFRS Reporting Within Hyperion Financial Management

In addition to the presentations above, you can catch up with our experts at our booth in the Vendor Showcase.

We look forward to seeing you in Long Beach. If you haven’t already registered, you can do so here.

Oracle Business Intelligence Essbase Cube Builder: Continued Integrations with Essbase and OBIEE

Oracle has been busy focusing on improved consolidation of their BI applications this past year. This next offering within Oracle’s arsenal of BI apps will allow for enhanced integrations between the multi-dimensional and relational worlds of Essbase and OBIEE. The new application is called Oracle Business Intelligence Essbase Cube Builder (OECB), previously referenced as Oracle Essbase Integrator (OEI) during beta testing, and its purpose is to help refine these integrations and make them more consistent. OECB will permit OBI apps to push metadata and data into Essbase cubes, a significant advancement for OBIEE users that makes in-depth ad hoc and what-if analysis available. You can download OECB on the Oracle e-delivery site:

OECB Download

Oracle Business Intelligence Essbase Cube Builder

OECB will provide an integrated BI platform that supports OLAP capabilities and dashboards, among other things. OECB will also allow Essbase to leverage metadata and data from OBI and OBI apps. Other supported features include what-if scenarios, allocations, spreading, changing attributes, and multiple hierarchy comparisons.

From a user’s perspective, OECB will provide seamless authentication from OBI to Essbase, which reduces maintenance and improves the user experience. Users can also feel confident that metadata and data will have the same context across platforms, which means users will see the same dimensions, members, and measures in both their OBI and Essbase apps. The environments will be synchronized: updated changes will be retained and applied, with data values matching between the two apps. OBI dashboard metadata and data contents will be synchronized to Smart View, and the OBI physical/business model and mapping/presentation layers will be synchronized to Essbase. The additional features noted above afford users the ability to do more robust development. All of this becomes available starting with version 11.1.2; support for prior versions will also become available after the official release of OECB.

The implication here is that if you’re a traditional OBIEE shop wanting to add flexibility and improve analysis with a powerful application like Essbase, OECB will be your opportunity. As Oracle continues to provide expanded integrations between Essbase and OBIEE, see how this new offering can be leveraged within your organization.

ORACLE HYPERION CALC MANAGER – Part 3 – Working with Templates

In Part 1 of this series, we introduced Calc Manager, providing a general overview and explanation of some new terms.  In the second post in the series, we walked through the development of a Planning rule that utilized a run time prompt.  In this post, we’ll explore templates provided within Calc Manager.

As with the Rule Designer, which is a great tool to help less experienced developers build rules, templates provide a simple way to develop rules for basic tasks in Planning and Essbase…tasks such as copying, clearing, exporting, allocating, and aggregating data.  In addition, you can design your own templates.

We’ll begin by logging on to Hyperion Workspace and navigating to Calc Manager.  Once in Workspace, the navigation path is:  Navigate -> Administer -> Calculation Manager.

Once in Calc Manager, you’ll land on the System View tab, which appears as follows:

Once again, I’ll use my EPMA enabled version of my Planning app based on Sample.Basic.

To access predefined templates, right click on “Rules” and select “New”. Once you give the rule a name, the graphical designer is launched. In the “Existing Objects” window, you should find a list of the pre-existing templates. The system templates follow:

CLEAR DATA

In order to use the system template to Clear Data, drag and drop “Clear Data” from the System Templates to the Rule Designer.  This will then invoke a member selection window asking you to specify the data to clear.  Keep in mind that this template generates a calc script utilizing the CLEARBLOCK command as opposed to a CLEARDATA command.

In my sample app, I select “FY11” for the Years dimension and “Final” for the Version dimension.  The dropdown box for “Clearblock Option” can be used to define the blocks to be cleared…”All” is the default.  The code that is generated appears below.

FIX ("FY11","Final")
  CLEARBLOCK ALL;
ENDFIX

COPY DATA

The Copy Data template helps to walk the calc developer through the process of copying data from one slice of the database to another.

In the remainder of the wizard, you select the “Copy From” member and the “Copy To” member.  The calc script generated follows:

FIX (@RELATIVE("Measures" , 0),@RELATIVE("Periods" ,0),@RELATIVE("Product" , 0),@RELATIVE("Market" , 0),@RELATIVE("Years" , 0),"Budget")
DATACOPY "Working" TO "Final";
ENDFIX

AMOUNT-UNIT-RATE

The Amount-Unit-Rate template allows the developer to build a calc script to solve for either an amount, unit, or rate, basically whichever is missing.  I’ve added a couple of measures to my application to facilitate the demo.  Using the member selection wizard, I’ve selected “Sales” as my amount, “Cases” as my unit, and “Revenue per Case”  as my rate.  The script generated by the template follows:

"Sales"(
  IF ("Sales" == #missing and "Cases" != #missing and "Revenue per Case" != #missing)
    "Sales" = "Cases" * "Revenue per Case";
  ELSEIF ("Sales" != #missing and "Cases" == #missing and "Revenue per Case" != #missing)
    "Cases" = "Sales" / "Revenue per Case";
  ELSEIF ("Sales" != #missing and "Cases" != #missing and "Revenue per Case" == #missing)
    "Revenue per Case" = "Sales" / "Cases";
  ELSE
    "Sales" = "Cases" * "Revenue per Case";
  ENDIF
)

ALLOCATIONS

Two types of allocation templates are provided within Calc Manager.  The first template, Allocate Level to Level,  allows you to allocate from one level to another.   In my example with my Planning app, you would use this template to allocate marketing expenses  from product family to product using a driver like revenue.  This approach utilizes @ANCESTVAL to build the script.
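
A hedged sketch of the kind of script this template produces, using Sales as the driver (member names follow the Sample.Basic-style app used here; the generation number is an assumption):

/* Allocate Marketing from product family (generation 2) down to products,
   in proportion to each product's share of its family's Sales */
FIX ("Budget", "Working", @LEVMBRS("Product", 0))
    "Marketing" = @ANCESTVAL("Product", 2, "Marketing") *
                  ("Sales" / @ANCESTVAL("Product", 2, "Sales"));
ENDFIX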

The second template, Allocate Simple, allocates values based on a predefined relationship, such as Marketing->Market * Cases/Cases->Market.

Both templates walk the developer through the setup of the allocations, selecting members that are fixed throughout the process, offset members (if any), etc.

AGGREGATION

The aggregation template helps the developer create a script to aggregate the application. The first screen of the wizard allows you to select members for the FIX statement in the aggregation – here you would limit the calc to a particular version, scenario, or your non-aggregating sparse dimension members.

The next screen prompts for dense dimensions to aggregate.  However, if dynamic calcs are properly utilized, this should not be necessary.

The third screen asks for sparse dimensions for the aggregation.  You should exclude any non aggregating sparse dimensions from this selection.

Next, you’re prompted for partial aggregations of dense dimensions.  Again – if dynamic calcs are used properly, this should not be an issue.

In the final screen of the wizard, the developer selects settings for the script…

The code generated by Calc Manager follows:

SET AGGMISSG ON;
SET FRMLBOTTOMUP ON;
SET CACHE HIGH;
FIX (@RELATIVE("Years" , 0),"Working","Budget")
CALC DIM ("Product");
CALC DIM ("Market");
ENDFIX

Please note that this code is not optimized.  In this example, I would use the following:

AGG ("Product","Market");

The code as generated by Calc Manager will result in an extra pass through the database – the calc can be accomplished with a single pass.  Additionally, AGG can be used in place of CALC DIM if there are no formulas on the dimensions being calculated.  Generally speaking, stored formulas on sparse dimensions should be avoided due to performance issues.

SET Commands

The next template walks the user through setting various SET commands for the calc.  This is a fairly straightforward exercise.

EXPORT DATA

This is another straightforward template that helps create a data export calc script. You need to define the fixed members for the export, the delimiter, the #MISSING value, the export type (flat file, relational), etc.
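
A sketch of the sort of script this template produces (the members, path, and options here are illustrative):

SET DATAEXPORTOPTIONS
{
    DataExportLevel "LEVEL0";
    DataExportColFormat ON;
    DataExportOverwriteFile ON;
};
FIX ("Budget", "Working", "FY11")
    DATAEXPORT "File" "," "/exports/budget_working.txt" "#MI";
ENDFIX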

In the final part of this series, due for posting on August 13, we’ll walk through the creation of a ruleset.  If you have any questions before the next post, please leave a comment!

ORACLE HYPERION CALC MANAGER – Part 2 – Creating a Planning Rule

In Part 1 of this series we introduced Calc Manager, providing a general overview and explanation of some new terms.  In this post, we will walk through the development of a rule for Hyperion Planning using the graphical interface within Calc Manager.

Again, in order to access Calc Manager, log on to Hyperion Workspace.  Once in Workspace, the navigation path is:

Navigate->Administer->Calculation Manager.

Once in Calc Manager, you’ll land on the System View tab, which appears as follows:


For purposes of this demonstration, I have created an EPMA enabled Planning application from the Sample.Basic application that we all know and love.  When the Planning node is expanded, this is what I see:

First, to help illustrate functionality available in Calc Manager, I’m going to create a script component that contains my standard SET commands for the rule.  In order to create the script component, right click on “Scripts” and click on “New”.  Give your script a name and click on “OK”.  This will launch the Component Designer.

From here, you have two options.  If you know what your SET commands need to be, you’re free to type them in directly.  If you wish to be prompted through the process, click on the button at the top left corner of the Component Designer window.  This will launch a window with all of the calc functions and SET commands.  The following shot displays the function selection interface for SET commands.

For my purposes, I’m going to directly type my SET commands into the Component Designer.  Once complete, save and validate.

Think of script components as an easy way to reuse code…SET commands, standard cube aggregations and the like. 

Once we have saved the script component with our SET commands, it’s time to develop our rule.  To begin, right click on “Rules” under the database node and select “New”.    Give your rule a name and click on “OK”.  This will launch you into the Rule Designer window.

In this example, I’ll create a rule that aggregates the cube, using a run time prompt for the Version dimension.

We can now begin to develop our rule.  First, we’ll select the script component for our SET commands that we developed earlier.  Simply drag this into the rule designer to the right of “Start”.  The Rule Designer window now looks like this:

Let’s take this opportunity to create our variable for the Version dimension run time prompt.  Go to the “Tools” menu and select “Variables”.  Once the Variable Navigator launches, expand the Planning, application, and database nodes.  I’m going to create a run time prompt variable for the Version dimension.  Right click on your rule name and select “New”.  Once I populate the fields on the “Replacement” tab, my screen looks like this:

When complete, save the variable.  Now, back to our rule…

We’re going to specify members for our “Fix” statement.  To do this, select “Member Range” in the New Objects portion of the Rule Palette.  

To add to the rule, drag and drop to the right of the SET command script.  My screen looks like this:

Next, we’ll populate the members for our Fix statement.  I’ll start with Measures.  For my rule, I want to select all of the level 0 measures.  Once I click on the Value field for the Measures dimension, an Actions box appears. 

I want to select a Function.  This invokes the function selection window that we observed earlier.  I want to select @LEVMBRS from the list, which will then prompt for the dimension and level number.

I select Measures from the drop down box and enter “0” for the level number.  I’m going to repeat this process for all of my dense and non aggregating sparse dimensions, with the exception of the Version dimension, which will be handled via the run time prompt.  For the Version dimension, select “Variable” in the Actions box.  Change the Category selection to “Rule” and this is what we see.

Highlight the variable and click OK.  My member range box looks like this:

Now, we’ll develop the script component to aggregate the Product and Market dimensions.  I’m going to drag a script from the “New Objects” portion of the Rule Palette into my member range.  The graphical display looks like:

Again, I’m going to select a function (AGG in this case).  I then select Product and Market from the dimension selector.
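
For reference, the assembled rule amounts to something like the following in script form (a rough sketch – the exact generated code will differ, and {Version} stands in for the run time prompt variable):

/* SET commands pulled in from the script component */
SET UPDATECALC OFF;
FIX (@LEVMBRS("Measures", 0), @LEVMBRS("Periods", 0), @LEVMBRS("Years", 0), {Version})
    AGG ("Product", "Market");
ENDFIX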

Now, save and validate.  To deploy the rule to Planning, select Quick Deploy:

Once deployed, the rule can be run from Planning.

In this post, we’ve provided a walk through on developing a new rule using the graphical designer.  More experienced developers can directly code the calc in script mode.  To convert to script mode, select “Edit” and “Script” from the menu. 

In the next post, due by July 31, we’ll explore templates and ruleset creation.  In the meantime, please leave a comment if you have any questions!

ORACLE HYPERION CALC MANAGER – Part 1

With the continued investment in the Hyperion tool set by Oracle, there was a desire to centralize the development of calculations for HFM, Essbase, and Planning.  As a result of this, Oracle Hyperion Calculation Manager was born.  Calc Manager is a powerful tool for developing and administering rules for Planning and Essbase.   An intuitive graphical interface is available to help in the development process, helping to expedite movement through the learning curve for people just beginning to dip their toes into the world of Oracle Hyperion Planning and Oracle Essbase.

Over the course of several posts this summer, I’ll explore Calc Manager functionality from the Essbase and Planning points of view.  For EPMA-enabled Planning applications, use of Calc Manager is required.   With version 11.1.1.3, Calc Manager can be used with Classic Planning apps as well.  However, the focus of my blog posts will be EPMA-enabled apps, as Classic Planning rides off into the sunset.

Calc Manager, a component of EPM Architect, is integrated into EPM Workspace, the standard entry point for many Hyperion applications.  In order to access Calc Manager, log into Workspace and select Navigate->Administer->Calculation Manager.  However, before we get too far into actually navigating the tool, we’ll need to get comfortable with the terminology within Calc Manager.

There are three types of objects within Calc Manager:  components, rules, and rulesets.   Components are smaller pieces of a larger rule.  Things like SET commands, FIX statements, formulas, etc. are examples of components.  I’ll explore this in much greater detail in a future post, but think of the standard SET commands that you use in all of your scripts – these can be saved separately as a script component and pulled into a new rule very easily.  Included below is a sample of some standard SET commands of the sort you would save in the Component Designer.
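
For illustration (these are typical commands, not the exact ones from the original screen shot):

/* Standard SET commands saved as a reusable script component */
SET UPDATECALC OFF;   /* turn off intelligent calc        */
SET AGGMISSG ON;      /* aggregate #MISSING values        */
SET CALCPARALLEL 4;   /* use parallel calculation         */
SET CACHE HIGH;       /* use the high calculator cache    */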

Essentially, rules are the finished calc script, similar to Business Rules in the past.  Rules are used for modeling/allocations/aggregations and the like.  Rules can be built using system templates.  Oracle has provided standardized templates for tasks such as clearing, copying, allocating, aggregating, and exporting data.   Again, these templates will be explored in additional detail in a future post.

Rulesets are similar to Business Rule Sequences under Hyperion Business Rules.  Rulesets can be used to launch rules sequentially or simultaneously depending on your logic requirements.

Now that we’ve covered the basic terminology related to Calc Manager, in my next post, which should be online by July 4, we’ll walk you through creating a rule for an EPMA enabled Planning app.  In the meantime, if you have any questions, leave a comment!

Using Hyperion Essbase to Report Comparable Store Sales

One of the commonly used measures in the retail industry is “comps” – comparisons of actual sales for this year versus last year.  The goal of reporting comparable store sales is to provide information on what portion of a company’s sales growth comes from increasing sales in existing stores versus opening new stores.   This metric is used to measure whether a company’s sales will continue to grow when its store base reaches a saturation point or the company slows expansion.

What are the considerations in defining comp store calculation?

  • Definition of comp store. In addition to having a store open for at least 1 year, it’s important to compare stores that have not changed significantly.  In this case, we are using square footage in the store to identify significant changes to a store.  In our example, if square footage changes by more than 25%, sales are no longer comparable to prior periods.  Also, if the status of a store changes (i.e. opening, closing, moving, temporarily closing), comp store sales are not comparable with prior periods.
  • Definition of applicable time periods. In this case, we used month to date, quarter to date, and year to date.  Each applicable time period is calculated monthly.  The applicable time period amount is calculated only for stores open during the applicable period.  For example, the June YTD amount for 2010 is only calculated for stores in existence from Jan 2009.
  • Calculation of comp sales. Most clients prefer to remove the effects of currency translation on this calculation.  In this case, only net sales are used for comp store analysis.

Implementation

The database outline for the comp sales database contains 10 dimensions. Two points of note:

  • An individual store is uniquely identified as a member in the stores dimension.
  • Comp store amounts are only calculated for the comp stores scenario.  Actual data is loaded to the comp store scenario.

Two sections of the accounts dimension are used for the comp store calculation.

The comp store control stats are used to calculate the comp status counter, which is the first determinant of whether a store is a comp store.

The comp store metrics hierarchy stores the applicable comp store amounts in local currency and USD.  Local currency comp store metrics show amounts for current year and prior year for MTD, QTD, and YTD.  USD comp store metrics show amounts at a constant exchange rate.

Approach

There are 2 different calculations for the comp store process:

  • The calculation of the comp store sales counter determines whether a store qualifies for comp store status based on square footage and store status.
  • The calculation of comp store metrics is dependent on the calculation of the comp store sales counter.  The metrics calculation determines comp store amounts.

The key processes for the comp store sales counter calculation are as follows:

  • Calculate monthly square footage amounts.  Set beginning balance equal to prior December.  Accounts calculated are: square footage, store status, and comp store counter.
  • Calculate monthly square footage change percent.
  • Calculate ending store status and comp store status counter based on inputs for square footage and change type (open, close, move).  The comp store status counter is used to identify qualification for comparable periods.
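
In calc script form, the counter logic might be sketched as follows (the member names are assumptions, and the change percent is assumed to be stored as a whole number):

"Comp Counter" (
    IF ("Store Status" == #MISSING)           /* store not open this month  */
        "Comp Counter" = #MISSING;
    ELSEIF (@ABS("Sq Ft Change Pct") > 25)    /* significant change: reset  */
        "Comp Counter" = 1;
    ELSE                                      /* otherwise increment monthly */
        "Comp Counter" = @PRIOR("Comp Counter") + 1;
    ENDIF
)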

The following is an example of how the comp store status counter logic would be applied to a store.  Note that the store comp counter is incremented monthly once a store is open, but a change in square feet of the store resets the counter.  This is to assure that sales from the 2000 square foot store are not compared with the 3000 square foot store.
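
As a hypothetical illustration of that logic (the dates and square footages here are assumed, not the original example’s):

Month      Sq Ft   Status   Comp Counter
Jan 2007   2,000   Open     1
…          2,000   Open     (increments monthly)
Oct 2007   2,000   Open     10
Nov 2007   3,000   Open     1   (square footage change > 25% resets the counter)
Dec 2007   3,000   Open     2
…          3,000   Open     (increments monthly)
Dec 2008   3,000   Open     14  (counter is 13 or more, so MTD comps apply)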

After calculating the store comp counter, the key processes for the comp store sales metrics are as follows:

  • Copy actual (a rollup scenario including general ledger amounts and adjustments) to CompStoreAnalysis  (another scenario).  This allows reporting comp store results in a single scenario.
  • Create blocks for every year based on prior year gross sales.
  • Calculate net sales current year and net sales prior year in local currency for each appropriate time period, based on comp store status counter and the applicable comp time period (MTD, QTD, and YTD).
  • To be included in QTD comps, a store must have a store status counter of 13 and have been in existence since the beginning of the current quarter last year.  For YTD comps, the store must have been in existence since the beginning of last year.
  • Calculate comp store sales in USD using the prior year rate.
  • Aggregate comp store metric amounts in the comp store analysis scenario by stores, products, geographies, and legal organization.

Note that for the sample store shown above, comparable net sales on a MTD basis would be calculated for December 2008.  Amounts would be calculated both in local currency and USD.  The USD accounts for current and prior year would use the same rate (last year’s).