Using Hyperion Essbase to Report Comparable Store Sales

One of the most commonly used measures in the retail industry is “comps” – comparisons of actual sales for this year versus last year.  The goal of comparable store reporting is to show what portion of a company’s sales comes from sales growth in existing stores versus the opening of new stores.  This metric indicates whether a company’s sales will continue to grow once the store base reaches a saturation point or the company slows expansion.

What are the considerations in defining the comp store calculation?

  • Definition of a comp store. In addition to having a store open for at least one year, it is important to compare stores that have not changed significantly.  In this case, we use square footage to identify significant changes to a store: if square footage changes by more than 25%, sales are no longer comparable to prior periods.  Likewise, if the status of a store changes (e.g. opening, closing, moving, temporarily closing), comp store sales are not comparable with prior periods.
  • Definition of applicable time periods. In this case, we used month to date, quarter to date, and year to date.  Each applicable time period is calculated monthly.  The applicable time period amount is calculated only for stores open during the applicable period.  For example, the June YTD amount for 2010 is only calculated for stores in existence from Jan 2009.
  • Calculation of comp sales. Most clients prefer to remove the effects of currency translation on this calculation.  In this case, only net sales are used for comp store analysis.

Implementation

The database outline for the comp sales database contains 10 dimensions.  Two design points are worth noting:

  • An individual store is uniquely identified as a member in the stores dimension.
  • Comp store amounts are calculated only for the comp store scenario.  Actual data is loaded to the comp store scenario.

Below are sections of the accounts dimension used for the comp store calculation.

The comp store control stats are used to calculate the comp status counter, which is the first determinant of whether a store is a comp store.

The comp store metrics hierarchy stores the applicable comp store amounts in local currency and USD.  Local currency comp store metrics show amounts for current year and prior year for MTD, QTD, and YTD.  USD comp store metrics show amounts at a constant exchange rate.

Approach

There are two different calculations in the comp store process:

  • The calculation of the comp store sales counter determines whether a store qualifies for comp store status based on square footage and store status.
  • The calculation of comp store metrics is dependent on the calculation of the comp store sales counter.  The metrics calculation determines comp store amounts.

The key processes for the comp store sales counter calculation are as follows:

  • Calculate monthly square footage amounts.  Set beginning balance equal to prior December.  Accounts calculated are: square footage, store status, and comp store counter.
  • Calculate monthly square footage change percent.
  • Calculate ending store status and comp store status counter based on inputs for square footage and change type (open, close, move).  The comp store status counter is used to identify qualification for comparable periods.

The following is an example of how the comp store status counter logic would be applied to a store.  Note that the comp counter is incremented monthly once a store is open, but a change in the store’s square footage resets the counter.  This ensures that sales from the 2,000 square foot store are not compared with sales from the 3,000 square foot store.
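
To make the counter behavior concrete, here is a minimal Python sketch of the logic described above.  It is an illustration only, not the actual Essbase calculation; the function, the field names, and the example store are hypothetical, and the 25% threshold comes from the definition section earlier.

# Illustrative sketch of the comp store status counter (not the actual Essbase calc script).
# Assumptions: one record per month with square footage; a status event (open, close, move,
# temporary close) or a square footage change of more than 25% resets the counter.

def update_comp_counter(prior_counter, prior_sq_ft, sq_ft, status_changed):
    """Return the comp store status counter for the current month."""
    if status_changed or sq_ft == 0:
        return 0                      # store not open / status event: reset the counter
    if prior_sq_ft and abs(sq_ft - prior_sq_ft) / prior_sq_ft > 0.25:
        return 0                      # significant remodel: sales no longer comparable
    return prior_counter + 1          # store open and unchanged: keep counting

# Example: a store opens at 2,000 sq ft, then expands to 3,000 sq ft.
months = [(2000, False), (2000, False), (2000, False),   # counter climbs to 3
          (3000, False),                                  # >25% change resets the counter
          (3000, False), (3000, False)]                   # counter starts over at the new footage
counter, prior_sq_ft = 0, None
for sq_ft, status_changed in months:
    counter = update_comp_counter(counter, prior_sq_ft, sq_ft, status_changed)
    prior_sq_ft = sq_ft
    print(sq_ft, counter)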

After calculating the store comp counter, the key processes for the comp store sales metrics are as follows:

  • Copy actual (a rollup scenario including general ledger amounts and adjustments) to CompStoreAnalysis  (another scenario).  This allows reporting comp store results in a single scenario.
  • Create blocks for every year based on prior year gross sales.
  • Calculate net sales current year and net sales prior year in local currency for each appropriate time period, based on comp store status counter and the applicable comp time period (MTD, QTD, and YTD).
  • To be included in QTD comps, a store must have a comp store status counter of 13 and have been in existence since the beginning of the current quarter last year.  For YTD comps, the store must have been in existence since the beginning of last year.  (A sketch of this eligibility test follows this list.)
  • Calculate comp store sales in USD using the prior year rate.
  • Aggregate comp store metric amounts in the comp store analysis scenario by stores, products, geographies, and legal organization.
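
The eligibility rules in the bullets above can be sketched roughly as follows.  This is illustrative Python, not the actual calc script; the date handling and field names are assumptions, the counter threshold of 13 is taken from the QTD rule stated above, and it is assumed here to also gate MTD.

# Illustrative sketch of comp store eligibility for MTD/QTD/YTD (not the actual calc).
# Assumptions: `counter` is the comp store status counter for the current month and
# `open_since` is the first month the store existed at comparable square footage.
from datetime import date

def quarter_start(d):
    """First month of the quarter containing date d."""
    return date(d.year, ((d.month - 1) // 3) * 3 + 1, 1)

def comp_eligible(counter, open_since, current_month):
    """Return which comp time periods (MTD/QTD/YTD) the store qualifies for."""
    prior_year = current_month.year - 1
    has_full_year = counter >= 13          # threshold of 13 from the QTD rule above
    return {
        "MTD": has_full_year,              # assumed: a full comparable year plus the current month
        "QTD": has_full_year and open_since <= quarter_start(date(prior_year, current_month.month, 1)),
        "YTD": has_full_year and open_since <= date(prior_year, 1, 1),
    }

# Example from the text: June 2010 YTD comps require existence since Jan 2009.
print(comp_eligible(counter=18, open_since=date(2009, 1, 1), current_month=date(2010, 6, 1)))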

Note that for the sample store shown above, comparable net sales on an MTD basis would be calculated for December 2008.  Amounts would be calculated both in local currency and USD.  The USD accounts for current and prior year would use the same rate (last year’s).

Hyperion Financial Management: Zero View and Default View Settings

One of the most common areas of confusion in Hyperion Financial Management (HFM) is the Scenario application setting for the Default View and Zero View.  Looking back into the history of Hyperion Solutions, you will find this setting has existed in all of the consolidation products, such as Hyperion Enterprise’s “missing values” setting under Categories.  These are not new features; they are fundamental to HFM.

ZeroViewForAdj and ZeroViewForNonadj

There are two ZeroView settings in HFM.  ZeroViewForNonadj applies to the <Entity Currency> Value dimension member.  ZeroViewForAdj applies to the <Entity Curr Adjs> Value dimension member.

The derived data that results from the ZeroView settings commonly appears in HFM as a slightly grey numeric value, as opposed to a black display found with standard data.

Before we explain what it does, let’s talk about why we need it.

Part of Hyperion Financial Management’s financial intelligence is its Account Types.  These types help to manage data flowing through the chart of accounts and support variance reporting.

  • Revenue
  • Expense
  • Asset
  • Liability
  • Balance
  • Flow
  • GroupLabel
  • Currency
  • BalanceRecurring

The Revenue, Expense and Flow account types are referred to as “flow” accounts and tend to support profit and loss reporting.  As such, they are supported by the View dimension, which allows data to be viewed, keyed, or loaded as YTD or Periodic amounts.  For example, periodic sales of $1000 entered in every month would let users view December as Periodic $1000 or YTD $12,000.
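
As a simple illustration of the View arithmetic (plain Python, not HFM code), converting between Periodic and YTD for a flow account is just a running sum in one direction and a first difference in the other:

# Periodic vs. YTD for a flow account: $1000 of sales loaded to each month.
from itertools import accumulate

periodic = [1000] * 12                                  # Jan..Dec periodic activity
ytd = list(accumulate(periodic))                        # YTD is the running total
derived_periodic = [ytd[0]] + [ytd[i] - ytd[i - 1] for i in range(1, 12)]

print(ytd[-1])               # 12000 -- December viewed as YTD
print(derived_periodic[-1])  # 1000  -- December viewed as Periodic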

Example 1:

The ZeroView setting informs the HFM Scenario how to handle data for a flow account when no data is loaded to a Period.  No data is not a zero; it is a blank, non-existent value.

Example 2:

If data was loaded from your source system for Jan as $1000, how should HFM treat Feb?  There are two options for the ZeroView setting: YTD or Periodic.  The first option is the YTD setting for the ZeroViewForNonadj attribute.

With the ZeroView setting of YTD, HFM fills the following period with a derived YTD value of zero.

Example 3 ZeroView as YTD:

Recall that flow accounts can be viewed as Periodic or YTD; so what does Feb look like as Periodic?  If there is $1000 YTD in Jan and zero YTD in Feb, there must have been a change in periodic activity.  Therefore, the Feb Periodic value would be -$1000 to arrive at YTD zero.

Example 4 ZeroView as YTD:

The second option is to set the ZeroView setting to Periodic.  This setting addresses the same properties of flow type accounts.  In this case, the following period has a zero applied as a Periodic zero, which is the activity.  This setting is common on Budget and Forecast Scenarios.  The result is that the entry flows through all subsequent periods, because the YTD value is derived from the Periodic activity.
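
Here is a minimal sketch of the two derivations, assuming only Jan was loaded with $1000 and Feb is missing from the source file (these are the grey derived values described above):

# How a missing month is derived under the two ZeroView settings (illustration only).
jan_ytd = 1000                                   # Jan loaded as $1000; Feb has no data

# ZeroView = YTD: the missing period is derived as YTD zero, so the Periodic view
# must show an offsetting -1000 in Feb to arrive back at YTD zero.
feb_ytd_ytd_setting = 0
feb_periodic_ytd_setting = feb_ytd_ytd_setting - jan_ytd            # -1000

# ZeroView = Periodic: the missing period is derived as Periodic zero, so the
# YTD view carries the $1000 forward through Feb and later periods.
feb_periodic_periodic_setting = 0
feb_ytd_periodic_setting = jan_ytd + feb_periodic_periodic_setting  # 1000

print(feb_periodic_ytd_setting, feb_ytd_periodic_setting)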

Example 5 ZeroView as Periodic:

Example 6 ZeroView as Periodic:

How does one determine which setting is correct for an application?  Typically, we analyze how the data load file is constructed.  For example, for flow accounts that may be adjusted to zero, are they included in the file?  Does a periodic version of the file supply zeroed items as a year-to-date negative offset?  The most common choice is to set the ZeroView to YTD for Actual data loads.

A good example of why we would set the ZeroView to YTD in Actual is when data is re-classed month to month or a new account is used.  In the example below, a different account is used in Feb compared to the data supplied in Jan.  The ZeroView setting of YTD will automatically assume the Feb amount is YTD zero.

Example 7 ZeroView as YTD:

If the Scenario’s ZeroView were set to Periodic, the application would incorrectly show 2000 for Feb.  The data would flow through all the periods, and in this case the Sales account should be zero for Feb.  The original account would require a -1000 Periodic or a 0 YTD entry to clear the value in Feb on the Sales account.

Example 8 ZeroView as Periodic:

The ZeroViewForAdj setting applies to how HFM journal entries behave when data is missing and how they affect future periods.

For Journal adjustments we see the same impact on the data within the <Entity Curr Adjs> Value member.  The YTD setting will essentially reverse the Journal in the following period by applying the YTD offset to the Journal.

Example 9 ZeroView as YTD:

In cases where a prior period is being adjusted, the YTD setting keeps the YTD values of future periods, such as Feb, constant.  For example, if the original value were 1000 in Jan and 1000 in Feb, the Feb YTD amount would be 2000.  A journal posted to Jan would impact the Jan results, but the Feb YTD amount of 2000 would not change.  In the example above, the Jan Periodic amount is adjusted from 1000 to 500.  Therefore, to see the impact in all periods, the journal would need to be created and posted in every period going forward.

Journal adjustments with the ZeroView setting for adjustments as Periodic will allow the impact of the journal to carry into the YTD results of future periods.  Because of this, a reversing journal entry is required to reverse the effect in a future period.
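
Using the numbers from the example above (base data of 1000 in Jan and 1000 in Feb, with a Jan journal that moves the Jan Periodic amount from 1000 to 500), here is a rough sketch of the two behaviors:

# Illustration of ZeroViewForAdj behavior for a Jan-only journal (not HFM rule code).
base_jan, base_feb = 1000, 1000
jan_adjustment = -500                    # journal posted to Jan on <Entity Curr Adjs>

# ZeroView for adjustments = YTD: the Feb adjustment is derived as YTD zero, which
# effectively reverses the journal in Feb; Feb YTD stays at 2000.
feb_ytd_with_ytd_setting = (base_jan + base_feb) + 0

# ZeroView for adjustments = Periodic: the Feb adjustment is derived as Periodic zero,
# so the -500 carries forward in the YTD view; Feb YTD becomes 1500.
feb_ytd_with_periodic_setting = (base_jan + base_feb) + jan_adjustment

print(feb_ytd_with_ytd_setting, feb_ytd_with_periodic_setting)   # 2000 1500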

Example 10 ZeroView as Periodic:

Therefore, when viewing data within HFM in a YTD or Periodic view, the grey offset values are the result of your Scenario settings for ZeroViewForNonadj and ZeroViewForAdj.  These amounts impact your YTD and Periodic results in both local and translated amounts.

Default View

The Default View setting determines which View is applied to “<Scenario View>”.  When opening a grid, form or report, this is the member that will display by default.

Because of this, it is important that a specific view be chosen when writing reports or comparing data.  The “<Scenario View>” results will vary by the design of each Scenario.

The Default View is also a concern for Rules in HFM, because it sets the basis for calculations.

As an example, if there were a Rule that simply called for “estimated bonus” to be a percent of “sales” after we attained our goal of $1,000.00, the calculation would vary by the Default View on each Scenario.

In the example, the YTD Scenario calculates a result because its Rules are driven off the YTD value of $1500.  The Periodic Scenario does not calculate the Rule because it is driven off the Periodic value of $500.
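
A rough illustration of why the Default View matters to that rule (plain Python, not HFM rule syntax; the 10% bonus rate is made up):

# The rule: accrue an estimated bonus as a percent of sales once sales reach the $1000 goal.
# Whether it fires depends on the value the Scenario's Default View hands to the rule.
GOAL, BONUS_RATE = 1000, 0.10            # goal from the example; rate is hypothetical

def estimated_bonus(sales_in_default_view):
    """Bonus accrual driven off whatever value the Default View returns."""
    return sales_in_default_view * BONUS_RATE if sales_in_default_view > GOAL else 0

print(estimated_bonus(1500))   # YTD Default View: 1500 > 1000, the rule calculates (150.0)
print(estimated_bonus(500))    # Periodic Default View: 500 < 1000, no result (0)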

Careful attention to Rules development and the utilization of the Dynamic Sub Routine in rules can help to minimize issues caused by differences in the Default View settings.

Special uses for Life Cycle Management (LCM)

In my previous post, I showed how to use LCM to back up or copy an entire planning application environment.  Here I’ll expand on that subject a bit by showing some other uses you may find handy.  This is by no means meant to be an exhaustive collection – just a few suggestions you may find useful and which may provoke ideas for other uses.

Copy single dimension from one app to another

This can be done for any dimension, including the standard Planning dimensions.  Here, to expand on the subject, we are also going to export from the “Organization” dimension in one Planning app and import to the “Entity” dimension in another.

Select the artifacts to export (no harm in copying everything).

Click thru the next screen to this one.

Since we need to change the dimension name, we must export to files, not directly to the other app.

Then click thru the remaining screens to execute the migration.

After the export finishes, go to the \Hyperion\Common\Import_export directory. Under the Username@Directory folder find the files you exported.

In the “info” directory, edit “listing.xml” changing all instances of “Organization” to “Entity”.

Now find the XML file for the dimension to be migrated with name change.

Rename to the target dimension name.

Now edit the file to change “Organization” to “Entity”.
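
If you prefer to script the rename instead of hand-editing, here is a rough Python sketch of the same find-and-replace.  The export folder path is a placeholder for your own environment, and you should keep a copy of the original files before changing them.

# Rough sketch: rename the "Organization" dimension to "Entity" in an LCM file-system
# export before importing it. The export path is a placeholder for your environment
# (\Hyperion\Common\Import_export\<user>@<directory>\<export name>).
from pathlib import Path

export_root = Path(r"D:\Hyperion\Common\Import_export\admin@Native Directory\DimExport")
old_name, new_name = "Organization", "Entity"

# 1. Find the dimension's XML file and rename it (Organization.xml -> Entity.xml).
dim_file = next(export_root.rglob(f"{old_name}.xml"))
dim_file = dim_file.rename(dim_file.with_name(f"{new_name}.xml"))

# 2. Replace the dimension name inside listing.xml and the renamed dimension file.
for target in [export_root / "info" / "listing.xml", dim_file]:
    text = target.read_text(encoding="utf-8")
    target.write_text(text.replace(old_name, new_name), encoding="utf-8")
    print(f"Updated {target}")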

In Shared Services->Application Groups->File System, open the extract and select the (newly renamed) Entity dimension.

Define Migration…

…and click thru the remaining screens to execute the migration.

Lights-out Operation

In Shared Services select the artifacts to be backed up and define migration.

We need to back it up to files so type in a folder name…

…and click thru the remaining screens until you get here.

Now, instead of clicking the Execute button, click “Save Migration Definition.”

You will get this screen…

…click “Save.”

Shared Services wants to save “MigrationDefinition.xml” where you tell it to.

You can name the file any name you want (I suggest using naming conventions to differentiate the operation being saved) and anywhere you want.

After saving the file you will get this…

…click “Close” and the backup definition will be saved.

Now look in the Automation folder where the xml file was saved.

The file has everything Shared Services needs to run the backup from the command line utility except the USERID and PASSWORD.

Edit in TextPad or other text editor and type in a Userid and password.

After running the job the password is automatically encrypted.

The job is run from an Oracle supplied process, “utility.bat.”

…and you pass it the path to the migration definition file you created above.

You should channel the output to a log file so you will have a record of success or failure.  The message below is an excerpt from that log; it lists the detailed log location and name, reports whether the process succeeded or failed, and tells you exactly where any failure occurred in the process.
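
As a sketch of what the lights-out call can look like, a small wrapper can run Utility.bat against the saved migration definition and capture the console output to a timestamped log.  Both paths below are placeholders; use the locations of Utility.bat and your MigrationDefinition.xml in your own environment.

# Sketch of a lights-out LCM run (paths are placeholders for your environment).
import subprocess
from datetime import datetime

utility = r"D:\Hyperion\common\utilities\LCM\bin\Utility.bat"          # assumed install path
definition = r"D:\Automation\PlanningBackup_MigrationDefinition.xml"   # saved earlier

log_path = rf"D:\Automation\lcm_backup_{datetime.now():%Y%m%d_%H%M%S}.log"
with open(log_path, "w") as log:
    result = subprocess.run([utility, definition], stdout=log, stderr=subprocess.STDOUT)

# The log (and the detailed LCM log it references) shows where any failure occurred.
print("LCM backup", "succeeded" if result.returncode == 0 else "FAILED", "- see", log_path)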

I hope I’ve shown you enough to get you started using LCM.  It can certainly be a valuable tool, whether you want to do one-time tasks or perform lights-out operations such as regular backups.  The important thing to remember is to test it and see what, if any, problems you will have and either fix those or work around them.

Using Oracle’s Hyperion® Life Cycle Management

What is LCM?

LCM (Life Cycle Management) is a tool which can be used to migrate Hyperion applications, cubes, repositories, or artifacts across product environments and operating systems. It is accessed through the Shared Services Console.

Does it work?

After using LCM at a few clients I think the answer is a definite YES, but there needs to be a realistic setting of expectations:  Yes, LCM has some very good and handy uses; but NO, it is not necessarily going to be a painless, simple answer to your migration and/or backup needs.

What can I do with it?

You can use it for migrations:

  • One environment to another
  • One app to another (same SS environment)
  • Selected dimensions or other artifacts

And for backups/restores, including keeping two separate environments synchronized:

  • Selected artifacts
  • Lights-out

Products which can be migrated are:

  • Shared Services
  • Essbase
  • Planning
  • Reporting
  • HFM
  • The dimensions housed in EPMA

This blog is going to concentrate on using LCM for planning application migrations although, as you can see from the list above, it can also be used for other products as well.

First I’ll show how a migration is done, using screen shots, to give a detailed look.  Then I’ll point out things to look out for including things which will cause the migration to fail — with work-arounds where possible.

To migrate an entire Planning application, you will need to copy four areas:

  1. Shared Services
  2. Essbase (for Planning, you only need the Essbase global variables; all app/DB-specific variables are migrated with the Planning application)
  3. Planning Application
  4. Reporting and Analysis (if applicable)

The order in which you export these is not important but when doing the import, the order is very important.

Some important considerations:

  • Target app can have different name from source
  • Source and destination plan types must match
    • Can be changed by editing the files
    • Target plan types must be in same order as source
  • Start year must be the same
    • Number of years doesn’t need to match
  • Base time period must be the same
  • Target app’s Currency settings must match Source
  • Standard Dimension names must match
    • Can be changed by editing the files

When exporting any application it is advisable to just export everything.  If necessary you can be selective on the import side.

Start the process by opening the Shared Services console and go to the Application Groups –>Application (in this case – Shared Services under Foundation).

In the lower part of the screen, click “Select All” and then “Define Migration”

Now go through the screens:

Leave each field with an * and Choose “Next”

Type in a file name for the export.  It is advisable that you use a naming convention for this since you will end up with (possibly multiple) files for each application.

Review the destination options & click “Next.”

Finally, review the Migration summary and click “Execute Migration.”

NOTE:  If this process is going to be re-run in a lights-out environment you should instead choose the “Save Migration Definition” button.  I’ll discuss this more fully later on.

You will get this information screen.  Click Launch Migration Status Report to actually see the migration progress.

As long as the migration is running you will get a status of In Progress.

Click Refresh to keep checking status (if desired) until you get a status of Completed or Failed.

All of the other applications can be exported this same way, each with slightly different screen sequences but generally the same process.

The primary differences will be for Planning and Essbase where, if there are other applications in the same Shared Services environment, they will be offered as possible targets for the export, in addition to the File System.  Selecting one of these will cause a direct migration from the source application to the selected target application.

After the exports are finished the LCM export files can be copied to the target server environment, if needed.  These export files can be found on the Shared Services server under \Hyperion\common\import_export\username@directory.

Copy the entire directory (in this example, Essadmin@Native Directory) to the Hyperion\common\import_export directory on the target server.

The import side is where things are more likely to be tricky.  Here you will reverse the process, selecting the export files in proper order (Shared Services, Essbase, Planning & Reporting) and importing them to whatever target is appropriate.

Start the process by logging in to the Shared Services console as the same username you used in the export process.  Under Application Groups–>File System, find the appropriate export files and click “Define Migration.”

Click through the screens, including the SS screen selecting the target application to import to.

On the destination option screen select Create/Update and increase the Max errors if desired (default = 100)…

…and run the migration.

For the Planning import select all to begin.

Click through the screens and select the planning application to import to.

And click through the remaining screens to execute the migration.

The Reporting migration is similar.  Select all the artifacts you want to import.

And go through the remaining screens to execute the migration.

In many cases, especially where you are keeping two identical environments in sync, these migrations should go smoothly and complete without error.  However, at other times, especially when doing an initial migration or one where the security will be much different from one to another, you may have to make several passes at the migration.  When even one item fails to migrate successfully, LCM will send back a status of “Failed”.  Click on that link in the status report and LCM will tell you what items failed to migrate.  All other items will usually have migrated successfully.   You will then have to figure out why the item failed and either fix the problem, work around the problem or ignore it and migrate the item another way.

Here are some things I’ve found which will cause you problems in using LCM:

  • In exporting a planning application with many substitution variables, the EXPORT failed – refusing to export all of the variables.  This was worked around by exporting only the variables and then exporting everything except the variables.
    • Alternatively, you can adjust the group count/size settings, as well as the report and log file locations, in the migration.properties file.  Default settings are usually:
    • grouping.size=100[mb]
    • grouping.size_unknown_artifact_count=10000
  • Using “All Locations” in HBR will cause failure for those forms.
  • Essbase server names—if not same in source & target, you will have to modify the import files for target name.
  • Report name length is limited to 131 characters, less the folder name.
  • Dim members “Marked for Delete” won’t migrate.  You will have to delete them using a SQL query if you want them migrated.
  • Form folders may get re-ordered on migration.  You can work around this by manually adding the folders to the target application in the proper order.  LCM will not reorder existing folders.
  • Doesn’t support parentheses ( ) in form names.  You won’t get an error indication in the export/import – the forms just won’t be there in the imported app.  You’ll have to rename the forms to get them to come over.
  • Member formulas need to be in planning – if just in Essbase they don’t come over.  If this is a one-time migration you can use the EAS migration utility to bring the outline over after the LCM migration.
  • You must manually delete Shared Services groups in the target app if you deleted them in the source app (or they will remain).
  • Reports – you must manually update the data source in the target.
  • Members don’t come over with certain special characters.
  • Doesn’t support Clusters; must use the outline as HBR location.
  • Global Variables with limits in their definition don’t work.

Well, now you should be able to use LCM and judge for yourself whether it is right for your application.  In another BLOG I’ll show how to run LCM in a lights-out mode and also how to do some modifications to the export files so you can do things like sharing dimension detail between planning applications.

Enjoy!

Can FDM Do That?

Most everyone that uses or has seen a demo of Oracle Hyperion Financial Data Quality Management (“FDM”) knows the basic functionality that FDM provides – mapping, data loading, data validation and financial statement certification.  But FDM is so much more than the basics. 

FDM is a very open architecture product readily supporting advanced customization.  As I highlighted in my last blog post, the workflow process (Import → Validate → Export → Check) can be fully automated and executed on a scheduled basis using either FDM’s internal scheduling component or any Windows batch based scheduling tool that an organization prefers.  But that’s just the tip of the proverbial iceberg. 

Any organization that has recently experienced a Hyperion product upgrade – for example, System 9 to Fusion or Enterprise to HFM – knows the pain of revalidating years of financial data.  This exercise can easily take weeks.  Not only is this process time consuming, it’s tedious and often prone to error.  More importantly, data validation can be one of the biggest risks to a project.  The need to improve seems obvious. 

To address this opportunity, we developed a custom solution that leverages HFM and FDM’s advanced features, custom scripts and batch loading. The benefits are substantial – literally tens of thousands of data points can be validated (and re-validated) in minutes – with 100% data accuracy.  This process is easily extendable not only to other Oracle/Hyperion products like Planning & Essbase but potentially to other data stores.

The benefits of this process may be obvious but let’s take a moment to think about them:

  • 100% Data Accuracy – How valuable is this to your organization in the current economic and financial market climate?  The cost of restated financials is far too great to fathom – potential for government fines, reduced shareholder equity and even loss of one’s job.
  • Shorten Implementation Timelines – How nice would it be for your project to come in on time or early?  Using this solution, you can realistically trim weeks if not months out of a project timeline. 
  • Reduced Implementation Costs – Let’s face it: in this economy, every dollar needs to pay dividends.  Whether you choose to leverage a consultant, temp, intern or internal resource to validate your data, there is a cost associated with it.  Reducing the time spent on this activity will reduce your project cost. 

I invite you to check back often as I’ll continue to discuss “outside the box” solutions that can add significant ROI to your FDM investment.   

Contributed by:
Tony Scalese, FDM Design Lead
Hyperion Certified Consultant – HFM
Ranzal & Associates
ascalese@ranzal.com

Adding Drill Through to your Hyperion Implementation – Part 2

So I have been told I blog with the frequency of a solar eclipse, and with respect to this two-part series, several interested parties have indicated via e-mail that I have done the equivalent of a network TV season-ending episode – “just wait the whole summer, folks, for the answers to these questions”. Thanks to all of you who “gently” pinged me to remind me to complete the blog.

Implementation:

So for example, take the following view one might see at the bottom level (level zero, or base level members) of an Essbase cube:

image1

Notice the accrual for Ranzal services, and that multiple transactions are available at the weekly level. A further explosion into AP would reveal the identity of the vendors and other relevant information, looking something like this: 

image3

As can be seen from the above example, relevant data from multiple tables is required for the drill through view. For the accruals that are made, there is relevant information in the journal descriptions; for the AP system, there is relevant vendor detail. The solution for this drill through is the creation of a view (called VendorFact in the above schematic) that is a union of the two data sources, filtering on data source. The query might look something like this:

/** Begin pseudo-code **/
CREATE VIEW VendorFact AS

SELECT
Company, Dept, FY, Per, Acct, Source, JournalID, JournalDesc as Desc, PostingDate as MyDate, '00000' as Vendor_ID, '00000' as Doc_Nbr, '0' as Line_Nbr, PostedBaseAmt as Amt
FROM
GLTRANS
WHERE
Source <> 'AP'

UNION

SELECT
Company, Dept, Year(Trans_Date) as FY, Month(Trans_Date) as Per, Acct, 'AP' as Source, 'NONE' as JournalID, LineDesc as Desc, Trans_Date as MyDate, Vendor_ID, Doc_Nbr, Line_Nbr, Trans_Amt as Amt
FROM
AP
/** End pseudo-code **/

The following view might result:

image4

With the required view in place, a drill through query could then be created in Hyperion Web Analysis, using the relevant GL chart fields (Company, Department, Fiscal Year, Fiscal_Period, and Account) as filters in the selection query as follows:

SELECT * FROM VendorFact
WHERE Company = [CompanyToken] and Dept = [DeptToken] and Acct = [AccountToken] and FY = [FYToken] and Per = [Month]

Once this drill through query had been created, a drill-link could be added as a menu item on the Summary Expense analyzer view that was directly querying the Essbase database. When the user had drilled down to the bottom of the cube, the next action would open up the new Web Analysis report, pass the relevant parameters to the query, and display a relational grid like the one above.

To give you a flavor of what this might look like, check out the example below from another type of application…..

Step 1 – Create the Summary “Essbase” View, and establish Drill Links to target detail report

  • The report is set up so that all dimensions except currency must be selected at the lowest level. This is done as a sample and to constrain the report so that it returns only the expected rows.
  • To drill through, first select data down to level 0 for all dimensions. Next double click the actual data cell you want to drill into.
  • The drill through report will be initialized with the parameters passed. If dimensions are not at lowest level, a “no data available” message will appear on the drill through.
  • Drill Link Options Configuration
    This is the configuration that points the report to the drill through report. It is accessed by opening the report, right-clicking in a cell, and selecting “Drill” and “Drill Link Options…”

screen1

  • The drill link options page pops up. You specify the name of the report you want to open, and what to pass as the “WHERE” clause to the next report.

screen2


Step 2 – Create the detailed Web Analysis view accessing the relational data

  • The drill through report is accessed from the FinPlan report based on the dimensionality that is picked. There is a 10,000 row limit on drill-throughs, since any set of detailed transactions that can be drilled to should never exceed more than a few hundred records. If this limit were not enabled, opening the drill through report directly would fail because too many records would be loaded.
  • SQL Query Configuration
    This screen is accessed by right clicking on the Drill Through Report Grid and selecting “Edit Query…”
  • The JDBC-ODBC bridge driver is used to connect back to the Hyperion data mart housing the drill through data. Drill through users will leverage a shared drill through connection. The row limit and fetch size are also configured here.

screen3

  • Click the “Query Builder” button in the screen above and another dialog box opens. This is where the drill through mappings are configured. Click Mappings and notice that all dimensions are configured to their drill through counterparts in the vw_GL_Transactions_DT view.

screen4

  • When executing the drill through, an output like this can be generated.

screen5

  • As a final note, in constructing the view, particular care must be paid to the structure of the member names in the Essbase outline. For example, core chart fields such as Departments and Accounts are often prefixed or suffixed to make them unique (e.g. instead of “000610” one might have “Dept_000610”). In constructing the drill through view, the member names must match exactly those defined in Essbase in order for the filtering aspects of the queries to execute properly.
  • Other Drill Through Choices
    One problem with Essbase is that it has multiple ways to do everything. For example, how many ways are there to load data? (Answer: for block storage cubes, flat file via rules file; SQL interface with a rules file; EIS data load via ODBC connection; lock and send via Excel; lock and send via Smart View; Planning web forms; FDM adapter; DIM adapter; ODI adapter; HAL adapter; and I am sure I missed a few.) There are two command line automation interfaces, for goodness sake (MaxL and Esscmd). Transaction drill through is no different. Approach-wise, the one consistent thought is this: never drill back to the live transaction tables. OK, so never is a pretty strong word, but as a general rule of thumb, doing so is not a good idea for the following reasons:
  • Your Essbase cube is a snapshot of data from a load. Querying a live database for open periods means the sum of your details may not equal your account balances.
  • Performance – transaction databases are optimized for write-based operations and are highly normalized. As we have shown in the above example, creating denormalized views for specific queries will make your DBAs and end users happier.

Now, technology gets better (hardware and software), and with some of the new items coming from Oracle out of the box, they are trying to get you to a place where EPM apps are more real-time, so the rules above won’t always apply – but for now, they are a good starting point.

In general when deciding on drill through options, you want to ask the following questions:

  • What products do I need to drill through from?
         o Web Analysis
         o Interactive Reporting
         o Financial Reports
         o Excel Add-In
         o Smart View
         o OBIEE
  • What is the type of content I want to drill through to?
         o Transactions in a relational database
         o Some other content served up via a URL (for example, very clever way to serve up document images)
  • What level of the source cube do I need to drill through from?
         o Either level zero, or the top of a dimension
         o Every level of a non-ragged hierarchy
         o Custom groups and ragged alternate rollups
  • What release of the software are you using (Fusion vs. 9 vs. 7.x)?

For example, if you want to enable drill through from every product and need to support ragged or standard hierarchies, you are looking at an Essbase Integration Services or Essbase Studio solution. Ranzal has some creative techniques for enabling drill through via EIS on cubes not built with EIS (thanks to our handy outline extractor), which makes it useful for Planning. But once your concern over “newness” has abated, Essbase Studio allows you to enable drill through the right way – build your cube first, then add the drill through later on. Not the other way around.

On the other hand, if you are looking at something quick and dirty, a Web Analysis or IR view in the short term can get you where you need to be.

Either way, Oracle, particularly with its continued emphasis on OBIEE/Essbase integration, is looking to provide more out of the box options to enable drill through. For example, if you haven’t seen it, check out the Ranzal webinar on Financial Data Management; the latest releases are geared to provide drill back from Planning to FDM, and then ultimately if you load from Oracle Financials, drill all the way back to the ERP.

Closing
This blog was meant to give you an idea of the options and approaches for enabling drill through on your Hyperion application. While it’s great to have options, remember the golden rule – when you make your choice, know WHY you chose that path. Design is basically the optimization of an approach against one set of requirements over another.

Until next time, good luck.

Authored by:
Mike Killeen – Ranzal and Associates
mkilleen@ranzal.com

For more than eleven years, Mike Killeen has scoped, designed, and implemented Planning & Essbase applications for Ranzal & Associates, an award winning, preferred consulting partner of Oracle Hyperion. Currently serving as practice director, Mike provides guidance to Hyperion product management, Ranzal customers, and internal consulting resources alike on best practices related to the implementation of Hyperion System 9. Mike is a certified Hyperion Professional, with certifications in Essbase, Planning, and HFM. He graduated cum laude from Cornell University with a degree in engineering, and received his MBA from the University of Connecticut.

BI for the BlackBerry Crowd

During the recent COLLABORATE 2009 conference, Ranzal was selected to present a session showing how to use the reporting tools from either traditional Hyperion or Oracle (formerly Siebel Analytics) to deliver content to smart phones.  The session started with an overview of smart phones, methods of delivering content to those phones, and some potential pitfalls and considerations, such as: what if a user loses their phone?  What kind of security policies need to be in place?  Then a couple of quick demonstrations in Hyperion were given, including a few tips and tricks on formatting, all done using Interactive Reporting and Workspace.  Lastly, a couple of quick demos in OBIEE Answers were provided to the attendees.

The demos were done in Hyperion version 9, although the content is pertinent to version 11 as well.  Common themes in the demonstrations were a focus on timely information (i.e. there is no need to send a monthly report to a cell phone) and a focus on exceptions rather than whole data sets (cell phone reporting should focus on a problem that needs attention rather than delivering an entire dashboard or report).

A copy of the presentation from COLLABORATE 2009 can be found at ranzal.com.