Oracle Business Intelligence Cloud Service (BICS) September Update

The latest upgrade for BICS happened last week and, while there are no new end-user features, it is now easier to integrate data. New to this version is the ability to connect to JDBC data sources through the Data Sync tool.  This allows customers to set up automated data pulls from Salesforce, Redshift, and Hive, among others.  In addition to these connections, Oracle RightNow CRM customers can pull directly from RightNow reports using Oracle Data Sync.  Finally, connections between on-premises databases and BICS can be secured using Secure Sockets Layer (SSL) certificates.

After developing a custom script that used API calls to pull data from Salesforce, I am excited about the ability to connect directly to Salesforce with Data Sync. A direct connection to the Salesforce database allows you to search and browse for relevant tables and import their definitions with ease:


Once the definitions have been imported, standard query clauses can be used to include only relevant data, perform incremental ETLs, and further manipulate the data.
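As an illustration, a Salesforce pull could be limited to recently modified Opportunity records with a SOQL filter along these lines (the object and fields are standard Salesforce ones, but the cutoff literal is a placeholder; in practice the incremental logic comes from how the Data Sync job is configured):

SELECT Id, Name, Amount, CloseDate, LastModifiedDate
FROM Opportunity
WHERE LastModifiedDate > 2016-09-01T00:00:00Z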

While there are no new features for end users, this is a powerful update when it comes to data integration. Using APIs to extract data from Salesforce meant that each extraction query had to be written by hand, which was time consuming and prone to error.  With these new data extraction processes, implementing BICS and integrating data become much faster, furthering the promise of Oracle Cloud technologies.

A Comparison of Oracle Business Intelligence, Data Visualization, and Visual Analyzer

We recently authored The Role of Oracle Data Visualizer in the Modern Enterprise, in which we referred to both Data Visualization (DV) and Visual Analyzer (VA) as Data Visualizer.  This post addresses readers’ inquiries about the differences between DV and VA as well as how both compare to Oracle Business Intelligence (OBI).  The following sections describe the OBI and DV/VA product families and present a matrix comparing each solution’s capabilities.  Finally, some use cases for DV/VA projects versus OBI are outlined.

For the purposes of this post, OBI will be considered the parent solution for the on-premises Oracle Business Intelligence solutions (including Enterprise Edition (OBIEE), Foundation Services (BIFS), and Standard Edition (OBSE)) as well as Business Intelligence Cloud Service (BICS). OBI is the platform thousands of Oracle customers have come to rely on for robust visualizations and dashboard solutions from nearly any data source.  While the on-premises solutions are currently the most mature products, BICS is expected to eventually become Oracle's flagship product, at which time all features are expected to be available in the cloud.

Likewise, DV/VA will be used to refer collectively to Visual Analyzer packaged with BICS (VA BICS), Visual Analyzer packaged with OBI 12c (VA 12c), Data Visualization Desktop (DVD), and Data Visualization Cloud Service (DVCS). VA was initially introduced as part of the BICS package but has since become available as part of OBIEE 12c (the latest on-premises version).  DVD was released early in 2016 as a stand-alone product that can be downloaded and installed on a local machine.  Recently, DVCS was released as the cloud-based version of DVD.  All of these products offer data visualization capabilities similar to OBI's but significantly enhance the way users interact with their data.  Compared to OBI, the interface is even more simplified and intuitive, which is an accomplishment for Oracle considering how easy OBI is to use.  Reusable, business process-centric dashboards are available in DV/VA but are referred to as DV or VA Projects.  Perhaps the most powerful feature is the ability for users to mash up data from different sources (including Excel) to quickly gain insight they might otherwise have spent days or weeks manually assembling in Excel or Access.  These mashups can be used to create reusable DV/VA Projects that can be refreshed through new data loads in the source system and by uploading updated Excel spreadsheets into DV/VA.

While the six products mentioned group nicely into these two categories, the matrix below outlines the differences between each product, and the sections that follow provide commentary on some of the features.


Table 1:  Product Capability Matrix

Advanced Analytics provides integrated statistical capabilities based on the R programming language and includes the following functions:

  • Trendline – This function plots a linear or exponential fit through noisy data to indicate a general pattern or direction for time series data. For instance, while revenue fluctuates noisily over these three years, the Trendline plot reveals a slowly increasing general trend:

Figure 1:  Trendline Analysis


  • Clusters – This function attempts to classify scattered data into related groups. Users are able to determine the number of clusters and other grouping attributes. For instance, these clusters were generated using Revenue versus Billed Quantity by Month:

Figure 2:  Cluster Analysis


  • Outliers – This function detects exceptions in the sample data. For instance, given the previous scatter plot, four outliers can be detected:

Figure 3:  Outlier Analysis


  • Regression – This function is similar to the Trendline function but correlates relationships between two measures and does not require a time series. This is often used to help create or determine forecasts. Using the previous Revenue versus Billed Quantity, the following Regression series can be detected:

Figure 4:  Regression Analysis
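For readers working in OBI 12c Answers rather than DV/VA, comparable functions are exposed in the 12c logical SQL layer. As a rough sketch of a linear trendline expression (the syntax below is paraphrased from our recollection of the 12c documentation, and the column names are illustrative, so verify against your version):

TRENDLINE(revenue, (calendar_year, calendar_month) BY (product), 'LINEAR', 'VALUE')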


Insights provide users with the ability to embed commentary within DV/VA projects (except in VA 12c). Users take a “snapshot” of their data at a certain intersection and attach an Insight comment.  These Insights can then be associated with each other to tell a story about the data and shared with others or assembled into a presentation.  For readers familiar with Hyperion Planning, Insights are analogous to Cell Comments.  OBI 12c (as well as 11g) offers the ability to write comments back to a relational table; however, this capability is not as flexible or robust as Insights and requires intervention by the BI support team to implement.


Figure 5:  Insights Assembled into a Story


Direct connections to a Relational Database Management System (RDBMS) such as an enterprise data warehouse are now possible with some of the DV/VA products. (For the purposes of this post, inserting a semantic or logical layer between the database and the user is not considered a direct connection.)  The cloud-based versions (VA BICS and DVCS) can only connect to other cloud databases, while DVD allows users to connect to an on-premises or cloud database.  These connections will typically be created and configured either by the IT support team or by analysts familiar with the data model of the target source and with SQL concepts such as joins between relational tables.  (Direct connections using OBI are technically possible; however, they require users to manually write the SQL that extracts the data for their analysis.)  Once these connections are created and the correct joins are configured between tables, users can further augment their data with mashups.  VA 12c currently requires a Subject Area connected to an RDBMS to create projects.

Leveraging OLAP data sources such as Essbase is currently only possible in OBI 12c (as well as 11g) and VA 12c. These data sources require the OLAP cube to be exposed as a Subject Area in the Presentation layer (in other words, there is no direct connection to OLAP data sources).  OBI is very mature here and offers robust mechanisms for interacting with the cube, including drillable hierarchical columns in an Analysis.  VA 12c currently exposes a flattened list of hierarchical columns without a drillable hierarchical column.  As with direct connections, users can mash up their own data with the cubes to create custom data models.

While the capabilities of the DV/VA product set are impressive, the solution currently lacks some key capabilities of OBI Analyses and Dashboards. The most noticeable gaps are the inability to:

  • Create the functional equivalent of Action Links, which allow users to drill down or across from an Analysis
  • Schedule and/or deliver reports
  • Customize graphs, charts, and other data visualizations to the extent offered by OBI
  • Create Alerts which can perform conditionally-based actions such as pushing information to users
  • Use drillable hierarchical columns

At this time, OBI should continue to be used as the centerpiece of enterprise-wide analytical solutions that require complex dashboards and other capabilities. DV/VA is better suited for analysts who need to unify discrete data sources in a repeatable and presentation-friendly format using DV/VA Projects.  As mentioned, DV/VA is even easier to use than OBI, which makes it ideal for users who want an analytics tool for rapidly pulling together ad hoc analyses.  As discussed in The Role of Oracle Data Visualizer in the Modern Enterprise, enterprises reaching for new game-changing analytic capabilities should give the DV/VA product set a thorough evaluation.  Oracle releases regular upgrades to the entire DV/VA product set, and we anticipate many of the noted gaps will be closed in the future.

Oracle Business Intelligence – Synchronizing Hierarchical Structures to Enable Federation

More and more Oracle customers are finding value in federating their EPM cubes with existing relational data stores such as data marts and data warehouses (for brevity, “data warehouse” will refer to all relational data stores). This post explains the concept of federation, explores the consequences of allowing hierarchical structures to fall out of synchronization, and shares options for enabling this synchronization.

In OBI, federation is the integration of distinct data sources to allow end users to perform analytical tasks without having to consider where the data is coming from. There are two types of federation to consider when using EPM and data warehouse sources:  vertical and horizontal.  Vertical federation allows users to drill down a hierarchy and switch data sources when moving from an aggregate data source to a more detailed one.  Most often, this occurs in the Time dimension whereby the EPM cube stores data for year, quarter, and month, and the relational data sources have details on daily transactions.  Horizontal federation allows users to combine different measures from the distinct data sources naturally in an OBI analysis, rather than extracting the data and building a unified report in another tool.

Federation makes it imperative that the common hierarchical structures be kept in sync. To demonstrate the issues that can occur during vertical federation when the data sources are not synchronized, take the following hierarchies in an EPM application and a data warehouse:

Figure 1: Unsynchronized Hierarchies


Notice that Colorado falls under the Western region in the EPM application, but under the Southwestern region in the data warehouse. Also notice that the data warehouse contains an additional level (or granularity) in the form of cities for each region.  Assume that both data sources contain revenue data.  An OBI analysis such as this would route the query to the EPM cube and return these results:

Figure 2: EPM Analysis – Vertical Federation


However, if the user were to expand the state of Washington to see the results for each city, OBI would route the query to the data warehouse. When the results return, the user would be confronted with different revenue figures for the Southwest and West regions:

Figure 3: Data Warehouse – Vertical Federation


When the hierarchical structures are not aligned between the two data sources, irreconcilable differences can occur when switching between the sources. Many times, end users are not aware that they are switching between EPM and a data warehouse, and will simply experience a confusing reorganization in their analysis.

To demonstrate issues that occur in horizontal federation, assume the same hierarchies as in Figure 1 above, but the EPM application contains data on budget revenue while the data warehouse contains details on actual revenue. An analysis such as this could be created to query each source simultaneously and combine the budget and actual data along the common dimension:

Figure 4: Horizontal Federation


However, drilling into the West and Southwest regions will result in Colorado becoming an erroneously “shared” member:

Figure 5: Colorado as a “Shared” Member


In actuality, the mocked-up analysis above would more than likely result in an error, since OBI would not be able to match the hierarchical structures during query generation.

There are a number of options for synchronizing hierarchical structures across EPM applications and data warehouses. Many organizations manually maintain their hierarchical structures in spreadsheets and text files, often located on an individual’s desktop.  It is possible to continue this manual maintenance; however, these dispersed files should be centralized, a governance process defined, and the EPM metadata management and data warehouse ETL processes redesigned to pick up the centralized files.  This method is still subject to errors and is inherently difficult to properly govern and audit.  For organizations already using Enterprise Performance Management Architect (EPMA), a scripting process can be implemented that extracts the hierarchical structures to flat files.  A follow-on ETL process to move these hierarchies into the data warehouse will also have to be implemented.

The best-practice solution is to use Hyperion Data Relationship Management (DRM) to manage these hierarchical structures. DRM boasts robust metadata management capabilities coupled with a system-agnostic approach to exporting this metadata.  DRM’s most valuable export method pushes directly to a relational database.  If a data warehouse is built in tandem with an EPM application, DRM can push directly to a dimension table that can then be accessed by OBI.  If a data warehouse is already in place, existing ETL processes may have to be modified or a table devoted to the dimension hierarchy created.  Ranzal has a DRM accelerator package for synchronizing hierarchical structures between EPM and data warehouses that is designed to work with our existing EPM application DRM implementation accelerators.  Using these accelerators, Ranzal can perform an implementation in as little as six weeks that provides metadata management for the EPM application, establishes a process for keeping the EPM and data warehouse hierarchies synchronized, and federates the data sources.

While the federation of EPM and data warehouse sources has been the primary focus, it is worth noting that two EPM cubes or two data warehouses can also be federated in OBI. For many of the reasons discussed previously, data synchronization processes will have to be in place to enable this federation, and the solutions described above for maintaining metadata synchronization can likely be adapted to it.

The federation of EPM and data warehouse sources allows an enterprise to create a more tightly integrated analytical solution. This tight integration allows users to traverse the organization’s data, gain insight, and answer business-essential questions at the speed of thought.  As demonstrated, mismanaging hierarchical structures can result in an analytical solution that produces unexpected results and harms user confidence.  Enterprise solutions often need enterprise approaches to governance; therefore, it is imperative to understand and address shortcomings in hierarchical structure management.  Ranzal has deep knowledge of EPM, DRM, and OBIEE, and of how these systems can be implemented to work tightly together to address an organization’s analytical and reporting needs.

Undocumented Data Export Feature in Oracle Hyperion PBCS (Planning and Budgeting Cloud Service)

In response to companies looking for more decentralized services with less IT overhead, Oracle has launched the Planning and Budgeting Cloud Service (PBCS). PBCS is a hosted version of the Oracle Hyperion Planning and Data Management/Integration (FDMEE) tools with a particular focus on a completely online interface.

From a functional perspective, this is an ideal situation: to have near-full capabilities of an on-premise solution without the infrastructure maintenance concerns. Practically, though, there are some holes to fill as Oracle perfects and grows the solution.

One of the main areas of concern has been the integration of data into and out of PBCS. Data Management (a version of FDMEE) is the recommended tool for loading flat-file data into the system, and it is also possible to load directly to Essbase with perfectly formatted files. Getting files out of the system, on the other hand, has not been so straightforward. Without access to the Essbase server, exporting files proves impractical. Companies often need data exports from Essbase for backups, integrations with other systems, or review. PBCS does not appear to have a native way to extract Level Zero (Lv0) data on a regular basis so that it can be easily copied out of the system and used elsewhere.

Despite this, the DATAEXPORT command still exists in the PBCS world. How, then, could it be used to get a needed file?

It actually begins just as it would in a normal on-premises application: by creating a Business Rule to do a data export. This can be done manually, but it is recommended to use the System Template to make sure everything is set up correctly.
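At its core, such a rule reduces to a standard Essbase data export script. A minimal sketch might look like the one below (the FIX members and file name are placeholders only, not the specific settings PBCS requires):

/* Sketch only: FIX members and file name are placeholders */
SET DATAEXPORTOPTIONS
{
    DataExportLevel "LEVEL0";
    DataExportDynamicCalc OFF;
    DataExportColFormat ON;
    DataExportOverwriteFile ON;
};
FIX ("Actual", "Final", "FY16")
    DATAEXPORT "File" "," "Lv0Export.txt";
ENDFIX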


When setting up the location to export the file to, it should be configured as follows:



When this is done, a user can then navigate over to the Inbox/Outbox Explorer and see the file in there:



And that is really all there is to it! With a business rule in place, the entire process can be automated using EPMAutomate (EPMAutomate and recommendations for an automation engine/methodology will be discussed in a later post) and a batch scripting client to do a process that (a sketch follows the list):

  • Deletes the old file
  • Runs the business rule to do the data export
  • Copies the file off of PBCS to a local location
  • Pushes the file to any other needed location
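A minimal batch sketch of those four steps might look like this (the URL, identity domain, credentials, rule name, file name, and destination are all placeholders):

rem Sketch only: credentials, URL, domain, rule, and file names are placeholders
call epmautomate login svc_planning MyPassword https://planning-mydomain.pbcs.us2.oraclecloud.com mydomain
rem 1. Delete the old export file from the Outbox
call epmautomate deletefile Lv0Export.txt
rem 2. Run the business rule that performs the data export
call epmautomate runbusinessrule ExportLv0Data
rem 3. Copy the file off of PBCS to a local location
call epmautomate downloadfile Lv0Export.txt
rem 4. Push the file to any other needed location
copy Lv0Export.txt \\fileserver\pbcs_backups\
call epmautomate logout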

The one important thing to note is that, as of the April 2015 PBCS patch, all files in the Inbox/Outbox Explorer (along with any files in Application Management (LCM)) that are older than two months are automatically deleted. As such, if these files are being kept for archive purposes, they must be backed up offline in order to be preserved.

Default and User Friendly Prompting With BI Publisher

As mentioned in the previous post, Dynamic Report Grouping with Oracle BI Publisher, Edgewater Ranzal is working with a client to convert XML Publisher reports to BI Publisher reports. As part of this initiative, we began looking for opportunities to improve the user interface and to create a standard methodology that report developers could utilize in the future. One of the initial areas we focused on was the prompting feature. To this end, we concentrated on:

  • Presenting prompts to the user within the BI Publisher tool
  • Displaying user-entered prompt values within the report
  • Creating a methodology of implementation for report developers

As expected, many of the reports had time prompts (date, period, or year), but the existing reports did not have default prompt values.  Although it is not published in any Oracle documentation we have seen, Oracle offers five functions that can be inserted into the Default Value option of a parameter.
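Based on the functions referenced later in this post and on BI Publisher's documented behavior for date parameters, the five are most likely:

  • {$SYSDATE()$} – the current system date
  • {$FIRST_DAY_OF_MONTH()$} – the first day of the current month
  • {$LAST_DAY_OF_MONTH()$} – the last day of the current month
  • {$FIRST_DAY_OF_YEAR()$} – the first day of the current year
  • {$LAST_DAY_OF_YEAR()$} – the last day of the current year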


*Note that you also have to set the Data Type to Date for these parameters. 

Simple numeric calculations can be performed with these functions to add some flexibility.  For instance, the previous day’s date would be written as

{$SYSDATE() - 1$}

By using these functions in conjunction with the Date String Format in the parameter options section, a variety of date value defaults can be displayed in the prompting section of the report. The following table is a sample of the prompts, Default Value, and Date Format Strings that were deployed at the client:

BI Publisher post 2 1

It is very important to understand that, regardless of the Date Format String setting, the actual value used in the date functions is the full date string plus an optional number of days added or subtracted. For instance, if the Default Value is set to {$FIRST_DAY_OF_YEAR() + 1$} (first day of the year plus one) and the Date Format String is set to MM, the user would still see 01 as the default value because the actual value generated (and then converted to the month number) is 20XX-01-02T00:00:00.000+HH:00 (Jan 2, 20XX).

Because the optional numeric value in the function refers only to days, and no logic can be written into the Default Value function, there is a natural limitation that prohibits generating anything beyond a period and/or year plus or minus one. For instance, if a client wants a prompt default value of two years ago, logic cannot be written to determine whether the two-year span includes a leap day and therefore whether to subtract 730 or 731 days from the first day of year function (or the system date function, depending on your preference).

Understandably, this problem only arises on the handful of days affected by a leap year; however, the same logic illustrates the difficulty of going back two or more months from any date function because of the variable number of days in a month. We observed an even more complicated version of this issue when the client wanted the default values for a period range to equal the previous quarter (i.e. during Q3, From Period defaults to 04 and To Period defaults to 06). Depending on the current period, the From Period needs to default from three to five periods ago and the To Period from one to three periods ago. Further exacerbating this problem was the year prompt, which, during Q1, needs a default value of the previous year.

The final piece of the puzzle when using any parameter with the date data type is realizing that the bind value passed to your data model is the full date/time string. Our client exclusively used SQL in their data models; therefore, it was only a matter of using Oracle SQL’s native TO_CHAR function to convert the date/time string to a relationally comparable value, as such:

BI Publisher post 2 2
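As a generic sketch of the same idea (the table, column, and parameter names here are illustrative, not the client's):

-- Illustrative only; names are placeholders
SELECT gl.account, gl.amount
FROM   gl_balances gl
WHERE  gl.period_num  = TO_CHAR(:PRMFROMDATE, 'MM')
AND    gl.fiscal_year = TO_CHAR(:PRMFROMDATE, 'YYYY')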

The Ranzal team then looked to streamline and simplify the interaction between parameters, parameter input requirement evaluation, and the RTF templates. The client’s reports had up to twelve parameters that required user input, and they used XSLT logic to evaluate whether or not users had supplied values. As mentioned in previous posts, XSLT is not a robust language when it comes to logical evaluations; after all, XSLT was designed to consume XML documents and output new documents (in this case, RTF-based reports). Because of these limitations, the initial RTF templates used the following logic (white space added for clarity):

BI Publisher post 2 3

Using this method, each parameter is evaluated until a null value is found, and then the remaining parameters are evaluated for null values. When the XSLT consumes the XML, each required parameter the user has not entered a value for produces an additional warning message line. From a developer’s point of view, each additional required parameter requires additional lines of code. While the example above has only four required parameters, reports with many required parameters become quite convoluted and difficult to maintain.

Ranzal again turned to the logic processing capabilities of Oracle SQL. Within the data model, we created a new data set that produces a parameter status column (named PARAM_STAT) by looking at the bind values passed from the BI Publisher parameters. We came up with the following SQL template to generate a more succinct warning message within the column value PARAM_STAT (note that n denotes the number of required report parameters):

BI Publisher post 2 4

There is an argument for creating a SQL statement that concatenates all missing parameter names with a comma and then uses logic to correct the punctuation; however, we felt that, from a reusability standpoint, it would be best to compartmentalize the statement using the WITH TABLE1 clause. Using the above SQL template, report developers merely have to update the following lines:

  • 4 – 7:  Data model parameter names (i.e. :PRMBU) and report parameter names (i.e. Business Unit)
  • 10:  Data model parameter names (i.e. :PRMBU)
  • 15 – 20:  Replace the PARAM_COUNT comparison values (n, n – 1, and n – 2)

Using the example above with the required parameters for year, period, business unit, and ledger, the following SQL statement was generated:

BI Publisher post 2 5
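To give a flavor of the approach without reproducing the client's template, a simplified sketch might look like the following (the parameter names match the example above, but the message-building logic is reduced to its essentials):

-- Sketch only; the deployed template handles punctuation more carefully
WITH TABLE1 AS (
  -- Count how many required parameters are missing
  SELECT CASE WHEN :PRMYEAR   IS NULL THEN 1 ELSE 0 END
       + CASE WHEN :PRMPERIOD IS NULL THEN 1 ELSE 0 END
       + CASE WHEN :PRMBU     IS NULL THEN 1 ELSE 0 END
       + CASE WHEN :PRMLEDGER IS NULL THEN 1 ELSE 0 END AS PARAM_COUNT
  FROM DUAL
)
SELECT CASE
         WHEN PARAM_COUNT = 0 THEN NULL  -- all required values supplied
         ELSE 'Please enter values for the following required parameters: '
              || CASE WHEN :PRMYEAR   IS NULL THEN 'Year ' ELSE '' END
              || CASE WHEN :PRMPERIOD IS NULL THEN 'Period ' ELSE '' END
              || CASE WHEN :PRMBU     IS NULL THEN 'Business Unit ' ELSE '' END
              || CASE WHEN :PRMLEDGER IS NULL THEN 'Ledger ' ELSE '' END
       END AS PARAM_STAT
FROM TABLE1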

Using this parameter status value results in a much more succinct XSLT template that needs only to evaluate whether PARAM_STAT has a value (white space added for clarity):

BI Publisher post 2 6

The client has hundreds of BI Publisher reports and plans to continue to develop additional reports as their Oracle Business Intelligence platform becomes the standard reporting tool. By using the SQL template along with the simplified RTF template, the real work becomes creating the table, pivot table, or chart within the RTF template.  Fortunately, the Ranzal team was able to create an Excel-based VBA macro that automates the generation of the majority of the client’s templates. We will discuss this tool in a later post.

These two examples demonstrate the Ranzal team’s commitment to proactively examining current processes and looking for opportunities for improvement.  As we worked through the technical details of this implementation, we carefully balanced the idea of a user-centered experience against the often-competing need for a simplified methodology and process for report developers. To accomplish the latter, we went through several phases of technical refinement, demonstrated the process to developers, and provided thorough documentation. This ensures that when the time comes to turn maintenance of these reports over to the client, there is a complete knowledge transfer as well.

Ushering in the New Era of Hyperion Strategic Finance

Welcome to the first installment of our new Hyperion Strategic Finance (HSF) blog series. Edgewater Ranzal’s HSF team has been working closely with Ranzal’s other Hyperion practices (HFM, FDM, Planning/Essbase, etc.) to home in on how HSF can be utilized to its full potential alongside the other product offerings.  As part of that process, we felt it was important to start a dialogue to share some of our insights on various topics, ranging from new product release info to cutting-edge integration best practices.  We hope this series will be a good resource for you and your organization on your HSF journey.

Given the major changes to HSF in the 11.1.2.2 release, and the exciting product roadmap ahead of us, I thought it would be good to start the series by discussing “The New Era of Hyperion Strategic Finance.”

So…What’s new?

With the 11.1.2.2 release comes probably the most significant change to the product since it was acquired by Hyperion.  At the core of the change is a shift in the user interface from a traditional thick client to an Excel-based Smart View add-in.  The enhancement enables the end user to perform the majority of HSF modeling activities directly in an Excel workbook.  With this change, the legacy reporting in the HSF client has been REPLACED with Excel reporting via Smart View.  This means that those who choose to implement will be required to use the Smart View-based reporting and/or export the data to an external database (i.e. Essbase) to utilize other reporting tools (i.e. Financial Reports, OBIEE, etc.).  For any current HSF users looking to upgrade, it is important to note that the upgrade process automatically converts existing HSF reports into the Smart View format; however, existing charts/graphs will need to be rebuilt, and some formatting issues have been identified that may require re-work.  While this does introduce a big change for end users, it also presents a great opportunity by opening up native Excel functionality such as Excel graphing, conditional formatting, highlighting sums, and the group/ungroup data feature. To view the full list of new features, you can look through the readme.


Why make the change?

Some existing clients have asked me why this change was made in the first place.  The answer is really two-fold:

1. Tighter Integration with Hyperion Planning: If there has been one consistent theme throughout all of my meetings and conversations with Oracle’s HSF/Planning development team, it has been a desire to continuously improve the integration between Hyperion Planning and HSF.  The integration I’m referring to doesn’t stop at data; it also includes seamlessly integrating the end-user experience.  Everything from selling the tools as a combined solution (i.e. bundled pricing) to having both HSF and Planning users interact with the tools in a similar manner (i.e. Smart View) has been or will be addressed.  Below, I outline some additional planned roadmap items that continue this theme.

2. Reporting: All legacy HSF users who have spent time creating, formatting, and modifying HSF reports are well aware that reporting has NOT been an area of strength prior to this release.  Both Oracle and our implementation team consistently receive requests to have HSF reporting operate “more like Excel.”  This move is a direct response to those requests, and it is definitely a big step in the right direction.

Current Challenges:

As with any major change or new release, there are going to be some growing pains, and this HSF release is no different.  We have been working closely with Oracle over the past couple of months to identify issues, make recommendations, and test fixes that have been applied.  The main point I want to make clear is that, from a functional perspective, the tool has the EXACT same capabilities.  You are simply changing the way you do things, not what can be done.  There is a bit of a learning curve to understand the new menu bar, shortcut keys, etc., but in general it still functions much like its predecessor, with the added benefit of the Excel look and feel.

A couple of things to look out for if you will be implementing AND going live before the next patch set or release becomes available:

1. Issues with large numbers of active reports:  We have experienced some performance issues when working with a file that has numerous reports (standard or freestyle) open at once.  This includes a slower check-out/check-in process, flickering upon calc/refresh, and occasional freezing of the application.  Currently, the product seems to work more smoothly with just the accounts tab or one to two reports open; however, fixing this is at the top of Oracle’s priority list, and we expect it to be addressed sooner rather than later.

2. Renaming of Time Periods: You want to avoid renaming the default time periods in the entities (i.e. changing 2013 to FY13).  In the current release this can cause some calculation issues in the system regarding the funding routine.  Again, this has been identified by Oracle and is expected to be resolved in the next patch set or release.

3. Smart View Parity: There are some features that are not yet in the Smart View interface and may require users to revert to the old HSF client.  These include the debt/depreciation schedulers, ECM/ACM, etc.  This means that the end user may have to jump between interfaces in certain instances.  The good news is that these features are expected to be added to Smart View in the next release and can still be utilized in the traditional client if need be.  To see a full list of these items, look to the roadmap below.

4. Infrastructure: With the change in technology come some changes to the infrastructure components, so it is important to discuss these requirements with your technical team to diagnose hardware and software needs before moving forward. It is also important to note that the upcoming release will NOT be backward compatible with the current one.  So, if you have a multiproduct implementation with integration components, you will need to upgrade the entire suite.

Product Direction / Road Map


The above picture is a snapshot of the HSF roadmap given to me by the Oracle product team.  You can see the current release (as of this blog post) on the far left, calendar year 2013’s scheduled release in the middle, and the subsequent releases on the far right.  The focus this year is to truly stabilize the Smart View integration and incorporate all of the standard features of the thick client into the Smart View interface.  The exciting part is how these changes have set up the product for even more advancement in the future.  Once that release arrives, a lot of the heavy lifting for HSF’s major changes will be complete.  This will allow the team to focus on true feature enhancements, like adding a monthly depreciation scheduler or addressing the concept of parent-level scenario modeling (possibly an idea for a future blog post!).  In addition, if you look at the future direction, in the Enterprise Readiness section, you will see items such as LRP Integration to Planning, Automated Data Loads, DRM support, etc.  This represents a future state in which a user can manage both data and metadata in a consistent manner between HSF, Planning, and Essbase.  Imagine a world where you can have one Excel worksheet open with your HSF model and another with a LIVE connection to an HSF reporting cube (via Essbase) for ad hoc purposes!  Not only that, but a world where the Essbase cube is automatically updated as you make changes to the HSF model, meaning no manual metadata management between applications or tedious mappings that need to be maintained. This type of enhancement truly empowers the user to focus on the modeling aspects of HSF while allowing Essbase to shine for management reporting – truly using the right tool for the right job.

Our Recommendation

Overall, we are very excited about the direction Oracle is heading with HSF.  The Smart View capability is really just the first step in what we see as a continued effort to make the product better for its users.  With that in mind, we have begun multiple implementations; however, it is understood in all cases that there will be some issues to work through, and there are no pending deadlines where these issues could put project success at risk.  In fact, the Go Live dates are not expected to occur until a time when we believe the next release and/or a patch set will be available.  Given that, we do recommend that all new HSF clients seriously consider starting with v11.1.2.2 while taking note of the challenges mentioned above.  Initially, this approach eliminates the need for your end users to learn two different user interfaces.  Additionally, if you are willing to give feedback, as an 11.1.2.2 client you will definitely have the ear of the Oracle team as it pertains to resolving any existing product issues, as well as requesting new enhancements for the future.  Oracle is very eager to make this release a success, and they truly value any input early adopters can provide.  So if you have the patience to work through some bumps in the road, and the time to resolve the issues you may encounter, I would definitely encourage giving the release serious consideration.

I hope you found this information helpful.  We look forward to coming out with many more installments in the future. In that vein, if you have any ideas or requests for blog topics, please feel free to leave them in the comments section or reach out to me directly, and we will make sure to address them in future posts.

About the Author

Ryan Meester is a Practice Director for the Strategic Planning Practice at Edgewater Ranzal.  His first encounter with HSF dates back to 2004 as a Consultant with Hyperion in the HSF practice.  After three years in that capacity, leading projects and assisting with business development efforts, Ryan co-founded Meridian Consulting International with two of his Hyperion colleagues, Andrew Starks and Ricardo Rasche.  At Meridian, Ryan, Andrew, and Ricardo focused exclusively on HSF implementation services until Meridian was acquired by Edgewater Ranzal in May of 2010.  This was a strategic acquisition for both Meridian and Ranzal. Both organizations were seeing more and more multiproduct implementations which required a broader EPM focus. The acquisition effectively rounded out Ranzal’s EPM service offering by adding HSF expertise to their repertoire.

Ranzal UK are presenting at UKOUG 2012

I’m pleased to announce that Ranzal UK consultants are presenting papers on both days of the UKOUG Hyperion conference this year.  The conference takes place on October 23rd & 24th at the Park Plaza hotel in central London.

On the Tuesday, my colleague Mark Drayton will be talking about a recent implementation of Hyperion Planning and HPCM in the session ‘HPCM and Hyperion Planning: A Cost Allocation Process’.  This will be especially interesting to those of you who want to see how Mark made use of batch updates to HPCM Stage 1 Drivers and Assignments.

On Wednesday, our Senior Consultant Alecs Mlynarzek will be delivering a presentation titled ‘Planning Integration with Task Lists‘, which describes techniques we have used to enable users to launch integration tasks from the Task List UI within Hyperion Planning, a feature not available out of the box.

Also on the Wednesday, Ranzal’s UK Infrastructure Lead Dave Hogg and I will be presenting various innovative techniques that we have implemented for clients in order to meet technical challenges – this session is called ‘Guide to Using EAS to Maintain Batch Environment Variables‘.

We really hope that you can come and attend one of our sessions – please come and say hello and we can tell you more about how we are growing, and the exciting projects that we are currently delivering.