Chess as a metaphor for strategic competition is not a novel concept, yet it remains one of the most respected because of the intellectual and strategic demands it places on competitors. The number of possible chess games (estimated to exceed the number of atoms in the observable universe) means it is entirely possible that no two people have unintentionally played the same game. Of course, many of these games end in a draw, and many more set a player down the path of an inevitable loss after only a few moves. It is no surprise that chess has pushed the limits of computational analytics, which in turn has pushed the limits of players. Claude Shannon, the father of information theory, was the first to enumerate the respective advantages of the human and the computer as each attempts to wrest control of the opposing king (Shannon, 1950):
The computer is:
- Very fast at making calculations;
- Unable to make mistakes (unless the mistakes are part of the programmatic DNA);
- Diligent in fully analyzing a position or all possible moves;
- Unemotional in assessing current conditions and unencumbered by prior wins or losses.
The human, on the other hand, is:
- Flexible and able to deviate from a given pattern (or code);
- Able to reason;
- Able to learn.
The application of business analytics is the perfect convergence of this chess metaphor, powerful computations, and the people involved. Of course, the chess metaphor breaks down a bit since we have human and machine working together against competing partnerships of humans and machines (rather than human against machine).
Oracle Business Intelligence (along with implementation partners such as Edgewater Ranzal) has long provided enterprises with the ability to balance this convergence. Regardless of the robustness of the tool, the excellence of the implementation, the expertise of the users, and the responsiveness of the technical support team, there has been one persistent weakness: no organization can resolve data integration logic mistakes or incorporate new data as quickly as users request changes. As a result, the second and third computer advantages above are hindered. A computer making mistakes due to its programmatic DNA will continue to make those mistakes until corrective action can be implemented (which can take days, weeks, or months). Likewise, not all possible positions or moves can be analyzed when data elements are missing. Exacerbating the problem, all of the human advantages stated previously can be handicapped, increasingly so depending on the variability, robustness, and depth of the missing or wrongly calculated data set.
With the introduction of Visual Analyzer (VA) and Data Visualization (DV), Oracle has made enormous strides in overcoming this weakness. Users now have the ability to perform data mashups between local data and centralized repositories such as data warehouses/marts and cubes. No longer does the computer have to perform data analysis without all possible data available. No longer does the user have to make educated guesses about how centralized and localized data sets correlate and how that correlation will affect overall trends or predictions. Used properly, VA/DV lets users and enterprises iteratively refine and redefine the analytical component that contributes to their strategic goals. Of course, all new technologies and capabilities come with their own challenges.
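Conceptually, a data mashup is just a join between centralized rows and a user's local data on a shared key. The sketch below illustrates the idea in plain Python; the field names, values, and `mashup` helper are hypothetical and purely illustrative, not the VA/DV interface itself:

```python
# Illustrative sketch: enriching centralized warehouse rows with a
# user's local data set, joined on a shared key. All names/values
# are hypothetical examples, not real VA/DV constructs.

warehouse_rows = [  # extract from the centralized repository
    {"region": "East", "revenue": 120_000},
    {"region": "West", "revenue": 95_000},
]

local_rows = [  # user-supplied local data (e.g. a spreadsheet export)
    {"region": "East", "target": 110_000},
    {"region": "West", "target": 100_000},
]

def mashup(central, local, key):
    """Left-join local attributes onto central rows by a shared key."""
    local_by_key = {row[key]: row for row in local}
    combined = []
    for row in central:
        merged = dict(row)                       # keep the certified data
        merged.update(local_by_key.get(row[key], {}))  # add local columns
        combined.append(merged)
    return combined

# The combined rows now support analysis neither source allows alone,
# e.g. actual revenue versus the locally maintained target.
for row in mashup(warehouse_rows, local_rows, "region"):
    print(row["region"], row["revenue"] - row["target"])
```

The point of the sketch is the left join: the certified central rows remain the backbone, and the local data only adds columns, which keeps the mashup traceable back to the repository.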
The first challenge is how an organization can present these new views of data and compare and contrast them with the organizational “one version of the truth”. Enterprise data repositories are a popular and useful asset because they enable organizations to slice, dice, pivot, and drill down into centralized data while minimizing subjectivity. Allowing users to introduce their own data inevitably increases that subjectivity. If VA/DV is to be part of your organization’s analytics strategy, processes must be in place to validate the results of these new data models. The level of effort applied to this validation should scale with the following factors:
- The amount of manual manipulation the user performed on the data before performing the mashup with existing data models;
- The reputability of the data source. Combining data from an internal ERP or CRM system is different from downloading and aligning outside data (e.g. US Census Bureau or Google results);
- The depth and width of data. In layman’s terms, this corresponds to how many rows and columns (respectively) the data set has;
- The expertise and experience of the individual performing the data mashup.
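One concrete validation step is reconciliation: checking an aggregate from the mashed-up model against the certified figure in the central repository before the new view is shared. The sketch below shows the idea; the `reconcile` helper, the figures, and the 1% tolerance are hypothetical assumptions, not a prescribed governance rule:

```python
# Illustrative sketch of one validation step for a mashed-up model:
# reconcile an aggregate from the new model against the certified
# total in the central repository. Tolerance is a hypothetical policy.

def reconcile(mashup_total: float, certified_total: float,
              tolerance: float = 0.01) -> bool:
    """Return True when the mashup total is within `tolerance`
    (expressed as a fraction) of the certified central total."""
    if certified_total == 0:
        return mashup_total == 0
    drift = abs(mashup_total - certified_total) / abs(certified_total)
    return drift <= tolerance

# e.g. a mashup reporting 214,500 against a certified 215,000
# drifts by about 0.23%, well within a 1% tolerance.
print(reconcile(214_500, 215_000))
```

A failed reconciliation does not mean the mashup is wrong, only that it diverges enough from the certified figure to warrant applying the review factors listed above before the model is trusted.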
If you have an existing centralized data repository, you have probably already gone through data validation exercises. Reexamine and apply the data and metadata governance processes you followed when the data repository was created (and, hopefully, maintained and updated).
The next challenge is integrating the data into the data repository. Fortunately, users may have already defined the process of extracting and transforming data when they assembled the VA/DV project. Evaluating and leveraging the process the user has already defined can shorten the development cycle for enhancing existing data models and the Extract, Transform, and Load (ETL) process. The data validation factors above can also provide a rough order of magnitude of the level of effort needed to incorporate this data. The more difficult task may be determining how to prioritize data integration projects within an (often) overburdened IT department. Time, scope, and cost are familiar benchmarks when determining prioritization, but it is important to take revenue into account. Organizations that have become analytics savvy and have users demanding VA/DV data mashup capabilities have often moved beyond simple reporting and onto leveraging data to create opportunities. Are salespeople asking to incorporate external data to gain customer insight? Are product managers pulling in data from a system the organization never got around to integrating? Are functional managers manipulating and re-integrating data to cut costs and boost margins?
To round out the chess metaphor: a game that seems headed for a draw or a loss can take on new life when a pawn is promoted to replace a lost queen. Many of your competitors already have a business intelligence solution; your organization can only find data differentiation through the type of data you have and how quickly it can be incorporated at an enterprise level. Providing VA/DV to the individuals in your organization with deep knowledge of the data they need, how to get it, and how to deploy it can be the queen that checkmates the king.
Shannon, C. E. (1950). XXII. Programming a computer for playing chess. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 41(314), 256–275. doi:10.1080/14786445008521796