Visualizing Big Data

This post explores using Tableau Server to visualize data in a Hadoop cluster.

More and more businesses are finding value or savings in replumbing their databases to run on commodity hardware and open-source software like Cloudera's Distribution of Hadoop (CDH). With tools like Impala, real-time queries of massive datasets become possible, and getting the most insight out of them calls for compelling, interactive data exploration and visualization. We wanted to see how Tableau works in this regard, and found that pointing Tableau Server at data in CDH through Impala was remarkably easy. The combination provides data exploration at the speed of thought with beautiful, intuitive visualizations, resulting in a quick front-end for big data.

To demonstrate this, I took the classic AdventureWorks Bike Store data that Microsoft uses to demo a data warehouse in SQL Server (also used in our Endeca Information Discovery demo) and loaded it into our CDH cluster for use with Impala. I downloaded Tableau Desktop, as well as the Cloudera ODBC Driver for Impala, and spun up a VM running Windows Server to host Tableau Server. After configuring the Impala driver to point at our cluster, I launched Tableau and added a new data source. Tableau makes it pretty simple to choose from a menu of data sources, from a simple CSV to a massive CDH cluster. After selecting the Cloudera Hadoop option, I entered our cluster's DNS name along with the port and credentials for Impala, selected my new Bike Store database and table, and was ready to whip up some visualizations.

Tableau provides tools for ETL, including a pretty nice GUI for simple joins, but since I was trying to denormalize a star schema I did the transformations in impala-shell, where I had more control and the operations were more visible. Doing the ETL cluster-side also helps these visualizations run at the speed of thought, even when working with big data at scale.

Tableau makes it really easy to create standard charts, and even more complicated visualizations like maps can be rendered automagically thanks to geographic attribute detection (I found the date detection to be less dependable). You can add filters to slice and dice on different attributes, combine several worksheets into dashboards, or create “Stories” to walk a viewer through a series of visualizations.

Overall, I found some aspects of Tableau a little bit like using Apple products: you get great design and intuitive functionality at the cost of robust configurability. It’s a fantastic tool to get pretty visualizations up and running quickly and intuitively, to add data sources on the fly with little complexity, and to quickly share the results. But in spite of those strengths, Tableau isn’t a replacement for enterprise-scale BI offerings.

Interact with the visualization we built and embedded here: Store Dashboard

Big Data Discovery – Custom Java Transformations Part 2

In a previous post, we walked through how to implement a custom Java transformation in Oracle Big Data Discovery.  While that post was more technical in nature, this follow-up post will highlight a detailed use case for the transformations and illustrate how they can be used to augment an existing dataset.

Example: 2015 Chicago Mayoral Election Runoff

For this example we will be using data from the 2015 Mayoral Election Runoff in Chicago.  Incumbent Rahm Emanuel defeated challenger “Chuy” Garcia with 55.7% of the popular vote.  Results data from the election were compiled and matched up with Chicago communities, which were then subdivided by zip code.  A small sample of the data can be seen below:

Sample election data

In its original state, the data already offers some insight into the results of the election, but only at a high level.  By utilizing the custom transformations, it is possible to bring in additional data and find answers to more detailed questions.  For example, what impact did the wealth of Chicago’s communities have on their selection of a candidate?

One indicator of wealth within a community is the median sale price of homes in that area.  Naturally, the higher the price of homes, the wealthier the community tends to be.  Zillow provides an API that allows users to query for a variety of real estate and demographic information, including median sale price of homes.  Through the custom transformations, we can augment the existing election results with the data available through the API.

The structure of the custom transformation is exactly the same as the ‘Hello World’ example from our previous post.  The transformation is initiated in the BDD Custom Transform Editor with the command runExternalPlugin('ZillowAPI.groovy',zip). In this case, the custom groovy script is called ZillowAPI.groovy and the field being passed to the script is the zip code, zip.

The script then uses the zip to construct a string and convert it to the URL required to make the API call:

def zip = args[0]
String url = "<ZILLOW_API_KEY>&zip=" + zip;
URL api_url = new URL(url);
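
The snippet above stops at constructing the URL (the Zillow endpoint itself is omitted). As a rough sketch only, and not the exact script used here, the rest of ZillowAPI.groovy might fetch and parse the response along these lines; the endpoint path and XML element name below are assumptions:

def pluginExec(Object[] args) {
    def zip = args[0]
    // Hypothetical endpoint: substitute the real Zillow web service URL and API key
    String url = "https://www.zillow.com/webservice/<SERVICE>.htm?zws-id=<ZILLOW_API_KEY>&zip=" + zip
    URL api_url = new URL(url)

    try {
        // The Zillow web services return XML; parse it and pull out the median sale price amount
        def response = new XmlSlurper().parseText(api_url.text)
        // '**' walks the document depth-first; the 'amount' element name is an assumption
        return response.'**'.find { it.name() == 'amount' }?.text()
    } catch (Exception e) {
        // Return null so one failed lookup doesn't derail the rest of the records
        return null
    }
}

Wrapping the call in a try/catch matters here because pluginExec() runs once per record, so a single bad zip code or a timed-out request would otherwise fail the whole transform.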

Once the transform script completes, the median_sale_price field is now accessible in BDD:

Updated data in BDD

Now that the additional data is available, we can use it to build some visualizations to help answer the question posed earlier.

Median Sale Price by Chicago Community – Created using the Branchbird Data Visualization Portlet*

Percentage for Chuy by Community – Created using the Branchbird Data Visualization Portlet*

The two choropleths above show the median sale price by community and the percentage of votes for “Chuy” by community.  Communities in the northeastern sections of the city tend to have the highest median sale prices, while communities in the western and southern sections tend to have lower prices.  For median sale price to be a strong indicator of how the communities voted, the map displaying votes for “Chuy” should show a similar pattern, with the communities grouped into northeast and southwest.  However, the pattern is noticeably different, with votes for “Chuy” distributed across all sections of the map.

Bar-Line chart of Median Sale Price and Percent for Chuy

Looking at the median sale price in conjunction with the percentage of votes for “Chuy” provides an even clearer picture.  The bars in the chart above represent the median sale price of homes, sorted in descending order from left to right.  The line graph represents the percentage of votes for “Chuy” in each community.  If there were a connection between median sale price and the percentage of votes for “Chuy”, we'd expect the line graph to consistently increase or decrease as sale price decreases.  However, the percentage of votes varies widely from community to community and doesn't seem to follow an obvious pattern in relation to median sale price.  This corresponds with the observations from the two choropleths above.

While these findings don’t provide a definitive answer to the initial question as to whether community wealth was a factor in the election results, they do suggest that median sale price is not a good indicator of how Chicago communities voted in the election.  More importantly, this example illustrates how easy it is to utilize custom Java transformations in BDD to answer detailed questions and get more out of your original dataset.

If you would like to learn more about Oracle Big Data Discovery and how it can help your organization, please contact us at info [at] or share your questions and comments with us below.

* – The Branchbird Data Visualization Portlet is a custom portlet developed by Branchbird and is not available out of the box in BDD.  If you would like more information on the portlet and its capabilities, please contact us and stay tuned for a future blog post that will cover the portlet in more detail.

Undocumented Data Export Feature in Oracle Hyperion PBCS (Planning and Budgeting Cloud Service)

In response to companies looking for more decentralized services with less IT overhead, Oracle has launched the Planning and Budgeting Cloud Service (PBCS). PBCS is a hosted version of the Oracle Hyperion Planning and Data Management/Integration (FDMEE) tools, with a particular focus on a completely online interface. Additional information on PBCS is available from Oracle.

From a functional perspective, this is an ideal situation: to have near-full capabilities of an on-premise solution without the infrastructure maintenance concerns. Practically, though, there are some holes to fill as Oracle perfects and grows the solution.

One of the main areas of concern has been the integration of data into and out of PBCS. Data Management (a version of FDMEE) is the recommended tool for loading flat file data into the system, and it is also possible to load directly to Essbase if the files are perfectly formatted. Getting files out of the system, on the other hand, has not been so straightforward. Without access to the Essbase server, exporting files proves impractical. Companies often need data exports from Essbase for backups, integrations into other systems, or for review, yet PBCS does not seem to have a native way to extract Level Zero (Lv0) data on a regular basis so that it can easily be copied out of the system and used elsewhere.

Despite this, the DATAEXPORT command still exists in the PBCS world. How, then, could it be used to get a needed file?

It begins just as it would with a normal on-premise application: create a Business Rule that performs a data export. This can be done manually, but it is recommended to use the System Template to make sure everything is set up correctly.


When setting up the location to export the file to, it should be configured as:



When this is done, a user can then navigate over to the Inbox/Outbox Explorer and see the file in there:



And that is really all there is to it! With a business rule in place, the entire process can be automated using EPMAutomate (EPMAutomate and recommendations for an automation engine/methodology will be discussed in a later post) and a batch scripting client to build a process that does the following (a sketch follows the list):

  • Deletes the old file
  • Runs the business rule to perform the data export
  • Copies the file off of PBCS to a local location
  • Pushes the file to any other needed location
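
As a sketch of that flow, and keeping with the Groovy used elsewhere on this blog rather than a Windows batch file, the steps above could be scripted by shelling out to EPM Automate. The rule name, file name, URL, and credentials below are placeholders, and a plain batch or shell script works just as well:

// Minimal sketch: orchestrate EPM Automate from Groovy. Assumes epmautomate is on the PATH
// and that a business rule named ExportLv0Data (hypothetical) writes Lv0_Export.txt to the Outbox.
def run(String cmd) {
    def proc = cmd.execute()
    def output = proc.text   // capture the console output from EPM Automate
    proc.waitFor()
    println output
    if (proc.exitValue() != 0) {
        throw new RuntimeException("Command failed: ${cmd}")
    }
}

run('epmautomate login serviceAdmin <password> https://planning-<domain>.pbcs.oraclecloud.com <identitydomain>')
run('epmautomate deletefile Lv0_Export.txt')      // delete the previous export (add handling if the file may not exist)
run('epmautomate runbusinessrule ExportLv0Data')  // run the business rule that performs the DATAEXPORT
run('epmautomate downloadfile Lv0_Export.txt')    // copy the file off of PBCS to the local machine
run('epmautomate logout')
// From here, push the downloaded file to any other system that needs it.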

One important thing to note: as of the PBCS April 2015 patch, all files in the Inbox/Outbox Explorer, along with any files in Application Management (LCM), that are older than two months will be automatically deleted. If these files are being kept for archive purposes, they must be backed up offline in order to be preserved.

Big Data Discovery – Custom Java Transformations Part 1

In our first post introducing Oracle Big Data Discovery, we highlighted the data transform capabilities of BDD.  The transform editor provides a variety of built-in functions for transforming datasets.  While these built-in functions are straightforward to use and don't require any additional configuration, they are also limited to a predefined set of transformations.  Fortunately, for those looking for additional functionality during transform, it is possible to introduce custom transformations that leverage external Java libraries by implementing a custom Groovy script.  The rest of this post will walk through the implementation of a basic example, and a subsequent post will go in depth with a few real world use cases.

Create a Groovy script

The core component needed to implement a custom transform with external libraries is a Groovy script that defines the pluginExec() method.  Groovy is a programming language developed for the Java platform; details and documentation on the language can be found on the Groovy project site.  For this basic example, we'll begin by creating a file called CustomTransform.groovy and defining a method, pluginExec(), which should take an object array, args, as an argument:

def pluginExec(Object[] args) {
    String input = args[0] //args[0] is the input field from the BDD Transform Editor

    //Implement code to transform input in some way
    //The return of this method will be inserted into the transform field

    input.toUpperCase() //This example would return an upper cased version of input
}

pluginExec() will be applied to each record in the BDD dataset, and args[0] corresponds to the field to be transformed.  In the example script above, args[0] is assigned to the variable input and the toUpperCase() method is called on it.  This means that if this custom transformation is applied to a field called name, the value of name for each record will be returned uppercased (for example, “johnathon” => “JOHNATHON”).

Import Custom Java Library

Now that we’ve covered the basics of how the custom Groovy script works, we can augment the script with external Java libraries.  These libraries can be imported and implemented just as they would be in standard Java:

// The package below is a placeholder for wherever the HelloWorld class actually lives
import com.example.HelloWorld

def pluginExec(Object[] args) {
    String input = args[0] //Note that though the input variable is defined in this example, it is not used.  Defining input is not required.
    HelloWorld hw = new HelloWorld() //Create a new instance of the HelloWorld class defined in the imported library
    hw.testMe() //Call the testMe() method, which returns the string "Hello World"
}

In the example above, the HelloWorld class is imported.  A new instance of HelloWorld is assigned to the variable hw, and the testMe() method is called.  testMe() is designed to simply return the string “Hello World”.  Therefore, the expected output of this custom script is that the string “Hello World” will be inserted for each record in the transformed BDD dataset.  Now that the script has been created, it needs to be packaged up and added to the Spark class path so that it’s accessible during data processing.

Package the Groovy script into a jar

In order to utilize CustomTransform.groovy, it needs to be packaged into a .jar file.  It is important that the Groovy script be located at the root of the jar, so make sure that the file is not nested within any directories.  See below for an example of the file structure:


Note that additional files can be included in the jar as well.  These additional files can be referenced in CustomTransform.groovy if desired.  There are multiple ways to pack up the file(s), but the simplest is to use the command line.  Navigate to the directory that contains CustomTransform.groovy and use the following command to package it up:

# jar cf <new_jar_name> <input_file_for_jar>
> jar cf CustomTransform.jar CustomTransform.groovy

Set up a custom lib location in Hadoop

CustomTransform.jar and any additional Java libraries imported by the Groovy script need to be added to all Spark nodes in your Hadoop cluster.  For simplicity, it is helpful to establish a standard location for any custom libraries that you want Spark to be able to access:

$ mkdir /opt/bdd/edp/lib/custom_lib

The /opt/bdd/edp/lib directory is the default location for the BDD data processing libraries used by Spark.  In this case, we've created a subdirectory, custom_lib, that will hold any additional libraries we want Spark to be able to use.

Once the directory has been created, use scp, WinSCP, MobaXterm, or some other utility to upload CustomTransform.jar and any additional libraries used by the Groovy script into the custom_lib directory.  The directory needs to be created on all Spark nodes, and the libraries need to be uploaded to all nodes as well.

Update the Spark properties file on the BDD Server

The last step to complete before running the custom transformation is updating the Spark properties file used by BDD's data processing component.  This only needs to be done the first time you create a custom transformation, as long as the location of the custom_lib directory stays the same for each subsequent script.

Navigate to /localdisk/Oracle/Middleware/BDD<version>/dataprocessing/edp_cli/config on the BDD server and open the Spark properties file for editing:

$ cd /localdisk/Oracle/Middleware/BDD1.0/dataprocessing/edp_cli/config
$ vim

The file should look something like this:

# Spark additional runtime properties, see
# for examples

Add an entry to the file to define the spark.executor.extraClassPath property.  The value for this property should be <path_to_custom_lib_directory>/*.  This will add everything in the custom_lib directory to the Spark class path.  The updated file should look like this:

# Spark additional runtime properties, see
# for examples
spark.executor.extraClassPath=/opt/bdd/edp/lib/custom_lib/*

It is important to note that if the spark.executor.extraClassPath property already has an entry in the properties file, any libraries referenced by that entry should be moved to the custom_lib directory so they are still included in the Spark class path.

Run the custom transform

Now that the script has been created and added to the Spark class path, everything is in place to run the custom transform in BDD.  To try it out, open the Transform tab in BDD and click on the Show Transformation Editor button.  In this example, we are going to create a new field called custom with the type String:

Create new attribute

Now in the editor window, we need to reference the custom script:

Transform editor

The runExternalPlugin() method is used to reference the custom script.  The first argument is the name of the Groovy script.  Note that the value above is 'CustomTransform.groovy' and not 'CustomTransform.jar'.  The second argument is the field to be passed as an input to the script (this is what gets assigned to args[0] in pluginExec()).  In the case of the “Hello World” example, the input isn’t used, so it doesn’t matter what field is passed here.  However, in the first example that returned an upper cased version of the input field, the script above would return an upper cased version of the key field.
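
For reference, pointing the same mechanism at the earlier upper-casing version of the script would look something like this in the Transform Editor, where name stands in for whichever field you want transformed:

// Each record's name value becomes args[0] in pluginExec(); the return value populates the new attribute
runExternalPlugin('CustomTransform.groovy', name)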

One of the nice features of the built-in transform functions is that they make it possible to preview the transform changes before committing.  With these custom scripts, however, it isn’t possible to see the results of the transform before running the commit.  Clicking preview will just return blank results for all fields, as seen in the example below:

Example of custom transform preview

The last thing to do is click ‘Add to Script’ and then ‘Commit to Project’ to kick off the transformation process.  Below are the results of the transform.  As expected, a new custom field has been added to the data set and the value “Hello World” has been inserted for every record.

Transform results

This tutorial just hints at the possibilities of utilizing custom transformations with Groovy and external Java libraries in BDD.  Stay tuned for the second post on this subject, when we will go into detail with some real world use cases.

If you would like to learn more about Oracle Big Data Discovery and how it can help your organization, please contact us at info [at] or share your questions and comments with us below.

Bringing Data Discovery to Hadoop – Part 3

In our last post, we talked about some of the tools in the Hadoop ecosystem that Oracle Big Data Discovery takes advantage of to do its work — namely Hive and Spark. In this post, we’re going to delve a little deeper into how BDD integrates with data that is already sitting in Hive, how it can write transformed data back to HDFS, and how it can help give users new insights on that data.

Data Processing

BDD ships with a data processing tool that makes imports from Hive easy. Simply point it at the database and table(s) you would like to pull into BDD and the application does the rest. Behind the scenes, the data processing utility launches Spark workers to read in the data from HDFS for the targeted table into new Avro files. BDD then indexes the data in these files for easy discovery.


Spark at work.


Another feature of BDD's data processing is that it can be set to auto-detect new tables created in a Hive database, keeping BDD in sync with Hive. The BDD Hive Table Detector automatically launches a workflow to import a table whenever one is created. Currently, BDD doesn't support updates to existing tables, but we hope to see that feature in a future release.

One thing to note: depending on the size of the table, BDD may import only a sampling of its data for discovery purposes. By default, the application's record threshold is set to one million. This keeps analysis of a particular collection as interactive as possible while maintaining a relatively dependable and accurate view. For most purposes, this default setting should be enough, but the threshold can be increased if necessary. Ultimately, the right amount of sampling is a balance between your needs and the computing resources available.

Exporting Back to Hive

A unique component of BDD is its ability to throw data back to Hadoop once you have it in a state that you are satisfied with or would like to share with other users. We have some campaign funding data to work with as a test case:

The Chicago mayor’s race has been getting some attention due to an unexpected underdog forcing incumbent Rahm Emanuel to a runoff. As you can see, the challenger, Chuy Garcia, is wildly out-funded compared to Emanuel:


Creating this application involved pulling Illinois campaign spending data, importing it into BDD, and then joining a couple of tables together and cleaning it all up using the transform tools contained within the application.

Now let’s say we wanted to export the results of this work — these joined, transformed data sets — for other users to query for themselves in Hive. We can do that with a simple, built-in export feature that can write our denormalized data set back to HDFS.


With a few quick clicks, BDD can create Avro-formatted files, write them to our Hadoop cluster, and then create the corresponding Hive table automatically:


This particular feature adds a lot of flexibility and opportunity for collaboration in teams where members span a wide range of skills. You can imagine users on the business side and technical side of a company throwing data sets back and forth between each other, sharing insights in a natural way that might have been much more difficult to accomplish in other environments.

That concludes our three-part look at Oracle Big Data Discovery. As we’ve said before, there is a lot to be excited about and we believe the application offers a viable data discovery solution to organizations running data in Hadoop, as well as those who are interested in creating first-time clusters.

For more information or guidance on how BDD could help your organization, contact us at info [at]

Bringing Data Discovery To Hadoop – Part 2

The most exciting thing about Oracle Big Data Discovery is its integration with all the latest tools in the Hadoop ecosystem. This includes Spark, which is rapidly supplanting MapReduce as the processing paradigm of choice on distributed architectures. BDD also makes clever use of the tried and tested Hive as a metadata layer, meaning it has a stable foundation on which to build its complex data processing operations.

In our first post of this series, we showcased some of BDD’s most handy features, from its streamlined UI to its very flexible data transformation abilities. In this post, we’ll delve a little deeper into BDD’s underlying mechanics and explain why we think the application might be a great solution for Hadoop users.


Much of the backbone for BDD's data processing operations lies in Hive, which effectively acts as a robust metastore for BDD. While operations on the data itself are not performed using Hive functions (which currently run on MapReduce), Hive is a great way to store and retrieve information about the data: where it lives, what it looks like, and how it's formatted.

For organizations that are already running data in Hive, the integration with BDD couldn’t be simpler. The application ships with a data processing tool that can automatically import databases and tables from Hive, all while keeping data types intact. The tool can also sync up with a Hive database so that when new tables are created a user can automatically work with that data in BDD. If a table is dropped, BDD deletes that particular data set from its index. Currently, the 1.0 version doesn’t support updates to existing Hive tables, but we hope to see that feature in an upcoming release.

BDD can also upload data to HDFS and create a new table with that data in Hive to work with. It does this whenever a user uploads a file through the UI. For example, here’s what we saw in Hive with the consumer complaints data set from the last post after BDD imported it:

Example of an auto-generated Hive table by BDD

This easy integration with Hive makes BDD a good option for both experienced Hadoop users who are using Hive already, as well as less technical users.


While Hive provides a solid foundation for BDD’s operations, Spark is the workhorse. All data processing operations are run through Spark, which allows BDD to analyze and transform data in-memory. This approach effectively sidesteps the launching of slower MapReduce jobs through Hive and gives the processing engine direct access to the data.

When a user commits a series of transforms to a data set via the BDD UI, those transforms are interpreted into a Groovy script that is then passed to Spark through an Oozie job. Here, for example, date strings are converted to datetime objects behind the scenes:
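
As a rough illustration only, and not the exact script BDD generates, that kind of conversion in plain Groovy might look like the sketch below; the field name and date format are assumptions:

import java.text.SimpleDateFormat

// Parse a date string such as "02/02/2015" into a proper datetime object,
// returning null when the source value is empty so bad records don't break the job
def toDateTime(String dateReceived) {
    if (!dateReceived) {
        return null
    }
    new SimpleDateFormat("MM/dd/yyyy").parse(dateReceived)
}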


After Spark has done its handiwork, the data is then written out to HDFS as a new set of files, serialized and compressed in Avro. The original collection stays intact in another location in case we want to go back to it in the future.

From this point, the data is then loaded into the Dgraph.


The Dgraph is basically an in-memory index, and is what enables the real-time, dynamic exploration of data in BDD. This concept might be familiar to those who have used Oracle Endeca Information Discovery, where the Dgraph also played a key role, and this lineage means BDD inherits some very nice features: quick response, keyword search, impromptu querying, and the ability to unify metrics, structured and unstructured data in a single interface. The biggest difference now is that users have the ability to apply these real-time search and analytic capabilities to data sitting on Hadoop.

We think the marriage of this kind of discovery application with Hadoop makes a lot of sense. For starters, Hadoop has enabled organizations to store vast amounts of data cheaply without necessarily knowing everything about its structure and contents. BDD, meanwhile, offers a solution to indexing exactly this kind of data — data that is irregular, inconsistent and varied.

There’s also the issue of access. Currently, most data tools in the Hadoop ecosystem require a moderate level of technical knowledge, meaning wide swaths of an organization might have little to no view of all that data on HDFS. BDD offers a system to connect more people to that data, in a way that’s straightforward and intuitive.

If you would like to learn more about Oracle Big Data Discovery and how it might help your organization, please contact us at info [at]

Bringing Data Discovery To Hadoop – Part 1

Since Branchbird’s inception, we have been anticipating the intersection of big data with data discovery. What exactly that will look like in the coming years is still up for debate, but we think Oracle’s new Big Data Discovery application provides a window into what true discovery on Hadoop might entail.

We’re excited about BDD because it wraps data analysis, transformation, and discovery tools together into a single user interface, all while leveraging the distributed computing horsepower of Hadoop.

BDD’s roots clearly extend from Oracle Endeca Information Discovery, and some of the best aspects of that application — ad-hoc analysis, fast response times, and instructive visualizations — have made it into this new product. But while BDD has inherited a few of OEID’s underpinnings, it’s also a complete overhaul in many ways. OEID users would be hard-pressed to find more than a handful of similarities between Endeca and this new offering. Hence, the completely new name.

The biggest difference of course, is that BDD is designed to run on the hottest data platform in use today: Hadoop. It is also cutting edge in that it utilizes the blazingly fast Apache Spark engine to perform all of its data processing. The result is a very flexible tool that allows users to easily upload new data into their Hadoop cluster or, conversely, pull existing data from their cluster onto BDD for exploration and discovery. It also includes a robust set of functions that allows users to test and perform transformations on their data on the fly in order to get it into the best possible working state.

In this post, we’ll explore a scenario where we take a basic spreadsheet and upload it to BDD for discovery. In another post, we’ll take a look at how BDD takes advantage of Hadoop’s distributed architecture and parallel processing power. Later on, we’ll see how BDD works with an existing data set in Hive.

We installed our instance of BDD on Cloudera’s latest distribution of Hadoop, CDH 5.3. From our perspective, this is a stable platform for BDD to operate on. Cloudera customers also should have a pretty easy time setting up BDD on their existing clusters.


Getting started with BDD is relatively simple. After uploading a new spreadsheet, BDD automatically writes the data to HDFS, then indexes and profiles the data based on some clever intuition:

What you see above displays just a little bit of the magic that BDD has to offer. This data comes from the Consumer Financial Protection Bureau, and details four years’ worth of consumer complaints to financial services firms. We uploaded the CSV file to BDD in exactly the condition we received it from the bureau’s website. After specifying a few simple attributes like the quote character and whether the file contained headers, we pressed “Done” and the application got to work processing the file. BDD then built the charts and graphs displayed above automatically to give us a broad overview of what the spreadsheet contained.

As you can see, BDD does a good job presenting the data to us in broad strokes. Some of the findings we get right from the start are the names of the companies that have the most complaints and the kinds of products consumers are complaining about.

We can also explore any of these fields in more detail if we want to do so:


Now we get an even more detailed view of this date field, and can see how many unique values there are, or if there are any records that have data missing. It also gives us the range of dates in the data. This feature is incredibly helpful for data profiling, but we can go even deeper with refinements.


With just a few clicks on a couple charts, we have now refined our view of the data to a specific company, JPMorgan Chase, and a type of response, “Closed with monetary relief”. Remember, we have yet to clean or manipulate the data ourselves, but already we’ve been able to dissect it in a way that would be difficult to do with a spreadsheet alone. Users of OEID and other discovery applications will probably see a lot of familiar actions here in the way we are drilling down into the records to get a unique view of the data, but users who are unfamiliar with these kinds of tools should find the interface to be easy and intuitive as well.


Another way BDD differentiates itself from some other discovery applications is with the actions available under the “Transform” tab.

Within this section of the application, users have a wealth of common transformation options available to them with just a few clicks. Operations like converting data types, concatenating fields, and getting absolute values now can be done on the fly, with a preview of the results available in near real-time.

BDD also offers more complex transformation functions in its Transformation Editor, which includes features like date parsing, geocoding, HTML formatting and sentiment analysis. All of these are built into the application; no plug-ins required. Another nice feature BDD provides is an easy way to group (or bin) attributes by value. For example, we can find all the car-related financing companies and group them into a single category to refine by later on:


Another nice feature of BDD is the ability to preview the results of a transform before committing the changes to all the data. This allows a user to fine-tune their transforms with relative ease and minimal back and forth between data revisions.

Once we’re happy with our results, we can commit the transforms to the data, at which point BDD launches a Spark job behind the scenes to apply the changes. From this point, we can design a discovery interface that puts our enriched data set to work.


Included with BDD are a set of dynamic, advanced data visualizations that can turn any data set into something profoundly more intuitive and usable:


The image above is just a sampling of the kind of visual tools BDD has to offer. These charts were built in a matter of minutes, and because much of the ETL process is baked into the application, it’s easy to go back and modify your data as needed while you design the graphical elements. This style of workflow is drastically different from workflows of the past, which required the back- and front-ends to be constructed in entirely separate stages, usually in totally different applications. This puts a lot of power into the hands of users across the business, whether they have technical chops or not.

And as we mentioned earlier, since BDD’s indexing framework is a close relative to Endeca, it inherits all the same real-time processing and unstructured search capabilities. In other words, digging into your data is simple and highly responsive:


As more and more companies and institutions begin to re-platform their data onto Hadoop, there will be a growing need to effectively explore all of that distributed data. We believe that Oracle’s Big Data Discovery offers a wide range of tools to meet that need, and could be a great discovery solution for organizations that are struggling to make sense of the vast stores of information they have sitting on Hadoop.

If you would like to learn more, please contact us at info [at]

Also be sure to stay tuned for Part 2!