A couple of weeks ago my CMO at UC4, Randy Clark, said, “Out of $10, I would spend $2 on Cloud and $8 on Big Data.” Everyone nodded in agreement. This was in relation to how much focus we would put on UC4’s ONE Automation platform and its use by the Fortune 2000 for both Cloud and Big Data orchestration.

Then, last week, I attended Amazon's first user conference for developers of Amazon Web Services, #ReInvent. It seems that Jeff Bezos and Werner Vogels were mind-melding with our CMO. While the conference was clearly about the Cloud, my #1 takeaway was that the need for Big Data analytics will drive the need for the Cloud in the Enterprise.

Don't get me wrong here; the Cloud has far-reaching architectural implications and broad business benefits, such as:

  • Elastic computing resources enable businesses to scale.
  • The need to design for failure brings greater resiliency.
  • The Cloud's global footprint gives companies immediate worldwide reach.
  • Eliminating hardware infrastructure lowers OPEX and speeds innovation.
  • The ability to immediately measure operational costs encourages developers to design for cost.

These are just some of the most important implications and well-understood drivers of Cloud adoption.

And, as most know, Amazon and competitors such as Rackspace have had great success with startups because they enable a zero-footprint start for getting new innovations up and running. So sure, these items were mentioned at the show, and for those who have been doing SaaS for some time, this is old hat; we get it. The word Cloud often draws yawns from 'those in the know'. But what startups have been doing for years is still new to many Enterprise customers.

So why is Big Data so important in changing the conversation with the Enterprise, and how will it be a key driver of Enterprise adoption of the Cloud?

Well, first off, let's start with what has been holding enterprises back from adopting the Cloud – Data! Regulatory compliance related to how data is accessed, where it is hosted, how it is secured, and how it is analyzed has limited Enterprise adoption. In Europe, for example, US regulations around Homeland Security dramatically limit Cloud adoption. Talk to any European enterprise and they will tell you that it is not acceptable to put data in a datacenter that the US has jurisdiction over. On his blog, Werner said he has been visiting with the European Commission. Clearly, behemoths like Amazon, Microsoft, and HP will find ways to solve this problem.

In the US, Amazon has already launched a government cloud to meet International Traffic in Arms Regulations (ITAR) requirements, and they now have numerous government agencies live on AWS. And now NASDAQ has partnered with Amazon to act as a Managed Service Provider for AWS offerings in financial services, launching a platform called FinQloud designed to address the regulatory compliance needs of that world.

So it seems that compliance issues around hosting data will no longer be the barrier to entry for the Enterprise.

Now that the primary objection is being addressed, what will be the compelling need? Why will Enterprises be hard-pressed to solve their Big Data challenges within the bounds of their existing datacenters?

We already know that data is growing exponentially and the Enterprise cannot ignore the need for analysis of that data to optimize user experience. Read the Wave of Big Data to learn more about this.

The Enterprise is already spending upwards of $30K a month to support its data warehouses. Imagine what exponential growth in data will mean to the Enterprise from an infrastructure standpoint. They won't be able to bring up new servers fast enough to support that kind of growth, nor will they be able to deal with the bandwidth constraints that this volume of data will create. And most importantly, they won't be able to compete with the security and resiliency qualities of service that a Cloud offering like Amazon's can provide.

In addition, imagine how expensive it will be to retrain staff on the new technologies required for this type of analysis. While the likes of eBay, Netflix, LinkedIn, Facebook, and Twitter will benefit from some of the leading development resources in this space, other click-and-mortar businesses may not be able to attract the engineering talent required to evaluate, set up, and manage next-generation Big Data solutions. And even assuming they do have the right people, those people will need playgrounds in which to quickly evaluate and experiment with these technologies.

Thus… Enterprises will turn to the Cloud.

And it is clear that things are trending in the direction of the Cloud. Werner said that in the last 12 months customers have created 2 million Hadoop clusters in the AWS Elastic MapReduce (EMR) offering. Wow! But that number is actually quite small compared to some of the numbers our customers at UC4 have shared with us. A single large online retailer went from zero jobs two years ago to 900K Hadoop jobs a month. That is just one of our Enterprise customers.
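
For a sense of how lightweight "creating a Hadoop cluster" has become, here is a rough sketch of how an EMR job flow can be launched programmatically with the Python boto library. The bucket names, scripts, and credentials are placeholders rather than anything from the talk; the point is that a single API call provisions the cluster, runs the Hadoop steps, and tears everything down again.

    # Sketch: launching a Hadoop cluster on EMR with boto (all names and paths are placeholders).
    from boto.emr.connection import EmrConnection
    from boto.emr.step import StreamingStep

    conn = EmrConnection('<aws-access-key>', '<aws-secret-key>')

    # A single Hadoop Streaming step: mapper/reducer scripts and data live in S3.
    step = StreamingStep(
        name='Aggregate clickstream',
        mapper='s3n://example-bucket/scripts/mapper.py',
        reducer='s3n://example-bucket/scripts/reducer.py',
        input='s3n://example-bucket/clickstream/2012-11-30/',
        output='s3n://example-bucket/output/2012-11-30/')

    # One call provisions the cluster, runs the step, and shuts it all down when finished.
    jobflow_id = conn.run_jobflow(
        name='Nightly clickstream run',
        log_uri='s3n://example-bucket/emr-logs/',
        steps=[step],
        num_instances=4,
        master_instance_type='m1.large',
        slave_instance_type='m1.large')
    print(jobflow_id)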

Obama is our new Big Data president, and he clearly recognizes the shift. Not only is it widely agreed that he won the 2012 election thanks in part to his campaign's use of Big Data analytics, but his administration put $200 million into technical innovations around Big Data (see the press release). $10 million of that was granted to U.C. Berkeley's AMPLab, and last week the AMPLab team demonstrated what that funding has produced with their Spark and Shark technologies. They ran a demo over 50 GB of Wikipedia data showing how Shark can deliver over 400x performance gains for in-memory queries based on Hive and Hadoop within AWS. Speaking with folks who have been playing with this new technology, it seems clear there are some kinks to iron out on larger datasets, but the intent is there and it is impressive.

Engineering-minded companies have already adopted Hadoop because it gives them the power of MapReduce on commodity hardware. The Enterprises that do not have these skills will pay big money to the likes of SAP, Oracle, and IBM. In conversations with the SAP HANA folks at the conference, I learned they are driving some of the requirements in the AWS cloud and claim they now have 1,000 customers running HANA in the Cloud.
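
To make the "power of MapReduce on commodity hardware" concrete, here is the canonical word-count pattern as a pair of Hadoop Streaming scripts in Python. This is a generic illustration rather than anything vendor-specific, and it is exactly the kind of mapper/reducer pair an EMR streaming step like the sketch above would point at.

    # mapper.py -- Hadoop Streaming mapper: emit "<word>\t1" for every token read from stdin.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print('%s\t1' % word)

    # reducer.py -- Hadoop Streaming reducer: input arrives sorted by key, so sum each run of counts.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip('\n').split('\t', 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print('%s\t%d' % (current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print('%s\t%d' % (current_word, current_count))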

The primary new investments announced last week at AWS were all centered around Big Data in the Cloud, starting with an across-the-board price reduction of roughly 25% for Amazon Simple Storage Service (S3) to lower the barrier to entry for moving data to the Cloud.

Werner Vogels announced Amazon's new petabyte-scale data warehouse, Redshift. The goal of Redshift is to democratize access to and storage of Big Data. "You can get started with a single 2TB Amazon Redshift node for $0.85/hour On-Demand and pay by the hour with no long-term commitments or upfront costs. This works out to $3,723 per terabyte per year." (The quick math behind that per-terabyte figure is sketched after the list below.) Amazon claims up to eleven nines of durability for their new data warehouse and EMR solutions. Werner also announced two new instance types:

  • A 240 GB RAM instance – clearly for in-memory databases like SAP HANA
  • A 48 TB on-disk instance – clearly to support the Redshift announcement
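
For what it's worth, the per-terabyte figure quoted above checks out with some simple back-of-the-envelope math, assuming a single 2 TB node running around the clock for a full year:

    # Sanity check on the quoted Redshift On-Demand pricing (assumes a node running 24x7).
    hourly_rate = 0.85             # $/hour for a single 2 TB node, as quoted
    node_capacity_tb = 2
    hours_per_year = 24 * 365      # 8,760 hours

    annual_node_cost = hourly_rate * hours_per_year          # $7,446 per node per year
    cost_per_tb_year = annual_node_cost / node_capacity_tb   # ~$3,723 per TB per year
    print(annual_node_cost, cost_per_tb_year)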

Finally, Amazon announced their new AWS Data Pipeline web service. I spoke to a few beta testers of this solution, and it isn't really production quality at this juncture. But the intention here is to help with the orchestration of data. We think customers will turn to UC4 for this part of their solution. :)

Congratulations, Amazon, on your first user conference; you did an excellent job. My only complaint… really, a $5 gift card in my bag? Where was my new Kindle Paperwhite? Didn't you get the memo? All user conferences require that you hand out your latest and greatest mobile device for free.