Plastic brain learning/un-learning

Archive for the ‘Business models’ Category

What color is your Cloud fabric?

In Business models, Cloud Computing, Use Cases on January 15, 2009 at 9:30 pm

Cloud Fabric is a phrase that evokes a myriad of complex capabilities and unique features, depending on the vendor you talk to. Here, I take the wabi-sabi approach of simplifying things down to essentials, and then try to make sense of all the different variations from vendors within a simple, high-level framework or mental model.

Before we go into details, a caveat: the classification of Cloud fabric types discussed in this framework is provisional, in the sense that technological improvements (e.g. Intel already has a research project with 80 cores on a single chip) will influence this space in significant ways.

Back to the topic: loosely defined, Cloud fabric is the abstraction layer that supports the Cloud platform. It should cover all aspects of the life-cycle and the (automated) operational model for Cloud computing. While there are many different flavors of Cloud fabric from vendors, it is important to keep in mind the general context that drives features into the Cloud fabric: the “Cloud fabric” must support, as transparently as possible, the binding of an increasing variety of Workloads to (varying degrees of) Dynamic infrastructure. Of course, vendors may specialize in fabric support for specific types of Workloads, with limited or no support for Dynamic infrastructure. By Dynamic infrastructure, I mean that it is not only “elastic”, but also “adaptive” to your deployment optimization needs.

That means your compute, storage and even networking capacity is sufficiently “virtualized” and “abstracted” that elasticity and adaptiveness directly address application workload needs, as transparently as possible (for example, is the PaaS or IaaS provider on converged data center networks, or do they still run I/O and storage networks in parallel universes?). If the VMs running your application components can migrate, then storage and interconnection topologies (firewall, proxy and load balancer rules, etc.) should migrate seamlessly along with them, and from a customer perspective, “transparently”.
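
To make that concrete, here is a minimal sketch of what a single migration step could look like if the fabric really treated compute, storage and network rules as one unit. The fabric object and every method on it (live_migrate, replicate, reapply, etc.) are hypothetical names for this illustration, not any vendor’s actual API.

```python
def migrate_workload(vm, source_host, target_host, fabric):
    """Sketch: move a VM and everything bound to it, so the move is transparent.

    `fabric` is a hypothetical handle exposing compute, storage and network
    controls; none of these calls correspond to a real product API.
    """
    volume = fabric.storage.volume_for(vm)           # storage backing the VM
    rules = fabric.network.rules_for(vm)              # firewall/proxy/load-balancer entries

    fabric.storage.replicate(volume, target_host)     # pre-copy the data near the target
    fabric.compute.live_migrate(vm, target_host)      # move the running VM itself
    fabric.network.reapply(rules, target_host)        # re-point LB/firewall rules at the new host
    fabric.storage.retire(volume, source_host)        # clean up the old copy
```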

Somewhere all of this virtualization meets the “un-virtualized” world! 

Whether you are a PaaS/IaaS, “Cloud-center” or “Virtual private Data center” provider, etc., variations in Cloud fabric capabilities stem from the degree of support across these two dimensions:

  • Breadth of Workload categories that can be supported. E.g. AWS and the Google App Engine platform are geared towards workloads that fit the shared-nothing, distributed computing model based on horizontal scalability using OSS, commodity servers and (mostly) generic switched network infrastructure. Keep in mind that Workload characterization is an active research area, so I won’t pretend to boil this ocean (I am not an expert by any stretch of imagination); I’ll just represent the broad areas that seem to be driving the Cloud business models.
  • How “Dynamic” the Cloud infrastructure layer is in supporting the Workload categories above. Hardware capacity, configuration and the general end-to-end topology of the deployment infrastructure matter when it comes to performance, and no single Cloud fabric can accommodate the performance needs of every type of Workload in a transparent manner. E.g. I wouldn’t try to dynamically provision VMs for my online gaming service on AWS unless I could also control routing tables and the multicast traffic strategy. Generally, in a distributed computing architecture, you want to push the processing of state information close to where your data is; in the online gaming use case, that could mean your multicast packets need to reach only the limited set of nodes that are in the same “online neighborhood” as the players (see the placement sketch below). Imagine provisioning VMs and storage dynamically for such an application across your data center without any thought to deployment optimization: without an adaptive infrastructure under the covers, an application that has worked perfectly in the past might experience serious performance issues in the Cloud environment. Today, the PaaS programming and deployment model determines the Workload match.
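
As a toy illustration of that “push state processing close to the data” point, here is a tiny, purely hypothetical placement policy in Python: it keeps each online “neighborhood” on one rack so state updates (multicast or otherwise) only have to reach co-located nodes. The data shapes, rack layout and round-robin choice are all invented for the example.

```python
from collections import defaultdict

def place_sessions(players, hosts_by_rack):
    """Toy policy: co-locate each online 'neighborhood' on a single rack."""
    neighborhoods = defaultdict(list)
    for player in players:
        neighborhoods[player["neighborhood"]].append(player)

    placement = {}
    racks = list(hosts_by_rack)
    for i, (hood, members) in enumerate(neighborhoods.items()):
        rack = racks[i % len(racks)]                  # naive round-robin over racks
        hosts = hosts_by_rack[rack]
        for j, player in enumerate(members):
            placement[player["id"]] = hosts[j % len(hosts)]
    return placement

if __name__ == "__main__":
    players = [{"id": "p1", "neighborhood": "castle"},
               {"id": "p2", "neighborhood": "castle"},
               {"id": "p3", "neighborhood": "forest"}]
    hosts_by_rack = {"rack-a": ["host-1", "host-2"], "rack-b": ["host-3"]}
    print(place_sessions(players, hosts_by_rack))
    # {'p1': 'host-1', 'p2': 'host-2', 'p3': 'host-3'}
```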


So, in general, a Cloud fabric should support transparent scaling, metering/billing, “secure”, reliable and automated management, fault tolerance, deployment templates, good programming/deployment support, “adaptive” clustering and so on; but the specific Cloud fabric features and Use Case support you need depend on where your application sits at the intersection of Workload type and the need for Dynamic/adaptive infrastructure. Here’s what I mean…


[Figure: Example of functional capabilities required of Cloud fabric, based on Workload type and Infrastructure/deployment needs]

Let me explain the above table: the Y-axis represents a continuum of the deployment technologies and infrastructure used, from support for “Task level parallelism” (typically thread-parallel computation, where each core might be running different code) at the bottom to “Data parallelism” (typified by multiple CPUs or cores running the same code against different data) at the top. The X-axis broadly represents the two major Workload categories:

  1. High Performance workloads: where you want to speed up the processing of a single task on a parallel architecture, where you can exploit Data parallelism (e.g. rendering, astronomical simulations, etc.)
  2. High Throughput workloads: where you want to maximize the number of independent tasks per unit time on a distributed architecture (e.g. web scale applications); a small sketch contrasting the two follows.
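
The snippet below is just a toy contrast between the two columns, using Python’s standard library: the same kernel split across pieces of one big job (data parallel, high performance) versus many unrelated requests pushed through per unit time (high throughput). The “render” and “request” workloads are stand-ins I made up for the illustration.

```python
from concurrent.futures import ProcessPoolExecutor

def render_tile(tile):
    # Data parallelism: the SAME code runs over different slices of one big job.
    return sum(tile)            # stand-in for a rendering/simulation kernel

def handle_request(request_id):
    # High throughput: many INDEPENDENT tasks; the goal is tasks per unit time.
    return f"response for {request_id}"

if __name__ == "__main__":
    tiles = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
    with ProcessPoolExecutor() as pool:
        frame = list(pool.map(render_tile, tiles))               # one job, split across workers
        responses = list(pool.map(handle_request, range(100)))   # many unrelated tasks
    print(len(frame), len(responses))
```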

Where your application(s) fall into this matrix determines what type of support you need out of your Cloud fabric. There’s room for lots of niche players, each exposing the advantages of their particular choice of deployment infrastructure (how dynamic is it?) and of PaaS features that are friendly to your Workload.

The above diagram shows some examples. Most of what we call Cloud computing falls into the lower right quadrant, as many vendors are proving the viability of this business model for this type of Workload (market). Features are still evolving.

Of course, Utility computing (top right) and “mainframe” class, massively parallel computing (top left) capacity have been available for hire for many, many years. What’s new here is how this capacity can be accessed and used effectively over the internet, along with all that it entails: a simple programming/deployment model, manageability, and a friendly Cloud fabric that helps you with all of this in the variable cost model that is the promise of Cloud computing. Vendors will no doubt leverage their “Grid” management frameworks.

Personal HPC (bottom left) is another burgeoning area.

Many of these may not even be viable in the marketplace; that will be determined by market demand (OK, maybe with some marketing too).

Hope this provides a good framework to think about the Cloud fabric support you need, depending on where your applications fall in this continuum. I’m sure I have missed other Workload categories, and I’d always be interested in hearing your thoughts and insights, of course.

So, what color is your Cloud Fabric?

RESTful Business…

In API, Business models on November 26, 2008 at 5:42 pm

One of the key drivers behind Cloud Computing, of course, is the standardization of web APIs around the REST programming model. Whether you are a technology provider, content provider or service provider, there is always room for a strategy to monetize, deepen customer relationships or foster a stronger community via REST APIs. So, based on your target market, chances are you can always unearth business models that monetize your APIs, given appropriate billing and API usage provisioning capabilities.

Let’s look at a few examples:

1. Netflix: by opening up the APIs to their movie titles and subscriber queues, they enable richer user experiences outside of the Netflix web site context (e.g. I’m sure there are iPhone apps for Netflix in the works), and they enable new partners and all sorts of niche players (e.g. online Bond movie communities building apps around their interests). It’s all about moving “down” the Long Tail and getting communities to build things that Netflix can’t build alone. It’s all about faster innovation, driving more “positive” feedback into their user base.

2. BestBuy: is all about getting their catalog/estore to wherever you hang out on the web. Talk about extending your customer reach…literally, extending your sales channels.

3. Shopping.com: again, it’s all about growing the ecosystem of customers and partners by enabling off-site experiences.

Exposing APIs, metering the usage of the infrastructure supporting the APIs, catering to the “Long Tail” by enabling crowd-sourcing: all of these facilitate new business models, or extend existing ones, in a highly leveraged manner. You’re literally extending the reach of your business development activities, and with appropriate tracking/measurement in place (e.g. which of your API use cases are popular, which are growing like crazy, etc.) you have a mechanism to weed out or surface viable business models and extensions to current business models (see the small metering sketch below).
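
To illustrate the metering point, here is a minimal, hypothetical usage meter: count calls per API key and endpoint so you can both bill for usage and see which parts of the API are actually growing. The class, the price per call and the endpoint names are all made up for this sketch.

```python
from collections import Counter

class ApiMeter:
    """Toy usage meter: count calls per (api_key, endpoint)."""
    def __init__(self):
        self.calls = Counter()

    def record(self, api_key, endpoint):
        self.calls[(api_key, endpoint)] += 1

    def monthly_bill(self, api_key, price_per_call=0.01):
        # Simple pay-per-call billing; real schemes would tier or cap this.
        n = sum(count for (key, _), count in self.calls.items() if key == api_key)
        return n * price_per_call

    def top_endpoints(self, limit=3):
        # Which parts of the API are popular / growing like crazy?
        totals = Counter()
        for (_, endpoint), count in self.calls.items():
            totals[endpoint] += count
        return totals.most_common(limit)

meter = ApiMeter()
meter.record("partner-123", "/catalog/search")
meter.record("partner-123", "/catalog/search")
meter.record("partner-123", "/titles/queue")
print(meter.monthly_bill("partner-123"))   # 3 calls * $0.01 = 0.03
print(meter.top_endpoints())
```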

Of course, it takes a lot of discipline and focus to manage these changes (e.g. watch out for cannibalizing existing channels or incentive structures). Corporations, however big, can’t think of all the great ideas themselves, so it just makes sense to enable communities and accept the wisdom of “crowd sourcing”. After all, isn’t this the age of the programmable web?

Scalability, OSS, Cloud computing business models…

In Business models, Cloud Computing, Open Source Software, Scalability on November 17, 2008 at 5:37 pm

Just to get a perspective on how technology trends (cloud computing, open source software, need for web scale infrastructure) and network effects impact business dynamics, consider:

  1. Moore’s law (CPU power doubles every 18 months)
  2. Gilder’s law (Fiber bandwidth triples every 12 months)
  3. Storage capacity doubles every 12 months (this is sometimes called Kryder’s law); a quick compounding example follows below.
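
Just to see how those rates compound, here is a back-of-the-envelope calculation over an arbitrary six-year horizon (the horizon and the exact rates are illustrative, not data):

```python
# Compounding the three trends over a 6-year horizon (illustrative only).
years = 6
cpu       = 2 ** (years * 12 / 18)   # doubles every 18 months -> ~16x
bandwidth = 3 ** (years * 12 / 12)   # triples every 12 months -> 729x
storage   = 2 ** (years * 12 / 12)   # doubles every 12 months -> 64x
print(round(cpu), round(bandwidth), round(storage))   # 16 729 64
```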

While these propel improvements such as CMT (chip multi-threading), greater I/O throughput, and greater storage capacity and performance at lower costs, they also enable other developments such as virtualization, resulting in lower costs, greater performance and better utilization levels.

Now consider some of the business dynamics based on network effects:
  1. Metcalfe’s law: the value of a network is proportional to the square of the number of users (think LinkedIn, or your very own Ning network)
  2. Power law: the proverbial long tail and the so-called “Freemium” business model (i.e. free software == zero cost marketing); you pay when you need extra services. A small sketch of both effects follows.
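
Here is a toy sketch of both effects (the constants are arbitrary; only the shape of the curves matters): network value growing roughly with the square of the user count, and a Zipf-like demand curve where the long tail of individually unpopular items still adds up to a big share.

```python
# Metcalfe: value ~ n^2 while the cost of serving users grows ~ n.
def network_value(n, k=0.01):
    return k * n ** 2

for n in (1_000, 10_000, 100_000):
    print(n, network_value(n))        # 10x the users -> ~100x the value

# Power law (Zipf-like): demand for the i-th most popular item ~ 1/i.
total = sum(1 / i for i in range(1, 100_001))
head = sum(1 / i for i in range(1, 101)) / total
print(f"top 100 items: {head:.0%} of demand; the other 99,900 items: {1 - head:.0%}")
```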

While few companies enjoy the “head” side of the power curve, things like AWS and Zembly speed up the deployment cycle all along the power curve, even in the long tail. It is clear that helping companies go up the power curve is a good business to be in, and faster hardware just makes it faster! Why? Because there is room for every type of community or customer base, however small it may be, and the marginal cost of delivering to those segments is becoming smaller every day, thanks to web scale frameworks and the variable cost infrastructure made possible by these innovations.

Open Source software + the Cloud compute model helps with Economies of Scope (i.e. the breadth of activities you need to handle in a profitable way) in an unprecedented way: you just don’t worry about IT needs in the traditional manner anymore.
Meanwhile, the HW innovation trends discussed above + network effects + the Cloud compute model can also help with Economies of Scale (e.g. the pay-as-you-go model with AWS) by letting you build a business model on minuscule “conversion rates” off of a very large online customer base, as the rough numbers below illustrate.
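
For a rough sense of what that can mean, a tiny bit of arithmetic (every number here is invented for the illustration):

```python
# A tiny conversion rate off a very large base, on pay-as-you-go infrastructure.
visitors               = 5_000_000   # monthly reach via APIs / the long tail
conversion_rate        = 0.002       # 0.2% become paying users
revenue_per_user       = 5.00        # $ per month
infra_cost_per_visitor = 0.001       # $ per month, variable cost

revenue = visitors * conversion_rate * revenue_per_user   # $50,000
cost    = visitors * infra_cost_per_visitor               # $5,000
print(revenue, cost, revenue - cost)                      # 50000.0 5000.0 45000.0
```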