Plastic brain learning/un-learning

Java Observability, Manageability in the Cloud

In API, Manageability on January 30, 2009 at 10:14 pm

Have you done this fire drill: you have a high-traffic, high-volume web application and it is sluggish or unstable. Your team is called in to figure out whether the problem is in the application, the middleware stack, the operating system, or somewhere else in the deployment configuration?

This is where Observability comes in: being able to dynamically probe resource usage (across all levels of the infrastructure/application stack) at a granular level, control probing overhead, and associate actions or triggers with those probes at all levels. At a system level, DTrace is a good example on Solaris & Mac OS X. There’s even a Java API for DTrace. Of course, there are other profiling libraries that offer low-overhead, extremely fine-grained probes and aggregation capabilities (e.g. JETM).
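For instance, here is a minimal sketch of an application-level probe in the spirit of JETM; the class and method names are from my recollection of the JETM 1.2 API, and the probe name is made up, so treat this as illustrative rather than authoritative:

    import etm.core.configuration.BasicEtmConfigurator;
    import etm.core.configuration.EtmManager;
    import etm.core.monitor.EtmMonitor;
    import etm.core.monitor.EtmPoint;
    import etm.core.renderer.SimpleTextRenderer;

    public class ProbeSketch {
        public static void main(String[] args) {
            BasicEtmConfigurator.configure();                 // in-memory aggregation, low overhead
            EtmMonitor monitor = EtmManager.getEtmMonitor();
            monitor.start();

            EtmPoint point = monitor.createPoint("checkout.placeOrder");
            try {
                // ... the code path being measured ...
            } finally {
                point.collect();                              // record elapsed time for this probe
            }

            monitor.render(new SimpleTextRenderer());         // dump aggregated stats per probe
            monitor.stop();
        }
    }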

Manageability is another important consideration: being able to control and manage applications via standard management systems. This involves (hopefully) consistent instrumentation mechanisms, and standard isolation mechanisms between the IT resources being managed and the external management systems. JMX is an example of one such standard. Other options, such as a JMX-to-SNMP bridge or MIBs compiled to MXBeans, are ways of integrating managed resources into higher-level (management) frameworks.
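As a concrete illustration, here is a minimal JMX “standard MBean” that exposes a counter to any JMX-aware management system (JConsole, an SNMP bridge, etc.). The class names and ObjectName are hypothetical; the javax.management calls are the standard JDK API:

    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // For a standard MBean, the management interface must be named <ImplClass>MBean.
    interface RequestStatsMBean {
        long getRequestCount();
        void reset();
    }

    class RequestStats implements RequestStatsMBean {
        private final AtomicLong requests = new AtomicLong();

        void recordRequest() { requests.incrementAndGet(); }   // called by the application

        @Override public long getRequestCount() { return requests.get(); }
        @Override public void reset()           { requests.set(0); }
    }

    public class ManageabilityExample {
        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            RequestStats stats = new RequestStats();
            // The ObjectName is what the management system sees for this resource.
            mbs.registerMBean(stats, new ObjectName("com.example.app:type=RequestStats"));

            stats.recordRequest();
            System.out.println("Registered; attach JConsole to inspect the MBean.");
            Thread.sleep(60_000);  // keep the JVM alive long enough to connect
        }
    }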

Flexible binding of computing infrastructure to workloads is a key value proposition of the Cloud Computing model. Cloud computing providers like Amazon serve the needs of typical web workloads by providing access to their dynamic infrastructure.

The problem is workload diversity. Vendors like Amazon and Google essentially offer distributed-computing workload platforms. Imagine tracing application performance issues on a distributed platform!

So, why are Observability and Manageability more important in the Cloud?

Because consumption of a fixed resource like CPU hours is not optimal when you are on an “elastic”/dynamic infrastructure. You’d want to pay for exact utilization levels.

Because you want monitoring arrangements that work seamlessly, as they do on a single system, but on top of an “elastic”/dynamic infrastructure.

There are many other reasons, of course: 

Being able to manage the lifecycle of applications in an automated manner, in a distributed computing environment, is perhaps the biggest use case for Manageability in the Cloud.

Metering, billing, performance monitoring, sizing and capacity planning are some examples of activities in the Cloud computing model that leverage the same underlying Observability principles (instrumentation, dynamic probes, support for aggregating metrics, etc.).

Let’s look at a couple of examples of what Observability and Manageability capabilities can enable in the Cloud computing context:

  • Customers can pay at a more granular level, e.g. for throughput (requests or transactions processed per unit of time) at a given latency (and other SLA items), so your chargeback model makes sense in the context of your business activities (a toy chargeback calculation follows this list).
  • Enable better business opportunities in a cost-effective manner. For example, Mashery focuses metering/instrumentation at the API level; the proposition in this context is to open up your APIs and meter them, giving you an automated “business development filtering” mechanism. That is attractive if you’re in the right business, even on the “long tail” (i.e. customer traffic/volume is not high), because you have an ecosystem of hundreds of developers or partners to support that long tail without burning through your bank account. See my earlier post on RESTful business for more context; these considerations are even more valid in the Cloud computing paradigm.
  • Ensuring DoS-style attacks don’t give you a heart attack (because your elastic cloud racked up a huge bill).
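To make the first bullet concrete, here is a toy chargeback calculation; the rates, SLA split and request counts are entirely made up for illustration:

    public class ChargebackSketch {
        public static void main(String[] args) {
            // Hypothetical metered data for one billing period (from the Observability layer).
            long requestsWithinSla = 9_500_000;   // requests served under the latency SLA
            long requestsOverSla   =   500_000;   // requests that missed the SLA

            // Hypothetical rate card: full price only for requests that met the SLA.
            double ratePerThousandWithinSla = 0.10;  // $ per 1,000 compliant requests
            double ratePerThousandOverSla   = 0.02;  // discounted $ per 1,000 non-compliant requests

            double charge = (requestsWithinSla / 1_000.0) * ratePerThousandWithinSla
                          + (requestsOverSla   / 1_000.0) * ratePerThousandOverSla;

            System.out.printf("Charge for the period: $%.2f%n", charge);  // $960.00 with these numbers
        }
    }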

Observability and Manageability are key “infrastructure” capabilities in the Cloud computing model that enable features and value propositions such as the ones discussed above. These are not new ideas, but adaptations of time-tested ideas to a new computing paradigm (i.e. predominantly distributed computing over adaptive/dynamic infrastructure, though other variations will crop up).

What color is your Cloud fabric?

In Business models, Cloud Computing, Use Cases on January 15, 2009 at 9:30 pm

Cloud Fabric is a phrase that conjures up myriad complex capabilities and unique features, depending on the vendor you talk to. Here, I take the wabi-sabi approach of simplifying things down to essentials, and then try to make sense of all the different variations from vendors within a simple, high-level framework or mental model.

Before we go into details, a caveat: the different types of Cloud fabric discussed in this framework are always provisional, in the sense that technological improvements (e.g. Intel already has a research project with 80 cores on a single chip) will influence this space in significant ways.

Back to the topic: loosely defined, Cloud fabric is a term used to describe the abstraction for Cloud platform support. It should cover all aspects of the life-cycle and (automated) operational model for Cloud computing. While there are many different flavors of Cloud fabric from vendors, it is important to keep in mind the general context that drives features into the Cloud fabric: the “Cloud fabric” must support, as transparently as possible, the binding of an increasing variety of Workloads to (varying degrees of) Dynamic infrastructure. Of course, vendors may specialize in fabric support for specific types of Workloads, with limited or no support for Dynamic infrastructure. By Dynamic infrastructure, I mean that it is not only “elastic”, but also “adaptive” to your deployment optimization needs.

That means your compute, storage and even networking capacity is sufficiently “virtualized” and “abstracted” that elasticity and adaptiveness directly address application workload needs, as transparently as possible (for example, is the PaaS or IaaS provider on converged data center networks, or do they still have I/O and storage networks in parallel universes?). If VMs running your application components can migrate, so should storage and interconnection topologies (firewall, proxy and load balancer rules, etc.), seamlessly and, from a customer perspective, “transparently”.

Somewhere all of this virtualization meets the “un-virtualized” world! 

Whether you are a PaaS/IaaS, “Cloud-center” or “Virtual private Data center” provider, variations in Cloud fabric capabilities stem from the degree of support across both of these dimensions:

  • Breadth of Workload categories that can be supported. E.g. AWS and Google App Engine platform support is geared towards workloads that fit the shared-nothing, distributed computing model, based on horizontal scalability using OSS, commodity servers and (mostly) generic switched network infrastructure. Keep in mind that Workload characterization is an active research area, so I won’t pretend to boil this ocean (I am not an expert by any stretch of the imagination), but will just represent the broad areas that seem to be driving the Cloud business models.
  • How “Dynamic” the Cloud infrastructure layer is in supporting the Workload categories above: hardware capacity, configuration and the general end-to-end topology of the deployment infrastructure matter when it comes to performance. No single Cloud Fabric can accommodate the performance needs of every type of Workload in a transparent manner. E.g. I wouldn’t try to dynamically provision VMs for my online gaming service on AWS unless I could also control routing tables and the multicast traffic strategy (generally, in a distributed computing architecture, you want to push the processing of state information close to where your data is; in the online gaming use case, that could mean your multicast packets need to reach only the limited set of nodes that are in the same “online neighborhood” as the players; a minimal multicast sketch follows this list). Imagine provisioning VMs and storage dynamically for such an application across your data center without any thought to deployment optimization. Without an adaptive infrastructure under the covers, an application that has worked perfectly in the past might experience serious performance issues in the Cloud environment. Today, the PaaS programming and deployment model determines the Workload match.
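As an aside, here is a minimal sketch of the kind of group-scoped multicast such a game server might rely on; the group address, port and payload are hypothetical, and the point is simply that the application assumes control over which nodes receive this traffic, which a generic cloud network may not give you:

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class NeighborhoodMulticast {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.42"); // hypothetical "neighborhood" group
            int port = 4446;

            try (MulticastSocket socket = new MulticastSocket(port)) {
                socket.joinGroup(group);                 // only nodes joined to this group see the traffic

                byte[] state = "player-state-update".getBytes("UTF-8");
                socket.send(new DatagramPacket(state, state.length, group, port));

                byte[] buf = new byte[1024];
                DatagramPacket incoming = new DatagramPacket(buf, buf.length);
                socket.receive(incoming);                // blocks until a neighbor publishes state
                System.out.println(new String(incoming.getData(), 0, incoming.getLength(), "UTF-8"));

                socket.leaveGroup(group);
            }
        }
    }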

 

So, in general, Cloud fabric should support transparent scaling, metering/billing, “secure”, reliable and automated management, fault tolerance, deployment templates and good programming/deployment support, “adaptive” clustering, etc., but the specific Cloud fabric features and Use Case support you need depend on where your application sits, at the intersection of the Workload and the need for Dynamic/adaptive Infrastructure. Here’s what I mean…

 

[Table: Example of functional capabilities required of Cloud fabric, based on Workload type and Infrastructure/deployment needs]

Let me explain the above table: the Y-axis represents a continuum of the deployment technologies & infrastructure used, from support for “task-level parallelism” (typically thread-parallel computation, where each core might be running different code) at the bottom, to “data parallelism” (typified by multiple CPUs or cores running the same code against different data) at the top. The X-axis broadly represents the two major Workload categories (a small code sketch contrasting the two styles follows this list):

  1. High Performance workloads: where you want to speed up the processing of a single task on a parallel architecture (where you can exploit data parallelism, e.g. rendering, astronomical simulations, etc.)
  2. High Throughput workloads: where you want to maximize the number of independent tasks completed per unit time, on a distributed architecture (e.g. web-scale applications)
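Here is a small, self-contained sketch of the two styles on a single machine (the workloads themselves are trivial stand-ins): data parallelism applies the same operation to slices of one data set to finish a single job faster, while the throughput style runs many independent tasks concurrently.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.stream.IntStream;

    public class ParallelismStyles {
        public static void main(String[] args) throws InterruptedException {
            // Data parallelism: the same code applied to different slices of one large data set,
            // with the goal of finishing this one job sooner (high performance).
            double[] samples = new double[10_000_000];
            double sumOfSquares = IntStream.range(0, samples.length)
                    .parallel()
                    .mapToDouble(i -> samples[i] * samples[i])
                    .sum();
            System.out.println("Single job, data-parallel result: " + sumOfSquares);

            // Task parallelism / high throughput: many independent requests, each potentially
            // running different code; the goal is tasks completed per unit time.
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int request = 0; request < 100; request++) {
                final int id = request;
                pool.submit(() -> System.out.println("Handled independent request " + id));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }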

Where your application(s) fall into this matrix determines what type of support you need out of your Cloud fabric. There’s room for lots of niche players, each exposing the advantages of their unique choice of deployment infrastructure (how dynamic is it?) and PaaS features that are friendly to your Workload.

The above diagram shows some examples. Most of what we call Cloud computing falls into the lower-right quadrant, as many vendors are proving the viability of this business model for this type of Workload (market). Features are still evolving.

Of course, Utility computing (top right) and “mainframe”-class, massively parallel computing (top left) capacity have been available for hire for many, many years. What’s new here is how this can be accessed and used effectively over the internet (along with all that it entails: a simple programming/deployment model, manageability, and a friendly Cloud fabric to help you with all of this in the variable-cost model that is the promise of Cloud computing). Vendors will no doubt leverage their “Grid” management frameworks.

Personal HPC (bottom left) is another burgeoning area.

Many of these may not even be viable in the marketplace; that will be determined by market demand (OK, maybe with some marketing too).

Hope this provides a good framework for thinking about the Cloud Fabric support you need, depending on where your applications fall in this continuum. I’m sure I have missed other Workload categories; I’d always be interested in hearing your thoughts and insights, of course.

So, what color is your Cloud Fabric?