Plastic brain learning/un-learning


Dealing with Web 2.0, Cloud computing trends….

In Systems thinking on December 7, 2008 at 8:55 pm

My favorite oxymoron(ic) concept/phrase is “sustainable competitive advantage”…it’s all about getting to a point where you have sufficient lead time over competitors on every aspect of ‘competitive advantage’, such as Product or Technology supremacy, Customer intimacy or Operational Efficiency. You need constant learning & unlearning to recognize when the rules of the game change, when the frame of reference shifts, and when it is time to refocus. The Programmable Web, Web 2.0, the “Participation age”, *aaS (everything as a Service)…call it what you like, it is just another inflection point in how the technology game is evolving. But first, let’s talk about how you might lose competitive advantage. You lose competitive advantage due to:

  • Imitation by competitors: Amazon’s AWS, Google Apps and Joyent are some of the forerunners in the Cloud computing offerings space. While first-mover advantages could be significant in the presence of strong network effects, the concepts behind the Cloud computing business model are certainly no barrier to entry. So, expect responses from other big players such as Microsoft, HP, IBM and Sun. Interestingly, Sun’s Network.com offerings competed on a similar business model, but addressed a much narrower segment of workloads. Then, Web 2.0 came along…. Companies with large install bases of their own in various customer segments such as Enterprises, Web 2.0/startups, Channels (Resellers, OEMs) and Developers are sure to come up with unique value propositions for their primary revenue base, and build on their existing networks first. “Best Practices” is the euphemism for imitation…not that it is bad, but remember it is an equalizer. Now, in the context of Cloud computing, ITIL/ITSM as a best practice will begin to be redefined.
  • Denial or Inertia of incumbents: Ironically, the likelihood of a delayed response to technological shifts increases with a company’s level of success with previously dominant technologies. Google and Amazon seem to have a cost-structure advantage, and have exploited Web 2.0 standards to roll out utility-scale computing. Sunk costs and current cost structures impose quite a lot of inertia on the big players (IBM, Sun etc.) while they formulate response strategies…
  • Exploiting your strengths to the point where they turn out to be your weakness (Sub-Optimization): What has worked in the past may not work in the future. For example, the network effects that drove adoption of the Microsoft PC platform are not as relevant in the Web 2.0, Open Source/Open Standards world. Similarly, the high-performance server market is not as attractive to customers as before in the Web 2.0 world, because open source innovations enable highly scalable, utility-scale deployments using low-end hardware and open source software. Think about what MogileFS, Hadoop, BigTable and AWS do to HW vendors…you care less about OS innovations such as ZFS or DTrace when you have to worry about large-scale deployments. Application-level fault tolerance takes away a big chunk of the value proposition of these technologies.
  • Change in the rules of the game: Railroads, Telegraph/Telephone networks and Automotive manufacturing all went through this, and now it is the IT industry’s turn. The first generation of these industries worried about manufacturing issues (e.g. once Ford Model T’s started rolling out en masse, success in dealing with production problems created a new problem/game: dealing with growth, customer segments, distribution etc.). After the Mainframe, Client-server & PC eras, we’re moving to a ubiquitous & participatory age of computing that needs utility-scale compute and storage capacity. Cloud computing seems to address the business model concerns in this area in a simple, scalable manner.
  • Change in the very context/frame of reference: The nature of computing, as well as how you look at computing in general, is changing (i.e. the frame of reference that passes for common “reality”, and the methods of inquiry). All enterprises have to deal with a social network of some sort at the edges. The Long Tail, the Power Law and the idea of offering APIs for a programmable web of services around your products form the new frame of reference. You just don’t “sell products” anymore. This is where terms like “Platform as a Service”, “Infrastructure as a Service”, “Software as a Service” etc. come into play…whether you like it or not, companies will have to be “operators” of an “IT utility” at some level. Today we tend to look at “internal applications” versus “customer-facing” applications as adjuncts to product strategy. We’re moving into an era of information-bonded, social networks.

Google’s promise of more eco-friendly Data centers, Microsoft’s Generation 4 Data center design, and how these companies craft their business models around Cloud computing at all levels, will determine how the IT industry transforms in the next round of this game.

Path to greener Data Centers, Clouds and beyond….

In Cloud Computing, Data center, Energy efficient on December 7, 2008 at 7:49 am

Data center power consumption (w.r.t. IT equipment like Servers and Storage) is a hot topic these days. The cumulative install base of Servers around the globe is estimated to be in the range of 50 million units, which in the grand scheme of things might account for roughly 1% of worldwide electricity consumption (according to estimates from Google; let me know if you need a reference). If you’re wondering, yes, the worldwide install base of PCs might consume several multiples of the power consumed by Servers in Data centers…why? Because the worldwide PC install base is upwards of 600 million units!
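
A quick back-of-envelope check of that claim, as a rough sketch; the average per-server draw and the worldwide electricity total below are my own illustrative assumptions, not figures from the estimates cited above:

    # Rough sanity check of the "servers ~ 1% of worldwide electricity" figure.
    # AVG_SERVER_WATTS and WORLD_TWH are illustrative assumptions.
    SERVERS = 50e6                # installed servers (figure quoted above)
    AVG_SERVER_WATTS = 250        # assumed average draw per server
    HOURS_PER_YEAR = 24 * 365
    WORLD_TWH = 18_000            # assumed worldwide electricity use, TWh/year

    server_twh = SERVERS * AVG_SERVER_WATTS * HOURS_PER_YEAR / 1e12
    print(f"Servers draw ~{server_twh:.0f} TWh/year, "
          f"~{100 * server_twh / WORLD_TWH:.1f}% of worldwide electricity")
    # Cooling and power-distribution overheads in the Data center roughly
    # double the facility-level figure, which is how you get to ~1% and beyond.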

Data center power consumption obviously gets more attention…because of the concentration of power consumption. 50 Megawatts of power consumed at one Data center location is more conspicuous than the same amount of power consumed by PCs across millions of households and businesses.

Trying to understand the trend towards more eco-friendly computing requires understanding developments at many levels: starting from VLSI design innovations at the Chip or Processor level, through the Board level and the Software level (Firmware, OS/Virtualization, Middleware, Applications), and finally up to the Data center level.

Chip or Processor level: Processor chips are already designed to run at a lower frequency based on load, in addition to providing support for virtualization. In multi-core chips, cores can be off-lined depending on need (or problems). Chips are designed with multiple power domains…so CPUs can draw less power based on utilization. The issue is with other parts of the computer system such as Memory, Disks etc. Can you ask the memory chips to offline pages and draw less power? Can you distribute data across Flash or Disks optimally to allow similar proportional power consumption based on utilization levels? These are certainly some of the dominant design issues that need to be addressed, keeping in mind constraints such as low latency and little or no “wake-up” penalty.
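
The frequency-scaling piece is already visible from the OS today. Here is a minimal sketch that reads the per-core scaling state exposed by the Linux cpufreq subsystem under sysfs; the choice of Linux and these exact paths is an assumption for illustration, and other operating systems expose the same information through their own power-management frameworks:

    # Minimal sketch: inspect per-core frequency scaling via Linux cpufreq sysfs.
    from pathlib import Path

    def cpufreq_state(cpu: int = 0) -> dict:
        base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq")

        def read(name: str) -> str:
            return (base / name).read_text().strip()

        return {
            "governor": read("scaling_governor"),       # e.g. "ondemand", "powersave"
            "cur_khz":  int(read("scaling_cur_freq")),  # current operating frequency
            "min_khz":  int(read("scaling_min_freq")),
            "max_khz":  int(read("scaling_max_freq")),
        }

    if __name__ == "__main__":
        print(cpufreq_state(0))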

Board level: Today, Server virtualization falls short of end-to-end virtualization. When machine resources are carved up, guest VMs don’t necessarily get a proportional carve-up of the hardware resources. Network-level virtualization is just beginning to evolve, for example Crossbow in OpenSolaris. Another example is Intel’s VT technology, which enables allocation of specific I/O resources (a graphics card or a network interface card) to guest VM instances. If Chip- and Board-level hardware elements are power- (and virtualization-) savvy, you can ensure power consumption that is (almost) proportional to the utilization levels dictated by the workload.
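
To make “power consumption proportional to utilization” concrete, here is a small sketch of a linear power model; the wattage figures are illustrative assumptions, not measurements of any particular server:

    # A typical server has a high idle floor; the "energy-proportional" ideal does not.
    # All wattage figures below are illustrative assumptions.
    def server_power(utilization: float, p_idle: float = 180.0, p_peak: float = 300.0) -> float:
        """Linear power model: fixed idle floor plus a utilization-dependent slope."""
        return p_idle + (p_peak - p_idle) * utilization

    def proportional_power(utilization: float, p_peak: float = 300.0) -> float:
        """The ideal: power draw scales directly with useful work done."""
        return p_peak * utilization

    for u in (0.1, 0.3, 0.5):
        print(f"util={u:.0%}: typical {server_power(u):5.1f} W vs ideal {proportional_power(u):5.1f} W")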

Firmware level: Hypervisors, whether they rely on full (emulated) virtualization or para-virtualization, present a single interface to the hardware, and can exploit all the Chip-level or Board-level support for “proportional energy use” against a given workload.

OS level: Over a sufficiently long time interval (months), server utilization is predominantly characterized by low-utilization intervals. Average utilization of Servers in Data centers is usually less than 50%. That means there is plenty of opportunity for Servers to go into “low-power” mode. How can you design the OS to co-operate here?
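
One way to picture that co-operation is a simple governing loop that steps the machine into a deeper power state when load stays low; the thresholds are assumptions, and get_utilization/set_power_state are hypothetical hooks named only to make the shape of the policy concrete:

    # Sketch of an OS-level power policy loop; the two callbacks are hypothetical hooks.
    import time

    LOW, HIGH = 0.20, 0.60   # assumed thresholds for stepping down/up

    def govern(get_utilization, set_power_state, interval_s: float = 5.0) -> None:
        state = "performance"
        while True:
            u = get_utilization()                # fraction of capacity in use, 0..1
            if u < LOW and state != "powersave":
                state = "powersave"              # drop frequency, offline cores, park disks
                set_power_state(state)
            elif u > HIGH and state != "performance":
                state = "performance"            # wake everything back up quickly
                set_power_state(state)
            time.sleep(interval_s)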

System level: Manageability (e.g. responding to workload changes, migrating workloads seamlessly), Observability (e.g. DTrace), and APIs to manage the Middleware or Application stack in response to low-power modes of operation (again, proportional power usage w.r.t. the workload) are going to be paramount considerations.

Cloud level: Shouldn’t Clouds look like operating systems (seamless storage, networking, backups, replication, migration of apps/data, with dependencies handled much like pkg dependencies)? 3Tera and RightScale solve only some of these problems…many areas still need to be addressed: dynamic, workload-based performance qualification; mapping application criticality to Cloud deployment models; leveraging virtualization technologies seamlessly…
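
As a sketch of what “mapping application criticality to Cloud deployment models” could look like, the tiers, rules and deployment models below are assumptions chosen only to make the idea concrete:

    # Toy mapping from application criticality/sensitivity to a deployment model.
    # Tiers, rules and model names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class App:
        name: str
        criticality: str        # "mission-critical", "business-critical", "best-effort"
        data_sensitivity: str   # "regulated", "internal", "public"

    def deployment_model(app: App) -> str:
        if app.criticality == "mission-critical" or app.data_sensitivity == "regulated":
            return "private cloud / dedicated capacity"
        if app.criticality == "business-critical":
            return "hybrid: steady state in-house, burst to public cloud"
        return "public cloud, utility capacity"

    for app in (App("billing", "mission-critical", "regulated"),
                App("catalog search", "business-critical", "internal"),
                App("log analytics", "best-effort", "internal")):
        print(f"{app.name:15s} -> {deployment_model(app)}")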

Data center level: Several innovations outside of IT (HVAC systems, themselves increasingly enabled by IT/sensor technologies), as well as innovations at all of the levels discussed above, will help drive PUE (Power Usage Effectiveness) at the Data center level closer to the holy grail (PUE = 1, i.e. all the energy supplied to the Data center reaches the IT equipment doing useful compute work, with nothing lost to cooling and power distribution). Microsoft’s Generation 4 effort represents a leap in this domain, as more and more companies realize that this is a big paradigm shift, with the computing business truly going into utility scale/mode.
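
For reference, PUE is simply the ratio of total facility power to IT equipment power; a quick sketch, with sample figures that are illustrative assumptions:

    # PUE = total facility power / IT equipment power; 1.0 means zero overhead.
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        return total_facility_kw / it_equipment_kw

    print(pue(total_facility_kw=10_000, it_equipment_kw=5_000))  # 2.0 -- common for older facilities
    print(pue(total_facility_kw=6_000, it_equipment_kw=5_000))   # 1.2 -- what newer designs aim toward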

So, there are plenty of problems to be solved in the IT space….pick yours at any of these levels 🙂