Monday, October 25, 2010

Jobs phased out by the Cloud

Is your job secure from being phased out by cloud computing?  Cloud providers will be responsible for most of the technical requirements, so where does that leave you?

Most technical issues in computing will soon be handled by the hypervisors, load balancers, etc. on the supplier's end. However, each supplier will still have to be monitored and managed through service contracts.

InfoWorld has outlined some of the role shifts, based on Forrester and Gartner research:

“IT jobs of the future will be highly administrative, with a focus on capacity planning and contract management.” – Gravitant, Inc.

Quick tools to help you prepare for this shift are available here.

Friday, October 15, 2010

Overutilization vs Underutilization of Virtual Machines

Does virtualization truly improve utilization?

If so, shouldn’t costs be lower as a result?  Well, maybe in the short term… but in the long run we are starting to see cases where costs accumulate and virtualization ends up being more expensive.

Gravitant’s response:
“Over-provisioning of Virtual Machines has resulted in Virtual Machine sprawl, which is difficult to manage.  This has resulted in higher cost over time.  On the other hand, under-provisioning results in lower performance and SLA penalties.” – Mohammed Farooq, CEO, Gravitant

Gravitant’s capacity planning tool configures the optimal physical and virtual machine landscape of servers, network, and storage to
  1. Minimize VM Sprawl
  2. Meet application performance thresholds
  3. Reduce data center costs – systems and operations
Operations Research tools are used to balance the tradeoff between cost (due to sprawl) and performance (from SLAs).
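
To make that tradeoff concrete, here is a minimal sketch of the kind of optimization involved.  The cost figures, SLA threshold, and response-time curve are illustrative assumptions, not Gravitant's actual models:

```python
# Toy model: pick the VM count that minimizes infrastructure cost plus SLA penalties.
# All rates, thresholds, and the response-time formula are illustrative assumptions.

def total_cost(vm_count, demand_tps, cost_per_vm=200.0, capacity_per_vm=50.0,
               sla_ms=600.0, penalty=5000.0):
    """Monthly cost of running vm_count VMs against demand_tps transactions/sec."""
    infrastructure = vm_count * cost_per_vm                  # sprawl cost grows with VM count
    utilization = demand_tps / (vm_count * capacity_per_vm)
    # Crude queueing-style blow-up in response time as utilization approaches 1.
    response_ms = 100.0 / max(1e-6, 1.0 - min(utilization, 0.999))
    sla_penalty = penalty if response_ms > sla_ms else 0.0   # under-provisioning penalty
    return infrastructure + sla_penalty

def best_vm_count(demand_tps, max_vms=40):
    """VM count balancing sprawl cost against SLA risk."""
    return min(range(1, max_vms + 1), key=lambda n: total_cost(n, demand_tps))

print(best_vm_count(demand_tps=400))   # 10 with these assumed numbers: enough headroom, no sprawl
```

A real capacity planner would replace the made-up response-time curve with measured data, but the structure of the decision is the same: too many VMs and you pay for sprawl, too few and you pay the SLA penalty.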

Friday, September 17, 2010

3 keys to Capacity Planning for Virtualization

Fact:  Everyone’s going virtual to reduce cost and improve utilization. Why not?  Sharing resources and paying for them on demand should be beneficial.  But that just makes the capacity planner’s job even more difficult!

3 big challenges in Virtualization that do not exist in traditional capacity planning:

1. Capacity on each box is dynamically allocated, so how much of the resources did each virtual machine (VM) actually get?

2. Each VM on a box adds hypervisor overhead that reduces performance, so what is the critical number of VMs to configure on each box?

3. Cost models are complex, with options for on-demand, dedicated, and burst capacity, so which option should be chosen?

Solutions:

1. We use two key performance metrics (Transaction Rate and Response Time) that are uniform across all layers and applications.  This tells us how much capacity was effectively used by each application.
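
As an illustration of how transaction rate and response time can be turned into "capacity effectively used," here is a small sketch based on the Utilization Law from operational analysis (U = X × S).  Treating measured response time as a stand-in for service demand is an assumption made for illustration, and the app names and numbers are hypothetical:

```python
# Estimate the capacity each application effectively consumed from its
# transaction rate (throughput) and response time, via U = X * S.

measurements = {
    # app: (transactions per second, avg response time in seconds)
    "billing":   (120.0, 0.004),
    "reporting": ( 15.0, 0.030),
    "portal":    (300.0, 0.002),
}

for app, (tps, resp_s) in measurements.items():
    effective_utilization = tps * resp_s    # fraction of one box kept busy by this app
    print(f"{app:10s} effectively used {effective_utilization:.2f} of a box")
```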

2. We use a slowdown factor that discounts available resources due to hypervisor utilization.  As a result, we can derive the optimal number of VMs on each box.
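
A toy version of the slowdown-factor idea, with a made-up linear overhead per VM (the real overhead curve would be measured, not assumed):

```python
# Each added VM costs some hypervisor overhead, so a box's effective capacity
# grows sub-linearly with VM count and eventually drops.

def effective_capacity(num_vms, per_vm_capacity=1.0, overhead_per_vm=0.05):
    """Usable capacity of one physical box hosting num_vms VMs."""
    slowdown = 1.0 - overhead_per_vm * num_vms       # the slowdown factor
    return max(0.0, num_vms * per_vm_capacity * slowdown)

def optimal_vm_count(max_vms=20):
    """VM count per box that maximizes effective (not nominal) capacity."""
    return max(range(1, max_vms + 1), key=effective_capacity)

print(optimal_vm_count())   # 10 with the assumed 5% overhead per VM
```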

3. Because of solutions 1 & 2, we are able to accurately forecast capacity requirements, which can then be compared across the different cost models.  If capacity requirements are high but stable, dedicated capacity would be cheaper, but on-demand is better when requirements are volatile.
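
And a quick sketch of the dedicated vs on-demand comparison once a capacity forecast exists.  The rates and the forecast are invented numbers:

```python
# Compare dedicated (pay for the peak, cheaper per unit) against on-demand
# (pay for what you use, pricier per unit) for a given capacity forecast.

forecast = [40, 42, 41, 43, 40, 44, 42, 41]   # forecast capacity (VMs) per month

DEDICATED_RATE = 60.0    # $/VM-month, provisioned at the peak for every month
ON_DEMAND_RATE = 100.0   # $/VM-month, billed only for what is used

dedicated_cost = max(forecast) * DEDICATED_RATE * len(forecast)
on_demand_cost = sum(month * ON_DEMAND_RATE for month in forecast)

print("dedicated:", dedicated_cost)   # 21120: wins for this stable forecast
print("on-demand:", on_demand_cost)   # 33300
```

Swap in a spiky forecast (say, mostly 5 VMs with one month at 60) and the comparison flips in favor of on-demand, which is exactly the stable-vs-unstable rule of thumb above.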

Saturday, September 11, 2010

Decision Support for Public Healthcare Administration in Indiana

Decision support using analytics sounds great!  But where do we begin?  There's so much data being collected and stored and secured to the nth degree, but now what?

The main issue with all the data we are collecting is that it is usually inconsistent.  A number of 'events' on both the demand and the supply side distort the picture.  So, is low throughput due to fewer resources or to low demand?

Therefore, the CIO's decision support group would first need to "cleanse" the data and wrap a structure around it. Then decision support is a matter of applying one of the many analytical tools out there in the right context.  That raises the question: how much time and effort would that take?
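
To make the "cleanse, then structure" step concrete, here is a minimal, hypothetical sketch.  The field names and the rule for attributing low throughput to the demand or supply side are invented for illustration; they are not FSSA's actual schema or Gravitant's logic:

```python
# Cleanse weekly case-processing records, then tag each week with a likely
# driver so a throughput dip can be read as demand-driven or supply-driven.

import csv
from io import StringIO

raw = StringIO("""week,applications_received,cases_processed,caseworkers_available
2010-01,1200,1100,40
2010-02,,950,38
2010-03,1300,900,30
""")

cleansed = []
for row in csv.DictReader(raw):
    if not row["applications_received"]:     # cleanse: drop incomplete rows instead of guessing
        continue
    demand = int(row["applications_received"])
    processed = int(row["cases_processed"])
    staff = int(row["caseworkers_available"])
    cleansed.append({
        "week": row["week"],
        "throughput": processed,
        # structure: a simple attribution rule (low staffing vs low demand)
        "driver": "supply" if (processed < demand and staff < 35) else
                  ("demand" if processed >= demand else "unclear"),
    })

for record in cleansed:
    print(record)
```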

Well, it took Gravitant's professional services group only two months to get the FSSA of Indiana up and running.  Gravitant's BusinessMatrix platform was deployed first, followed by its AdvancedAnalytics modules, providing visibility into throughput and timeliness and then decision support for bottleneck identification and optimal resolution options.

Friday, July 30, 2010

Predictive IT Management for Healthcare Transformation in Texas

It seems like all the chatter on healthcare reform has disappeared from the media now that the law has been passed.  But will it actually get implemented at the state level?  How effective will it be?  How efficient will it be?  The Health and Human Services Commission (HHSC) in Texas is battling through the details, but it is clear that the IT infrastructure needed to orchestrate this at an operational level is mind-boggling.

The Health Information Exchange (HIE) has been launched to integrate all the participating providers in Texas.  However, this will only add more load to the already staggering demand handled by the HHSC Office of the CIO.  In addition to policy changes, the CIO also has to plan for demand generated by natural disasters, economic recession, etc.  Moreover, the CIO has to manage an endless list of vendors and private contractors responsible for different segments of the organization.  In such a complex and volatile environment, the CIO of HHSC Texas has turned to expert decision support from Gravitant to match business demand with IT supply.

Gravitant’s Predictive IT Management (PITMAN) tool answers the following questions for the CIO:

  • When would the system first break due to the load?

  • Which component is the bottleneck?

  • How many resources should be added at the bottleneck to eliminate this issue?

  • What is the schedule for resource addition?

  • What is the cost and risk impact of this transformation?
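
As a rough illustration of how such questions can be answered, here is a toy bottleneck analysis using the operational-analysis rule that the resource with the largest service demand per transaction saturates first.  The tiers, demands, and target load are invented numbers, not HHSC's data or PITMAN's internals:

```python
import math

# Service demand (seconds of that resource consumed per transaction) and servers on hand.
resources = {
    "web tier": {"demand_s": 0.010, "servers": 4},
    "app tier": {"demand_s": 0.025, "servers": 6},
    "database": {"demand_s": 0.040, "servers": 4},
}

def max_throughput(res):
    """Transactions/sec this resource can sustain before saturating."""
    return res["servers"] / res["demand_s"]

bottleneck = min(resources, key=lambda name: max_throughput(resources[name]))
breaking_point = max_throughput(resources[bottleneck])

target_tps = 250.0   # projected load
servers_needed = math.ceil(target_tps * resources[bottleneck]["demand_s"])

print("bottleneck:", bottleneck)                       # database: 4 / 0.040 = 100 tps ceiling
print("system first breaks at:", breaking_point, "tps")
print("servers needed at the bottleneck for", target_tps, "tps:", servers_needed)
```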

Video demo here...