http://blog.gravitant.com/2010/11/04/it-capacity-planning-in-the-cloud-and-on-the-ground/
Capacity planning is a hot topic in the IT supply chain and a
key requirement for companies making strategic IT decisions. The main challenge
appears to be the lack of a uniform, homogeneous measure for comparing
IT resources. Take the example of server capacity planning: what makes
one server better than another? CPU power and the number of processors and cores are
certainly key elements of a comparison. However, benchmarking results do
not allow a straightforward comparison based on these elements alone. Sun has been
using a benchmarking approach – what it calls “m-values” – for its
servers. SPEC values are the most comprehensive references for benchmarking
against the competition. At the end of the day, though, all these values are
declared and endorsed by the vendors for their own servers. Moreover, experimental
conditions and minor configuration changes can cause significant performance
differences, as the published SPEC results show.
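To make the comparison problem concrete, here is a minimal sketch in Python, using entirely made-up numbers rather than real SPEC results, of how spec-sheet elements and benchmark scores can disagree: the server that wins on cores and clock speed does not necessarily win on measured throughput.

```python
# Hypothetical spec-sheet figures and benchmark scores for two servers.
# The benchmark numbers are made up for illustration; they are NOT
# real SPEC results.
servers = {
    "server_a": {"cores": 16, "clock_ghz": 2.4, "benchmark_score": 310.0},
    "server_b": {"cores": 8, "clock_ghz": 3.0, "benchmark_score": 335.0},
}

for name, s in servers.items():
    naive = s["cores"] * s["clock_ghz"]           # spec-sheet measure: cores * GHz
    per_core = s["benchmark_score"] / s["cores"]  # measured throughput per core
    print(f"{name}: cores*GHz = {naive:.1f}, "
          f"benchmark = {s['benchmark_score']:.1f}, per core = {per_core:.2f}")

# server_a wins on the naive cores*GHz measure (38.4 vs 24.0), while
# server_b posts the higher benchmark score -- exactly the kind of
# disagreement that makes a uniform measure hard to define.
```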
Recently, the capacity
planning problem has gained another dimension for companies planning to move to the
cloud. Whether public or private, cloud computing provides a great degree of
flexibility for a company's IT operations. However, the decision to move is not easy for
companies used to keeping IT resources “in-house.” Even setting aside the
overhead, accessibility, privacy, security, and legal issues that come with the
cloud, capacity planning by itself becomes a problem that is complicated on
multiple fronts. Comparing the performance of existing hardware was already not
straightforward; the cloud poses a much bigger challenge, since black boxes of
resources, located somewhere around the world and outside the company's control,
await evaluation and configuration by a company that is new to this space.
In reality, the best way
to compare the performance of the cloud and in-house hardware would be after the
fact. However, almost no company has the luxury and resources to make such a
move to the cloud just to see how it would perform. Therefore,
strategic IT capacity planning comes into the picture as the savior of budget,
time, and energy. But our earlier question remains unanswered, now at a
larger scale: “What should be the measure of performance for comparing
the cloud with the hardware?” There are ongoing attempts to compare
cloud providers. CloudHarmony.com provides good performance indicators for alternative cloud
providers, and judging by the reviews, its performance unit, the “CCU,” is well
regarded in the business. So one link in the chain is still missing before we have a good
starting base for comparing hardware with the cloud: a
relation between SPEC and CCU. I expect it won't be long before we
see attempts to define and measure this relation.
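To sketch what defining such a relation could look like: assuming we had a set of configurations measured under both a SPEC benchmark and CloudHarmony's CCU, a simple proportional fit would give a first-cut conversion factor. The (SPEC, CCU) pairs below are fabricated purely for illustration; this is not a real conversion.

```python
# Hypothetical (SPEC score, CCU) pairs for configurations measured
# under both systems. These numbers are fabricated for illustration.
pairs = [(120.0, 9.5), (210.0, 17.0), (305.0, 24.0), (410.0, 33.5)]

# Least-squares fit of a proportional model CCU ~= k * SPEC,
# forcing the line through the origin, since zero compute should
# score zero in both units.
k = sum(s * c for s, c in pairs) / sum(s * s for s, _ in pairs)
print(f"estimated CCU per SPEC point: {k:.4f}")

def spec_to_ccu(spec_score: float) -> float:
    """Convert a SPEC score to an estimated CCU using the fitted ratio."""
    return k * spec_score

# E.g., an in-house server with a hypothetical SPEC score of 250
# would map to roughly this many CCUs of cloud capacity:
print(f"SPEC 250 -> ~{spec_to_ccu(250.0):.1f} CCU")
```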
As strategic IT capacity planning becomes a major
attraction, the tools to enable it on a larger scale are also becoming
available. There is a lot to come next on this subject. Optimization and cost
minimization will, and should, follow every capacity planning effort to get the
most benefit out of it. Whether in the cloud or on the ground, the key to all
these strategic efforts is a uniform, homogeneous measure of
performance. To resolve this issue, Gravitant has developed a unique bottom-up
approach in which performance is proportional to the expected computational power
of the hardware or cloud configuration. We will talk about this approach
and its outcomes in more detail in our coming blog posts.
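In the meantime, as a rough illustration only, here is my own simplified sketch of what a bottom-up, proportional-to-compute capacity measure could look like. This is not Gravitant's actual model; every node, number, and efficiency factor in it is hypothetical.

```python
# A simplified, hypothetical bottom-up capacity measure: each node
# contributes cores * clock * an efficiency factor, and a
# configuration's capacity is the sum over its nodes. This illustrates
# the idea of a measure proportional to expected computational power;
# it is NOT Gravitant's actual model.
from dataclasses import dataclass

@dataclass
class Node:
    cores: int
    clock_ghz: float
    efficiency: float  # hypothetical per-architecture scaling factor

    def compute_power(self) -> float:
        return self.cores * self.clock_ghz * self.efficiency

def configuration_capacity(nodes: list[Node]) -> float:
    """Total expected computational power of a hardware or cloud config."""
    return sum(n.compute_power() for n in nodes)

# Two hypothetical configurations: four in-house servers vs. ten
# smaller cloud instances.
in_house = [Node(cores=8, clock_ghz=3.0, efficiency=0.9) for _ in range(4)]
cloud = [Node(cores=4, clock_ghz=2.6, efficiency=0.8) for _ in range(10)]

# With one uniform measure, the two configurations become directly
# comparable, which is the whole point of the exercise.
print(f"in-house capacity: {configuration_capacity(in_house):.1f}")
print(f"cloud capacity:    {configuration_capacity(cloud):.1f}")
```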