Tuesday, December 20, 2011

What Do We Mean by Cloud?

From Gravitant's blog.


“In all the ambiguity of what adds value to the Cloud or what facilitates the Cloud, Gravitant sits at the intersection of both, which makes it a pure Cloud company with all the experience, expertise, and solutions built around the Cloud.”

I’ve been writing mostly about what we’ve been developing for and around the Cloud at Gravitant recently. Now is the time to say a little about what’s being said and done about the Cloud outside of Gravitant. I do not intend to analyze specific articles; rather, I want to present an overall picture of the impression I get of what is out there and where Gravitant stands in that picture.
As the Cloud gains hype and shapes the next generation of IT and what the Internet consists of, it is getting a great deal of attention from the actors of the sector and beyond. While the Cloud defined itself during its construction with a bottom-up approach, the new actors of the Cloud are now trying to define or redefine it with a top-down view.
The concept of IT resource sharing can be dated back to the mainframe, the Internet, VMware, or EC2, depending on your perspective. However, the name “Cloud” (which is cleverly chosen, by the way) definitely comes after the commoditization of IT resources, which is very recent. Before the Cloud became the “Cloud”, the standards of traditional IT gave direction to all innovative efforts toward the Cloud. These efforts were very technical and mostly motivated by infrastructure-oriented improvements. Later, the commoditization of IT resources required the business model to be well defined. Although many technical, infrastructural advancements are still being made, probably most of the focus now is on defining the business of the Cloud.
I have read many blog articles, white papers, and research papers about the Cloud, in addition to the web content of cloud companies. If there is one thing common to all of them, it is that what exactly could be labeled as Cloud is not very clear. As an Analytics professional, I see the same kind of confusion among my colleagues: most of the time, the boundaries of the field of Analytics are not very clear either. This makes sense in both cases, because the definitions of both businesses are still works in progress. However, I believe certain examples can draw a more indicative line around what could be called a pure Cloud effort.
Most of the work branded as a Cloud effort is actually the conversion of existing desktop software to SaaS. In particular, if you search for the keywords “Cloud” and “Analytics”, you will see many analytics tools offered as SaaS. Although I believe every type of Cloud effort is a brick in the wall of a whole Cloud environment, I think we should start distinguishing between Cloud efforts made “for” the Cloud and Cloud efforts made by “facilitating” the Cloud. To give an example: if you convert a management application to SaaS, you are “facilitating” the Cloud; if that management application is used to manage your Cloud resources, it is an effort made “for” the Cloud. Although there is a considerable gray area at the intersection of the two, I hope the example makes the distinction clear.
Where does Gravitant stand at this intersection? First of all, Gravitant is an established Cloud brokerage company, listed in Gartner’s recent report on Cloud brokerage companies. NIST defines a cloud broker as “…an entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers.” In light of this definition, Gravitant’s CloudMatrix and CloudWiz tools manage both traditional IT resources and Cloud resources end to end, from sourcing to provisioning and monitoring. They include powerful, intelligent capacity planning, advanced monitoring, and advanced analytics capabilities that enable enterprises to plan the capacity of their IT resources, on the Cloud and in-house, both strategically and tactically, while efficiently analyzing the huge amounts of data collected from those resources and proposing the most effective Cloud Analytics solutions. All these efforts are made for the Cloud, to make it a more manageable and less costly environment for the IT needs of enterprises.
On the other side, Gravitant’s major Cloud brokerage and management tools, CloudMatrix and CloudWiz, are user-friendly, fast, and smart SaaS applications. They naturally run on the Cloud efficiently, reliably, and securely. Gravitant also runs all its other applications and internal IT resources on the Cloud. So Gravitant facilitates the Cloud and has first-hand experience as a Cloud user.

Gravitant both adds value to the Cloud and uses it for its own benefit. All these Cloud-centric activities make Gravitant a pure Cloud company. Gravitant’s Cloud network grows day by day and already includes Amazon, Terremark, Savvis, Rackspace, IBM, and others. There is a lot to learn from Gravitant’s cloud experience. If you have any ideas, thoughts, or questions to add to this discussion of what is “for” the Cloud and what is “facilitating” the Cloud, please respond to this post or contact us so that we can share the intellectual part of the Cloud experience together.

Monday, December 12, 2011

Cloud Deployment Tree

Cloud deployment models span a broad spectrum, and every organization ends up with a unique combination. Follow this cloud deployment tree to identify the combination that best suits your requirements.

We have intentionally avoided industry terminology in the tree due to the lack of standardization. However, the legend can be used to map each combination to commonly used industry terms (as of today). The legend also shows the industry leaders for each combination.

This is the very first step in Cloud Assessment.  The next step is to determine if your application would even be feasible in the cloud.  Click here to see if your application would be a good fit in the cloud...

Wednesday, December 7, 2011

Gravitant included in latest Gartner Report

What makes a Cloud Services Broker (CSB)?
Gartner identifies three primary roles that qualify a company to be a CSB:

  • Aggregation (across VARs, IT distributors, etc.)

  • Integration (with SIs, etc.)

  • Customization (for SIs, PS, etc.)

"As both an enabler and a cloud brokerage, Gravitant pulls together a number of the capabilities that IT organizations, VARs and SIs, and public cloud providers can use to extend the value of their offerings." - Daryl Plummer (Gartner Analyst)

Full report here...

Thursday, November 17, 2011

Cloud Capacity Allocation: Reserved vs. On-Demand Capacity, or How I Got Over the Black Friday Rush

From Gravitant's blog.


The shopping season has just arrived, and who knows how much pressure is on the shoulders of the IT administrators of e-commerce companies. Competition is tough: if a shopper has to wait more than a couple of seconds to view a deal, he or she can easily move on to another website. So the clock is ticking, and every e-commerce website is supposed to have the resources to fulfill the oncoming demand. Thanks to the cloud, these problems are behind us. And thanks to Gravitant’s Advanced Analytics team, the related cost-cutting solutions are provided to enterprises as part of our cloud offering.

The commoditization of computing via the cloud allows IT demand to be fulfilled just in time. Ideally, it is possible to acquire the required resources whenever demand occurs; absent budget constraints, this would be the perfect replenishment policy for IT resources. However, even putting technical difficulties and lead times aside, supplying demand purely on arrival is neither practical nor smart once cost and the suppliers’ alternative pricing models are considered. Most cloud providers offer lower rates for bulk cloud procurement.

Practical concerns and budget considerations force enterprises to make a three-dimensional IT capacity procurement decision in the cloud. The following are the right questions to ask while making these decisions:

1.    How much capacity to reserve at the beginning?
2.    When to order additional capacity?
3.    How much additional capacity should be ordered each time?

Among these three questions, the last two are the easiest to answer once we know the answer to the first. The combined answer to the last two is simply to order the excess demand whenever it occurs. So the first question remains: what should the reserved capacity be?
If we assume the preferred cloud provider prices its cloud uniformly, meaning it implements no bulk pricing and there are no fixed costs per order and no lead times, then it only makes sense to order the demand quantity whenever demand is realized, hence zero reserved capacity. However, the real world does not work exactly this way, so we have to keep some reserved capacity to minimize cost and to deal with uncertain technical and business problems.
There are a couple of alternative approaches to solving this problem with operations research and advanced analytics techniques: we can either use a deterministic optimization approach or formulate the problem as a Markov Decision Process to account for stochasticity. In the next blog article on this topic, I will discuss these alternatives in detail and give an idea of the solutions Gravitant offers enterprises on the issue of reserved vs. on-demand capacity in the cloud.
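To make the trade-off concrete, here is a minimal deterministic sketch: it picks a reserved-capacity level by simulating total cost over a toy demand forecast. The rates and the demand profile are illustrative assumptions, not actual provider pricing.

```python
# Hypothetical sketch: choosing a reserved-capacity level by simulating
# total cost over a demand forecast. The rates and the demand profile
# are illustrative assumptions, not actual provider pricing.

RESERVED_RATE = 0.06   # $ per unit-hour, discounted for commitment (assumed)
ON_DEMAND_RATE = 0.10  # $ per unit-hour (assumed)

def total_cost(reserved, hourly_demand):
    """Reserved capacity is paid for every hour whether used or not;
    demand above the reserved level is served on demand."""
    cost = 0.0
    for d in hourly_demand:
        cost += reserved * RESERVED_RATE               # committed capacity
        cost += max(d - reserved, 0) * ON_DEMAND_RATE  # burst capacity
    return cost

# Toy demand profile: a quiet baseline with a Black Friday style spike.
demand = [40] * 20 + [120] * 4
best = min(range(0, 121, 10), key=lambda r: total_cost(r, demand))
print(best, round(total_cost(best, demand), 2))
```

At these assumed rates, the best reserved level lands at the steady baseline demand, with the spike served on demand; the stochastic version of this question is what the Markov Decision Process formulation addresses.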

Wednesday, November 2, 2011

Part 2 - Your application would be a GREAT FIT in the Cloud if...

1. Your application is fairly isolated from other applications

Typical examples of isolated applications are CRM, messaging, and other custom built applications.  On the other hand, traditional ERP applications are tightly woven with others and hence might require re-architecting the application to fit the cloud.

Alternative: In most cases your application is probably somewhere between isolated and completely integrated with other applications.  In this case, here are some options based on the nature of the dependency:
  1. Communication channel dependency - Create a distributable communication channel that is secure
  2. Architecture dependency - Make a copy of the shared layer for the cloud
  3. Single sign-on security - Upgrade single sign-on to support remote sign-on
If none of these options are feasible, then either both applications would need to be migrated to the cloud or both should remain as is.

2. Your application architecture is cloud friendly

Any application on an x86 platform would work well in the cloud regardless of the operating system.  If the application is on some platform other than x86 and you still want to go cloud, then you would need to re-architect the application to the x86 platform before you begin migration.

Also, if the online architecture is web-based or client-server, then your application is more cloud friendly. Moreover, if the online architecture is decoupled from the batch architecture, then your application is even more cloud friendly.

Alternative: If your application is on any other platform (such as Sun Sparc, Power PC, or Mainframe), then it might be a better candidate for managed hosting.  Another reason to opt for managed hosting is if your servers require software licenses that can only be tied to physical cores.

3. Your application security requirements are satisfied by FISMA compliance

Some cloud providers are FISMA (Federal Information Security Management Act) certified, which ensures they satisfy all the federal security standards as measured by NIST.  In addition to FISMA compliance, security can be further enhanced by engaging managed security services on the cloud (like netForensics).

Alternative: If it is necessary for all the data and/or hardware to be located on-site, then a private cloud or a public/private hybrid may be an option.

<- Back to Part 1 - Your application may NOT YET be ready for the Public Commodity Cloud if...

Tuesday, October 11, 2011

Creating a Virtual Machine on/off schedule

“Wouldn’t it be nice to have a schedule to automatically turn VMs on or off at certain times of the day?” I’ve heard this from many of our clients, and it is definitely an interesting optimization problem.  Since most providers price by VM-hours, one always needs to make sure not to end up with VM sprawl.  The fact that licensing on these VMs is also priced by the hour doesn’t help either.  So, yes, VM scheduling would be great, but where do we start?

Actually, it isn't very complicated because most of us use load balancers anyway.  The load balancers are monitoring VM utilization (through connection count) and can thus keep track of times when all the VMs are underutilized.  Dr. Zarifoglu, in his load balancing article, identified thresholds for turning VMs on or off based on the workload.  So, turning one or more VMs off is simply an additional step after load balancing!

This leads to two possible approaches for VM Scheduling:
Dynamic scheduling – where VMs are automatically turned on or off based on demand and threshold policies, or
Static scheduling – where one would simply monitor VM utilization over time and come up with a user defined schedule that doesn’t change.

Obviously, the best approach would be to have a hybrid solution where the static schedule is automatically modified at fixed time intervals (say weekly) and is executed only after being approved by an administrator.  See Gravitant’s CloudMatrix – Policy Manager for more details on managing VMs in the cloud.
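As a minimal sketch of the static approach, assuming a made-up low-utilization threshold and a toy utilization history, one could derive a fixed daily schedule like this:

```python
# A minimal sketch of static scheduling: derive a fixed daily on/off
# schedule from observed hourly utilization averages. The threshold and
# the history below are assumed for illustration.

LOW_MARK = 0.20  # below this average utilization the burst VM stays off

def static_schedule(hourly_utilization):
    """hourly_utilization: 24 average readings (0.0-1.0) for a burst VM.
    Returns a list of 24 booleans: True means keep the VM on that hour."""
    return [u >= LOW_MARK for u in hourly_utilization]

# Toy history: idle overnight, busy during business hours.
history = [0.05] * 8 + [0.60] * 10 + [0.10] * 6
schedule = static_schedule(history)
print(sum(schedule))  # number of hours per day the VM is kept on
```

In the hybrid approach, the history window would roll forward (say weekly), the schedule would be recomputed, and an administrator would approve it before execution.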


The caveat is that most cloud providers don’t allow simply turning VMs on or off (except for OpSource and Terremark).  Most providers will charge for stopped VMs as well, unless the VM is ‘deleted’.  So, an alternate process for turning a VM off (with the expectation of turning it on again at some point in time in the future) is to first create an image of the VM and save it in the backup storage space, and then ‘delete’ the VM.  In order to turn this VM back on, a new VM needs to be created and then the image from backup storage needs to be installed on the new VM before it can become functional.


As a result, the process of turning VMs on or off may not be time and cost efficient.  However, here are a few alternate ways to do this:
(1) Go with a cloud provider like OpSource that allows VMs to be turned on or off at the click of a button and doesn’t charge for VMs that are turned off. (Note that there is a small fee for storage space occupied by the VM).
(2) Go with a cloud provider like Terremark that doesn’t even price by VM.  However, they would still charge for the OS licensing and storage if the VM was turned on at any point in time during the month.
(3) Go with any cloud provider but subscribe to an automated backup and restore service. Gravitant expects to provide this capability in its CloudMatrix console in early 2012.

For more information, go to www.gravitant.com.

Monday, September 26, 2011

An Analytic Approach to Solving Load Balancing Problem in the Cloud

From Gravitant's blog.


The Cloud moves IT management into a new dimension. In traditional IT, most cost is generated in the procurement of resources, provisioning, and maintenance; by nature, this cost generation is fairly static. The fixed cost of hardware and data centers and stable variable maintenance and provisioning costs contribute to this static cost structure. The Cloud’s dynamic nature affects the cost management of enterprises in the Cloud as well. The pricing strategies of cloud providers follow the principle of the cloud as a utility: although many pricing options have a fixed portion for reserved capacity, usage-based cost is always a significant and varying part of enterprise cloud costs. This dynamic cost structure increases the importance of intelligent provisioning and management.
My previous article in Gravitant’s blog, “Cloud Sourcing Optimization: A Conceptual Model Discussion”, introduced Gravitant’s optimization efforts in Cloud analytics. The next article in the series investigates analytic approaches to solving the load balancing problem.
The underlying problem is simply to determine when to turn off a virtual machine (VM) due to low utilization, without allowing the utilization of any VM to exceed a certain threshold, by turning on a new VM when needed. The aim is to keep VM utilization within a reasonable band so as to minimize provisioning cost while satisfying workload demand. The question is: what are the “optimal” high-mark and low-mark utilization values at which to turn VMs on and off?
The obvious decision variables in a corresponding optimization problem are the high-mark utilization value, the low-mark utilization value, whether an existing VM is turned off due to low utilization, and whether a new VM is created due to the high utilization of any VM. Each turned-off VM creates an extra load of work on the remaining VMs, and each new VM shares the load of a highly utilized VM. The objective is to minimize the total cost of provisioning. The constraints can be summarized in three groups:
1- High-mark utilization: New utilization of the remaining VMs after adding the used capacity of low-utilization VMs should be lower than high-mark utilization value.
2- Low-mark utilization: Any VM should have a utilization more than low-mark utilization value.
3- New VM creation: If a VM has a higher-than-high-mark utilization, then a new VM is created.
Because there are both binary and continuous variables, the optimization model is naturally a mixed-integer program. However, since the first set of constraints is quadratic, the exact form is a quadratically constrained mixed-integer program. Some straightforward enumeration over the set of VMs helps linearize the constraint, so we end up with a mixed-integer linear programming model.
Although this static model may seem restrictive in a setting where a varying amount of virtual machine demand must be met under budget limitations, it can roll forward over time and transform into a dynamic model, which fits very well with the span of provisioning and the nature of the Cloud. The utilization band in which VMs are allowed to operate then changes dynamically, giving decision makers a flexible space.
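As an illustration of the policy itself, here is a greedy heuristic sketch (not the MILP formulation) with assumed high-mark and low-mark values: it folds idle VMs onto the survivors when no survivor would cross the high mark, and spawns a new VM for any VM over the high mark.

```python
# A greedy sketch of the high-mark / low-mark policy described above
# (a heuristic illustration, not the MILP formulation). The mark
# values are assumed for the example.

HIGH_MARK = 0.80
LOW_MARK = 0.20

def rebalance(utilizations):
    """utilizations: per-VM utilization fractions. Returns the new list
    after consolidating low-mark VMs and splitting high-mark ones."""
    vms = sorted(utilizations)
    # Turn off VMs below the low mark, spreading their load over the
    # survivors, as long as no survivor would cross the high mark.
    while len(vms) > 1 and vms[0] < LOW_MARK:
        load = vms.pop(0)
        share = load / len(vms)
        if any(u + share > HIGH_MARK for u in vms):
            vms.insert(0, load)  # consolidation would overload; stop
            break
        vms = sorted(u + share for u in vms)
    # Create a new VM for any VM over the high mark.
    out = []
    for u in vms:
        if u > HIGH_MARK:
            out.extend([u / 2, u / 2])  # new VM takes half the load
        else:
            out.append(u)
    return sorted(out)

print(rebalance([0.10, 0.50, 0.60]))  # idle VM folded into the other two
```

The MILP version makes the marks themselves decision variables rather than fixed constants; the heuristic above only evaluates one given pair.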
This article reveals the tip of the iceberg of the analytic solutions which Gravitant offers as a cloud brokerage and management company for the enterprises. Our set of analytic solutions that help enterprises move into and operate in the Cloud will continue to grow and evolve.

Tuesday, September 13, 2011

Part 1 - Your application may NOT YET be ready for the Public Commodity Cloud if...

1. Your application demand is very stable and doesn't fluctuate much
If your servers are not underutilized, then you are better off keeping things the way they are, unless you want to plan for disasters or other unplanned events.  This is because cloud pricing models are geared toward elastic computing: it would be more expensive to provision 4 VMs around the clock than to run 4 servers in-house.  The benefits of provisioning fewer VMs and using burst capability are only realized if demand actually fluctuates.

Alternative: Look for cloud providers (like Terremark) that price by usage (regardless of the number of VMs provisioned).
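The arithmetic behind this point can be sketched with assumed round-number prices (not actual provider rates): flat demand gives the cloud no elasticity to exploit.

```python
# Illustrative arithmetic for the point above: flat demand gives the
# cloud no elasticity to exploit. All prices are assumed round
# numbers, not actual provider rates.

VM_HOURLY = 0.30        # $ per VM-hour (assumed)
HOURS_PER_MONTH = 720

# Stable demand: 4 VMs on all month, matching 4 in-house servers.
stable = 4 * VM_HOURLY * HOURS_PER_MONTH

# Fluctuating demand: 2 baseline VMs plus 2 burst VMs for 100 hours.
elastic = (2 * HOURS_PER_MONTH + 2 * 100) * VM_HOURLY

print(stable, elastic)  # elasticity pays off only when demand varies
```

Whether the stable figure beats the in-house equivalent depends on your own server costs, which is exactly why flat workloads often stay put.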

2. Your application's licenses can only be tied to physical cores
Many software licenses have not yet made the shift to the world of virtual machines.  For example, Oracle licenses can only operate on a fixed physical core, whereas virtualization technology was developed precisely to separate the physical layer from the software layer.  If Oracle is installed on a VM, the VM would be assigned to certain physical core(s) at one point in time and some other core(s) at some other point in time, which would violate Oracle licensing rules.

Alternative: Look for cloud providers (like Savvis) that have managed hosting servers which share a VLAN with their cloud.  In this way, Oracle can be installed on the managed hosting server while the rest of the application can be deployed on the cloud.

3. Your application stores a very large amount of data on the cloud
With storage disks getting cheaper by the day (1 TB for $80), it is becoming increasingly cheaper to store large amounts of data in-house rather than pay for cloud storage every month.  This is because cloud storage is typically priced per GB per month.

Alternative: Share data storage with other customers using Symform (or any other similar technology), which breaks the data up into a number of encrypted parts and then stores them in other customers' data centers.  This gives the benefits of elasticity without paying too much for it.  It also increases the security of data in the cloud, because no one can use the data without having all the parts and being able to decrypt all of them.
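The break-even arithmetic for item 3 can be sketched as follows; the cloud storage rate is an assumed figure, not a quoted price.

```python
# Back-of-the-envelope sketch of the break-even point: months until a
# disk bought in-house beats renting the same capacity from the cloud.
# The cloud rate below is an assumed figure, not a quoted price.

DISK_COST = 80.0          # $ for a 1 TB disk (figure from the post)
CLOUD_RATE_PER_GB = 0.10  # $ per GB per month (assumed)
TB_IN_GB = 1000

monthly_cloud_cost = TB_IN_GB * CLOUD_RATE_PER_GB
breakeven_months = DISK_COST / monthly_cloud_cost
print(breakeven_months)  # in-house wins in under a month at these rates
```

The real comparison would also fold in power, redundancy, and administration costs for the in-house disk, but the raw per-GB gap is what drives the point above.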

4. Your application transfers a large amount of data in or out of the cloud
Most cloud providers price their bandwidth by GBs transferred per month.  This would be very costly for applications that stream large data files on a regular basis.

Alternative: Look for cloud providers that price by Mbps of dedicated network throughput.  This is typically found in enterprise cloud providers like Savvis and Terremark.

5. You do not have the capability of managing your VMs any better than you do your servers
The ease of provisioning VMs as and when necessary can also lead to VM sprawl if not managed appropriately.

Alternative: Subscribe to a cloud management console that can not only auto provision VMs when necessary, but also schedule VMs to be turned off after use.  Gravitant's cloudMatrix uses predictive analytics to create dynamic workload schedules that change over time based on historic demand trends.

Go to -> Part 2 - Your application would be a GREAT FIT in the Cloud if...

Thursday, July 28, 2011

Cloud Sourcing Optimization: A Conceptual Model Discussion

From Gravitant's blog.


Cloud computing brings new cost-cutting, flexibility, and elasticity opportunities to enterprises. While these are the main marketing features of the cloud, evaluating and comparing vendors has not been straightforward so far. Thanks to Gravitant’s CloudWiz, we are able to quantify vendors’ features and evaluate and compare them in a practical, analytical, and user-friendly manner. As the cloud space grows and decision-making steps become more complicated, we will need to add more intelligence to our cloud-migration decisions.
Potential optimization problems arise in several parts of the cloud space, such as the cloud sourcing problem, the enterprise capacity planning problem, the vendor capacity planning and scheduling problem, the vendor load balancing problem, and so on. In today’s blog, I will elaborate on how to view the cloud sourcing problem as a conceptual optimization model.
Once an enterprise intends to move to the cloud, it first needs to translate its current usage and needs into cloud requirements. Some of these requirements are quantifiable, while some are not. This task is followed by matching the requirements against multiple cloud vendors for evaluation and comparison. CloudWiz takes care of all these tedious steps in a fast, intelligent, and user-friendly manner. The cloud sourcing optimization problem is defined over these steps.
In our problem space, there is one customer facing multiple cloud vendors. The decision variable is what portion of each computing need to source from each vendor.
What are the potential constraints of the cloud sourcing problem? Let’s list them.
1- Supply-demand: All demand should be satisfied.
2- Hard capabilities: Selected set of vendors should carry all the unquantifiable capabilities which are core to functioning of the enterprise.
3- Soft capabilities: Selected set of vendors should carry a certain fraction of the unquantifiable capabilities which are secondary to functioning of the enterprise.
4- Quality of service: Each selected vendor should satisfy a certain level of quality of service.
The first constraint ensures there is no lack of supply. The second eliminates all infeasible vendors from the decision set. The third grants the enterprise some flexibility in decision making. The fourth ensures consistent quality of service.
What is the objective? It should definitely be measured in dollars, since we have so far kept perhaps the most important aspect, cost, out of scope. The proposed objective function is the minimization of total procurement cost. Cloud vendors have varying pricing schemes, so building such an objective function is a tedious task. From determining the constraints to constructing the objective, CloudWiz provides all the inputs for such an optimization model in a smart and clean way.
Let us speculate about what the optimal solution would look like. Obviously, if a single vendor covers all the hard capabilities and enough of the soft capabilities at the minimum cost, that vendor is the winner. Otherwise, the customer goes through the feasible vendors, starting with the lowest-priced one, and picks those with all the hard capabilities, a sufficient number of soft capabilities, and a satisfactory quality of service, allocating demand based on cost. Although the model is defined as generically as possible, it can still be customized for any enterprise under any conditions.
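The greedy selection described above can be sketched as follows; the vendor data, capability names, and prices are all made up for illustration, and the soft-capability constraint is omitted for brevity.

```python
# A sketch of the greedy selection described above. The vendor data,
# capability names, and prices are all made up for illustration; soft
# capabilities are omitted for brevity.

def allocate(demand, vendors, hard, min_qos):
    """vendors: dicts with 'name', 'price', 'capacity', 'capabilities'
    (a set), and 'qos'. Returns {vendor name: units allocated}."""
    # Constraints 2 and 4: hard capabilities and minimum quality of service.
    feasible = [v for v in vendors
                if hard <= v["capabilities"] and v["qos"] >= min_qos]
    plan = {}
    # Constraint 1 and the objective: fill demand from the cheapest up.
    for v in sorted(feasible, key=lambda v: v["price"]):
        if demand <= 0:
            break
        take = min(demand, v["capacity"])
        plan[v["name"]] = take
        demand -= take
    if demand > 0:
        raise ValueError("demand cannot be met by the feasible vendors")
    return plan

vendors = [
    {"name": "A", "price": 5, "capacity": 60,
     "capabilities": {"vpn", "backup"}, "qos": 9.0},
    {"name": "B", "price": 4, "capacity": 50,
     "capabilities": {"vpn"}, "qos": 9.5},
]
print(allocate(100, vendors, hard={"vpn"}, min_qos=8.5))
```

A full formulation would treat this as a linear program over allocation fractions, with the soft-capability fraction and real pricing schedules as additional constraints.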
Stay tuned for future versions of CloudWiz, powered by the enhanced optimization intelligence of Gravitant’s Advanced Analytics group. I will share more potential optimization problems in coming blogs.

IT Capacity Planning in the Cloud and on the Ground

From Gravitant's blog.


Capacity planning is a hot topic in the IT supply chain. It is a key requirement for companies making strategic IT decisions. The main challenge appears to be the lack of a uniform, homogeneous measure for comparing IT resources. Take the example of server capacity planning: what makes one server better than another? CPU power and the number of processors and cores are certainly key elements of a comparison. However, benchmarking results do not suggest a straightforward comparison between these elements. Sun has been using a benchmarking approach (what they call “M-values”) for their servers. SPEC values are the most comprehensive references for benchmarking against the competition. However, at the end of the day, all these values are declared and endorsed by the companies for their own servers. Also, experimental conditions and minor configuration changes may cause significant performance changes, as can be seen in published SPEC results.
Recently, the capacity planning problem has gained another dimension for companies planning to move to the cloud. Whether public or private, cloud computing provides a large degree of flexibility for companies’ IT operations. However, it is not easy for companies used to keeping IT resources “in-house” to decide to move to the cloud.  Even ignoring all the overhead, accessibility, privacy, security, and legal issues that come with the cloud, capacity planning by itself becomes a multi-fold complicated problem. While it was already not straightforward to compare the performance of existing hardware, cloud capacity planning brings a much bigger challenge: black boxes of resources somewhere around the world, outside the company’s control, await evaluation and configuration by a company that is new to this space.
In reality, the best way to compare the performance of the cloud and in-house hardware would be after the fact. However, almost no company has the luxury and resources to make such a move to the cloud just to see how it would perform. Therefore, strategic IT capacity planning comes into the picture as the savior of budget, time, and energy. But our prior question remains unanswered, now on an even larger scale: what should the measure of performance be for comparing the cloud and the hardware? There are ongoing attempts at comparing cloud providers. Cloudharmony.com provides some good performance indicators for alternative cloud providers, and their performance unit, the “CCU”, is well regarded in the business, judging by the reviews. So only one link of the chain is missing for a good starting base for hardware-to-cloud comparison: a relation between SPEC and CCU. I expect it won’t be long before we see attempts to define and measure this relation.
As strategic IT capacity planning becomes a major attraction, the tools that enable it on a larger scale are also making themselves available. There is a lot to come on this subject. Optimization and cost minimization will, and should, follow every capacity planning attempt to extract the most benefit from it. Whether in the cloud or on the ground, the key to all these strategic efforts is a uniform, homogeneous measure of performance. To resolve this issue, Gravitant has developed a unique bottom-up approach in which performance is proportional to the expected computational power of the hardware or cloud configuration. We will discuss this approach and its outcomes in more detail in coming blogs.
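As a purely speculative sketch (not Gravitant's actual method), a uniform measure could normalize each configuration's benchmark score against a chosen reference machine, so hardware and cloud offerings land on one scale. The scores below are made-up numbers.

```python
# A purely speculative sketch (not Gravitant's actual method): express
# any configuration's power in units of a chosen reference machine, so
# hardware and cloud offerings land on one scale. Scores are made up.

REFERENCE_SCORE = 50.0  # benchmark score of the reference box (assumed)

def compute_units(benchmark_score, node_count=1):
    """Uniform measure: nodes times score, relative to the reference."""
    return node_count * benchmark_score / REFERENCE_SCORE

print(compute_units(125.0))     # one big server: 2.5 reference units
print(compute_units(40.0, 10))  # ten small cloud instances: 8.0 units
```

The hard part, as the post argues, is producing comparable benchmark scores (SPEC vs. CCU) in the first place; the normalization itself is the easy step.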

Wednesday, July 27, 2011

Cloud Computing - 58% Average Savings Per Month

Application: CRM
Environment: Production
Capacity: 40 Web/App Servers, 12 DB Servers, 8 VPN Servers, 5TB Storage, 10 Mbps Bandwidth
Demand: 1000 concurrent users, 3.0% growth per year

Scenario results from CloudWiz:

To run additional scenarios (for free), please go to http://www.gravitant.com/cloudwiz-home.html

*Note that these results are simply for comparison and decision support.  All cost and savings results are based on publicly available data, and Gravitant is not responsible for any discrepancies in the numbers shown above.  To increase the accuracy of the results from CloudWiz, please contact us to schedule a calibration meeting with our Professional Services group.

Wednesday, June 29, 2011

Top 4 Cloud Providers on CloudWiz

Cloud providers seem to be popping up every day in some part of the world, but a few are emerging as the key players by dominating all the others in one or more aspects.  CloudWiz - the free cloud capacity and provider comparison tool - has enabled an apples-to-apples comparison of a number of providers, thus revealing the winners.

Quality of Service - Savvis
A number of compute, network, and storage benchmarks are run by CloudHarmony.org to evaluate the true performance of cloud providers.  Savvis outperforms all the other providers with a QoS rating of 9.71, with GoGrid following close behind at 9.64.  The QoS ratings are given in terms of GQU (Gravitant Quality Units), which are explained on Gravitant's corporate website.

Infrastructure Cost - GoGrid
While many commodity cloud providers price their cloud services at very low on-demand rates, GoGrid offers the best value for money by providing enterprise-class services at close-to-commodity prices.  GoGrid started off as a commodity cloud provider but is quickly emerging as a strong competitor among enterprise cloud providers.  A large application of 400 GCUs (Gravitant Compute Units) costs $22,491 per month at GoGrid, while Rackspace charges $24,744 per month.  All other providers charge $40,000 per month or more for the same compute capacity.

Total Cost - Rackspace
The total cost includes infrastructure as well as operations and support cost.  While many companies would migrate to the cloud for infrastructure savings, they would end up investing quite a bit on operations and support.  This is where Rackspace truly stands out from all the others due to their 'fanatical' customer support.  As a result, the total cost for 400 GCUs of cloud capacity with Rackspace is $80,941/month (for infrastructure as well as operations and support), while GoGrid charges $84,448/month.  Amazon comes in next at $103,928/month and all the others charge $125,000 or more.

Cloud Management - Terremark
We've all heard of VM sprawl and how monitoring and governance are of utmost importance in a migration to the cloud.  Terremark has made a name for itself with a very easy-to-use management console, as well as its standardization on the jclouds APIs.  Furthermore, Terremark's pricing is package-based as opposed to VM-based, which alleviates the need to scrupulously monitor and govern at the VM level.

So, it seems that GoGrid and Rackspace dominate from a cost perspective, while Savvis and Terremark dominate on QoS and cloud management.  This shouldn't come as a surprise, since GoGrid and Rackspace are commodity cloud providers while Savvis and Terremark are enterprise cloud providers.

Now, what if we could mix and match?  What if we could migrate our mission-critical LOB applications to Savvis or Terremark for the QoS and monitoring features, and at the same time deploy our email exchange and some of our dev and test environments on GoGrid or Rackspace?  Well, that's where the CloudWiz tool plugs into Gravitant's CloudMatrix management console, which allows a consumer to provision Virtual Data Centers from different providers and then monitor and govern them across the board.

See the press release at http://businesscloudnews.com/applications/351-cloudwiz-makes-cloud-evaluation-easy.html

For free access to CloudWiz or for more info on CloudMatrix, please email

Wednesday, June 1, 2011

CloudWiz (beta) - A wizard-based decision tool for going cloud!

Gravitant announces the release of CloudWiz on June 1st, 2011.  Using this tool, potential customers can evaluate the option of going cloud in less than 5 minutes, thanks to the work of the Advanced Analytics group at Gravitant.  Cloud hosting providers can also use the tool to onboard new clients while demonstrating ROI benefits to the office of the CFO.  The tool reduces the decision of going cloud to a simple three-step wizard.

Step 1. Plan Capacity
Step 2. Compare Vendors
Step 3. Analyze ROI
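The three steps above can be sketched as a simple pipeline.  This is not the actual CloudWiz API; the function names, the flat $/GCU rate card, and the provider names are all hypothetical, used only to show how the steps chain together:

```python
# Hypothetical sketch of the three-step CloudWiz flow.  Function names,
# providers, and rates are illustrative, not the actual CloudWiz API.

def plan_capacity(workload_gcus):
    """Step 1: size the required capacity (in GCUs)."""
    return {"gcus": workload_gcus}

def compare_vendors(plan, rate_card):
    """Step 2: rank providers by monthly cost for the planned capacity."""
    costs = {p: rate * plan["gcus"] for p, rate in rate_card.items()}
    return sorted(costs.items(), key=lambda kv: kv[1])

def analyze_roi(current_monthly_cost, cloud_monthly_cost):
    """Step 3: estimate monthly savings from migrating."""
    return current_monthly_cost - cloud_monthly_cost

plan = plan_capacity(400)
rate_card = {"ProviderA": 56, "ProviderB": 62}  # $/GCU/month, illustrative
ranked = compare_vendors(plan, rate_card)
best_provider, best_cost = ranked[0]
print(best_provider, analyze_roi(30_000, best_cost))  # ProviderA 7600
```

The real tool would of course plan capacity from workload profiles rather than a single GCU number, but the plan → compare → ROI chain is the same.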

"This tool truly eliminates the mist around the cloud!" - Robert Erickson, EVP of Product Management.

Cloud providers interested in adding themselves to the list are requested to send an email to analytics-support@gravitant.com.  Gravitant is also giving out limited time free access to CloudWiz...

Friday, March 18, 2011

Reserved Capacity vs. Usage-based Capacity

As a cloud broker, Gravitant works with clients to match their needs with what different cloud providers have to offer.

In this process, we have noticed that most of our clients have high transaction volumes in their Dev and Test environments.  In the Production environment, however, transaction volume is low with a lot of variability.  This makes sense, because transaction volume in the Production environment is customer-driven.

And most of the cloud providers we work with charge lower hourly rates for dedicated capacity and higher hourly rates for usage-based capacity.

Therefore, the lowest-cost solution for most of our clients is to go with dedicated capacity for their Dev & Test environments, and usage-based capacity for their Prod environment.

However, this may not be the case for all clients.  It is therefore important to analyze historical transaction volume and utilization for each environment and application type separately in order to identify the optimal combination.  Ultimately, the main objective is to derive the optimal amount of reserved capacity for each environment/application.  More on this topic coming soon...
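The core of this decision can be sketched as a break-even comparison: reserved capacity is paid for every hour whether used or not, while usage-based capacity is paid only for hours actually used.  The hourly rates below are illustrative assumptions, not any provider's actual pricing:

```python
# Minimal sketch of the reserved-vs-usage decision.  Rates are
# illustrative; a real analysis would use historical utilization per
# environment and application type, as noted above.

def cheaper_option(reserved_rate, on_demand_rate, expected_utilization):
    """Compare the hourly cost of always-on reserved capacity against
    paying on-demand rates only for the fraction of hours actually used."""
    reserved_cost = reserved_rate                       # paid every hour
    on_demand_cost = on_demand_rate * expected_utilization
    return "reserved" if reserved_cost < on_demand_cost else "usage-based"

# Dev/Test: steady, high utilization favors reserved capacity.
print(cheaper_option(0.06, 0.10, 0.85))  # reserved
# Prod: low, variable, customer-driven load favors usage-based capacity.
print(cheaper_option(0.06, 0.10, 0.30))  # usage-based
```

The break-even utilization is simply reserved_rate / on_demand_rate (60% in this sketch); environments that run hotter than that are cheaper on reserved capacity.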

I would like to thank Robert Jenkins, CTO of CloudSigma, for his input to this discussion.

Wednesday, March 2, 2011

Free app for Cloud Consulting!

Gravitant has just launched their Cloud Value Planner free app for Cloud Consulting...

Customers can now simulate the capacity and cost impact of replacing their current servers with a private or public cloud.  Consultants can also use this tool to show potential customers the value of going cloud.

Check it out here...

Wednesday, February 9, 2011

Cloud Consulting in an app?

Researchers at Gravitant have successfully converted their Cloud Consulting practice into an app that will soon be available on the company website.  The app, which will be called the Cloud Value Planner, allows users to simulate the capacity and cost impact of replacing their current physical servers with virtual private cloud or public cloud options.

“The Cloud Value Planner is like consulting-in-a-box.  Questions that typically take 60-90 days and thousands of dollars to answer are solved within a few minutes using this app.” – Mohammed Farooq, CEO Gravitant

This simulator, which is still in its beta phase, will be launched as a free app before the end of February 2011.  Please check www.gravitant.com for updates.

Stay tuned for details on specifics of the app…