Rackspace is now competing with Amazon & Google Compute Engine for your Cloud Storage Needs

Rackspace claims “Consistent”, “Reliable”, “Unlimited” block storage in the cloud and is competing head-to-head with Amazon EBS (Elastic Block Store) and Google’s GCE (Google Compute Engine). This is a big boost for Open Source, as they are using the OpenStack framework. However, as with ALL shared infrastructure, the chances of major outages and failures cascading throughout their entire customer portfolio are pretty high.

They offer both SSD and traditional drive storage selections, though their pricing is a bit higher than most, putting them alongside “DreamHost,” who just announced their new “DreamCompute” service earlier this month.

With the list of new Public Cloud providers growing like weeds, you can mark my words: the short-term future means massive acquisitions, failures and consolidation. There just is not a large enough demand for Public Cloud, Private Cloud technology is becoming more reliable, and Major Outages like the recent (again) Amazon outage will continue to happen with the large Public providers. As I’ve stated previously many times, if you’re going to outsource your critical infrastructure to a managed/shared/hosted solution, you had better be prepared to have your business “go down,” just as I tell anyone wishing to purchase and ride a motorcycle: everyone goes down eventually. With more companies building their own private clouds while others migrate off the public cloud hazard, look for the market to really consolidate down to two or three players in the Public Cloud Arena over the next 3 years.

ANOTHER AMAZON Cloud Outage takes down Major Company Businesses

The EVIL Cloud strikes again

As I’ve been saying for 2-3 years now, Cloud Technology is still WAY too early for any company to be using as their “end-all” strategy. To be sure, Public Clouds may be good for Mom & Pop organizations or for large inefficient ones like the US Federal Government, but for any major corporation or even a fast new Startup to throw their critical infrastructure onto Amazon, Rackspace, Google or any other Public Cloud and walk away expecting nothing but rainbows and pretty pictures is foolhardy at best, and downright idiotic and a fireable offense (in my opinion) at worst. I’m sure many Boards of Directors have been screaming bloody murder the past week, and I would not be surprised to see some heads rolling.

The problem is that ANYONE who has been in this business 10 years or longer KNOWS (or should) exactly what will happen. Anyone who has had a hosted server, Hosted DNS, or Hosted Website for ANY length of time KNOWS they eventually go down, they all do. So why ANYONE with any experience would place their core business and their whole company in that situation is beyond my comprehension.

The only thing I can think of, as I’ve also stated previously and have seen firsthand, is that a lot of these new startups (and I consider Reddit, Foursquare, Pinterest, etc. ALL young and inexperienced startups) have young executive staffs (I’m talking ALL in their twenties) and a refusal to hire older, wiser heads to help them with their business and infrastructure strategies. You may think I’m full of hot air here, but believe me, I’ve SEEN IT MANY MANY TIMES over the past few years, folks. I’ve spoken till blue in the face in many cases. In one, my 25 years of experience was over-ruled by a 21-year-old software developer who thought he was an infrastructure specialist and demanded the company use Amazon Cloud Services. Even after a two-week outage at Amazon almost lost them their company, they doubled down with Amazon and Rackspace. The CEO, who was also around 21 years old, was apparently buddies with this software developer, and even though the one older person they had working there (the VP of Software Development, who was in his late 40’s and was the one who recommended they call me in) also advised them to migrate to their own Private Cloud, they decided to ignore our combined experience of close to 50 years.

I cannot tell you how many new startups I’ve interviewed with who have fallen for the “marketing hype” and believe that Amazon is the world’s gift to Cloud; some (I’m not kidding here) even had no idea there was anything other than Amazon! As I’ve said, many people equate Cloud=Amazon, and in my Not So Humble Opinion, that is the major reason many of these companies are in trouble. Some are bailing ship and finally deciding to build their own private cloud infrastructures, and I would bet you anything they finally brought in someone like myself or “Cloud Consulting International” who convinced them of what every technology analyst has written reams about (i.e., Gartner, IDC, Network World, and others): the Public cloud IS NOT SAFE and WON’T BE FOR 5-10 YEARS YET for any sort of “critical” infrastructure or “sensitive” data, and the majority of businesses other than Mom-n-Pop shops need to build their own private cloud infrastructures FIRST. Then SLOWLY, in the future, perhaps they will be able to migrate some of that infrastructure to the Public cloud.

Spiraling out of Control

What has happened with Amazon, EACH TIME they have experienced an outage, is that something small has spiraled out of control into a Major Catastrophe. That is what typically happens when you’re relying on minimum-wage (or not much higher waged) employees to manage a complex infrastructure. I’ve seen it during the past 18 years in almost every hosting/colocation facility I’ve worked in or visited. The fact is, with the massive square footage and the hundreds of thousands, if not millions, of pieces of equipment in these large public hosting companies, OF COURSE they cannot afford to hire skilled workers like myself or others to manage all the little details; they’d go broke. So they hire college kids with no experience. You would be surprised.

So add it all up: a massive infrastructure of millions of components; a NEW technology, “cloud”; college-aged kids with no or minimal experience running the place (in many instances even in upper management); and a Massive Marketing Hype and Advertising Campaign telling everyone that their service is 100% Safe, Secure, Reliable and the Best Thing Since Sliced Bread. (They still claim 99.99% UPTIME! Which is a mathematical impossibility given the number and length of outages they’ve had the past 3 years; even a single hour-long outage affects the uptime percentage for months.) You have the makings for continued disaster after disaster.
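The uptime arithmetic is easy to verify yourself. A quick back-of-the-envelope check in plain Python (no cloud API involved) shows what “four nines” actually permits:

```python
# Quick sanity check on "four nines": how much downtime does a
# 99.99% uptime guarantee actually allow over a year?

HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_minutes(uptime_pct, hours=HOURS_PER_YEAR):
    """Minutes of downtime permitted by a given uptime percentage."""
    return hours * 60 * (1 - uptime_pct / 100)

def uptime_after_outage(outage_hours, hours=HOURS_PER_YEAR):
    """Actual yearly uptime percentage after a single outage."""
    return 100 * (1 - outage_hours / hours)

print(allowed_downtime_minutes(99.99))   # ~52.6 minutes per year
print(uptime_after_outage(1))            # one 1-hour outage -> ~99.989%
print(uptime_after_outage(14))           # a 14-hour outage -> ~99.84%
```

In other words, a single hour-long outage alone blows the 99.99% yearly budget, and a multi-hour event puts the provider whole nines away from the claim.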

My Best advice?

Do yourself a major Favor: call “Cloud Computing International” or another respected cloud consulting company, or hire someone with at least 15 years of data-center and/or Cloud Infrastructure experience, BEFORE making any major commitments of your core infrastructure to the cloud.
Believe me, if all I was after was money, I’d be whitewashing this over like many others are doing, so as not to kill the goose that lays the golden eggs!

Understanding Our Cloud Future


Cloud computing, at its most basic level, is a set of services that enable companies to host data and applications on a massive computing utility, either on premise or off premise hosted by a third party. Traditionally, enterprises supported their IT needs by hosting their data on individual servers. Cloud computing enables companies to host and utilize content in a way that works much like a public utility: you pay for what you use, have access to as much as you need, and don’t need to worry about the cost of maintaining the entire ecosystem. Managing just the last mile, and the devices you use to access content, means replacing capital expenditure (CapEx) models and costly IT support maintenance cycles with an operational expenditure (OpEx) model. The result is reduced cost, guaranteed up-time (if done properly), as well as simpler application upgrade cycles and quicker time-to-market.
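As a rough illustration of the CapEx-to-OpEx shift, here is a back-of-the-envelope comparison. Every dollar figure below is a made-up assumption for illustration, not a quote from any provider:

```python
# Illustrative-only comparison of a CapEx hardware purchase vs. an
# OpEx pay-per-use model. All dollar figures are invented.

def capex_total(server_cost, servers, yearly_support, years):
    """Up-front hardware plus ongoing support/maintenance contracts."""
    return server_cost * servers + yearly_support * years

def opex_total(hourly_rate, avg_instances, years):
    """Pay only for the instance-hours actually consumed."""
    return hourly_rate * avg_instances * 24 * 365 * years

# Ten owned servers at $8k each plus $20k/yr support, over 3 years:
print(capex_total(server_cost=8000, servers=10, yearly_support=20000, years=3))  # 140000

# Average of 4 cloud instances at $0.12/hr over the same 3 years:
print(opex_total(hourly_rate=0.12, avg_instances=4, years=3))  # ~12614.4
```

The point is not the specific numbers but the shape of the decision: the OpEx bill tracks actual utilization, while the CapEx bill is paid whether the boxes are busy or idle.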

Understanding Cloud Concepts

The term “cloud” has become confused through mass-market and media hype. The term came about around 2004-2005 in its current meaning; previously it was used to mean the “internet” when designing WAN networks. Now the term means many things to many people; however, the true meaning consists of a specific set of applications whose primary function is to enable your network to be “elastic,” allow you to “pay-as-you-go,” provide “100% up-time,” enable “simpler application development,” enable quicker “time-to-market,” and “reduce overall infrastructure costs.”

The following are just some of the more common cloud concepts to help us decipher the maze of acronyms and technical terms about cloud computing.


Virtualization

A number of companies and government agencies will rely on the cloud for more than half of their IT services by 2020, according to Gartner’s 2011 CIO Agenda Survey. A big chunk of those services will involve utilizing “virtualization,” which is the process of hosting multiple applications (primarily servers, desktops, load balancers, switches, routers and firewalls) on a single piece of hardware that was originally intended to accommodate only one server, desktop or other application.


Though there are considerations and limitations keeping enterprises from replacing their laptops with cloud-hosted solutions, companies are beginning to migrate their servers in large numbers to the cloud, and desktops will undoubtedly slowly start to follow. In terms of server virtualization, many companies today offer the ability to provision and control virtual machine instances in a cloud-hosted environment. These may be in the form of a private cloud offering, which isolates the physical infrastructure of your organization’s servers from other businesses. Or infrastructures could be hosted in a public cloud offering, where your IT will be co-located with others.

“…factors that are increasing an organization’s interest in virtualization are speed and agility. Virtualization enables you to do things faster, thus making your company more agile. Instead of delivering a new service in two months, companies are able to do it in two days.” (Cloud Computing Journal, “Cloud Computing & Virtualization: Hot Trends Organizations Can’t Ignore,” August 2011, http://cloudcomputing.sys-con.com/node/1950346)


There are Four Key Components which make up any “elastic computing” solution:

  • Extending Resources Across both the LAN & the WAN
  • Massive Consolidation of Resources
  • 100% Up-time Potential
  • Pay-as-you-Go

Virtualization has been the true “cloud” enabler (without it there would be no concept of “cloud” as we know it today). It has allowed companies such as Amazon, which had huge data-centers with massive wasted CPU & memory cycles, to “consolidate” many applications onto a single hardware device, sell those additional “virtualized” servers to customers, and call it a “cloud.” Virtualization now even allows devices to share CPU and memory across the network, and this has enabled what is called “elastic” computing. By enabling devices to share their resources across the whole infrastructure (this can be a LAN or WAN and can be a single data-center or multiple data-centers across the globe), a single application now has practically unlimited resources it can draw on, as well as the potential (if designed properly) for 100% up-time.

Extending Resources Across the LAN & WAN

Traditionally, networks were designed so that a single hardware device would support a single application, such as a server, desktop, router, firewall, etc. With the advent of virtualization we can now move applications across the network (both the LAN & WAN) at will and on-demand, access unused CPU and memory from practically any device for any other device which may require more than it has installed, and do it all automatically.

The best way to picture this fundamental change is to imagine your home or work PC. Occasionally (if it is like mine) it will slow down or hang when it is working too hard, and you’ll see the hourglass or your cursor will show that it is “working,” stopping you from doing anything while it finishes its job. Now imagine if every appliance in your home had the same amount of RAM and the same processors as your PC, and your PC could take additional processing power and memory from any or all of them whenever it required: from your refrigerator, oven, dishwasher, big-screen TV, stereo, toaster, etc., and each of them in turn could do the same. When you’re not using your oven, dishwasher, washing-machine, etc., you have all that memory and all those CPUs just sitting there doing nothing; with “elastic” computing they all become practically a single entity where everything is shared: hard-drive space, memory and CPUs.

In business, this translates to having resources instantly available for those “bursts” when your web server gets hit hard all at once. It allows you to use as much or as little resource as you need.
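That burst behavior can be sketched in a few lines. This is a toy autoscaling rule with invented thresholds, not any provider’s actual policy:

```python
# Toy sketch of "elastic" burst handling: choose an instance count
# from the current request load. Capacity and bounds are invented
# for illustration only.

import math

def instances_needed(requests_per_sec, capacity_per_instance=100,
                     minimum=1, maximum=20):
    """Scale out for bursts, scale back in when traffic subsides."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(minimum, min(maximum, needed))

for load in (30, 250, 5000, 12):   # a quiet hour, a spike, a flood, a lull
    print(load, "->", instances_needed(load), "instances")
# 30 -> 1, 250 -> 3, 5000 -> 20 (capped), 12 -> 1
```

The upper bound matters: real elastic platforms cap how far you can scale (by quota or by budget), which is why “unlimited” burst capacity should always be read skeptically.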

Massive Consolidation of Resources

Using virtualization, you can now put many applications on a single hardware device instead of only one. For example, you can install 10 servers on the same hardware you used to only be able to run a single server on. You can have a single hardware device running many load balancers, all with their own separate operating systems, memory and CPU allocations. The same goes for just about any application, as what virtualization does is install multiple operating systems on a single device. The bottom line: you no longer have a one-to-one ratio of hardware to software, which translates to less hardware overall and a massive reduction in cost.
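The consolidation math is simple enough to do on a napkin. A sketch, with a made-up per-server price purely for illustration:

```python
# Back-of-the-envelope consolidation math: how many physical hosts
# (and, roughly, dollars) does a 10:1 virtualization ratio save?
# The $8k/server figure is an invented assumption.

def hosts_required(workloads, vms_per_host):
    # Ceiling division: 25 workloads at 10 per host still needs 3 hosts.
    return -(-workloads // vms_per_host)

workloads = 100
before = hosts_required(workloads, 1)    # one application per box
after = hosts_required(workloads, 10)    # 10 VMs per box
print(before, "->", after, "physical servers")                  # 100 -> 10
print("saved:", (before - after) * 8000, "USD at $8k/server")   # 720000
```

Even before power, cooling, and rack space are counted, collapsing 100 boxes into 10 is where the bulk of the cloud cost story comes from.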

100% Up-time Potential

When done properly, your servers should never go down. With virtualization, new “cloud” applications and tools, and the proper back-end storage solution, you can institute high-availability which will guarantee your critical infrastructure never goes down short of nuclear war.
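At its core, high availability is health checks plus failover. This is a minimal sketch of the idea only; real HA stacks add heartbeats, fencing, and replicated storage, and all the names here are invented:

```python
# Minimal sketch of the high-availability idea: health-check a pool
# of redundant nodes and direct traffic to a live one.

def pick_healthy_node(nodes, is_healthy):
    """Return the first node passing its health check, else None."""
    for node in nodes:
        if is_healthy(node):
            return node
    return None   # total outage: every node failed its check

nodes = ["vm-a", "vm-b", "vm-c"]
down = {"vm-a"}                          # simulate a failed primary
active = pick_healthy_node(nodes, lambda n: n not in down)
print(active)                            # "vm-b" takes over
```

The uptime guarantee only holds if the redundant nodes share no single point of failure, which is exactly what breaks during the provider-wide outages discussed earlier.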


Pay-as-you-Go

With many public cloud solutions, all the changes mentioned previously enable cloud service providers (CSPs) to charge customers for only the CPU and memory they are using, and then sell the remainder to other clients who perhaps need more. There are many applications that can track customer utilization, and almost all the cloud frameworks/platforms have APIs that allow you to develop your own custom tracking and billing solutions.
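The metering side of pay-as-you-go reduces to bookkeeping over usage samples. A sketch with invented rates; real CSP billing APIs are far richer:

```python
# Sketch of pay-as-you-go metering: bill a tenant only for the
# CPU-hours and memory GB-hours actually consumed. Rates invented.

def bill(usage, cpu_rate=0.05, mem_rate=0.01):
    """usage: list of (cpu_hours, mem_gb_hours) samples for one tenant."""
    cpu = sum(u[0] for u in usage)
    mem = sum(u[1] for u in usage)
    return round(cpu * cpu_rate + mem * mem_rate, 2)

tenant_usage = [(10, 40), (2, 8), (0, 0)]   # idle periods cost nothing
print(bill(tenant_usage))                    # 12 CPU-h + 48 GB-h -> 1.08
```

The zero-usage sample is the whole pitch: unlike an owned server, an idle hour generates no charge.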


Cloud Service Models

Cloud computing typically offers up services based on three different models. Infrastructure-as-a-service (IaaS) provides virtual machines and storage, which are then controlled and maintained by the consumer. IaaS helps solve an organization’s IT capacity need.

Raw (block) storage, firewalls, load balancers, and networks can all be provided through a cloud-hosting provider as an on-demand service, which allows IT departments to only pay for what they use. This enables businesses both large and small to leverage a full IT environment without the associated costs of typical data centers, which is becoming a bigger deal amongst startups as they try to stay in sync with the ongoing activities of larger organizations.

Software-as-a-service (SaaS) gives companies the ability to install pre-packaged software in the cloud, and take advantage of “elasticity”—the ability to scale the amount of computing resources based on the number of people utilizing the service at any given time.

This also allows for multi-tenant access, meaning that a single cloud service can be accessed by multiple people at any time through the use of technologies like load balancing, which helps divide traffic amongst a number of different virtual machines. “True cloud services all use some mode of multi-tenancy—the ability for multiple customers (tenants) to share the same applications and/or compute resources.

It is through multi-tenant architectures that cloud services achieve high cost efficiencies and can deliver low costs. Multitenant architectures must balance these cost benefits with the need for individual tenants to secure their data and applications.” (Forrester, “Understanding Cloud’s Multitenancy,” March 2012)

Software-as-a-service has been around for a long time in the consumer space. Email applications, such as Hotmail and Gmail, have been around for years. It’s the shift of SaaS into the enterprise that presents opportunities for companies today.
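The load-balancing mechanism behind that multi-tenant sharing can be shown in miniature. This round-robin sketch is illustrative only; production balancers also weigh health, latency, and session affinity:

```python
# Sketch of the load-balancing piece of multi-tenancy: spread
# incoming requests across several identical virtual machines.

import itertools

def round_robin(vms):
    """Yield VMs in rotation; each request lands on the next one."""
    return itertools.cycle(vms)

lb = round_robin(["vm-1", "vm-2", "vm-3"])
for request_id in range(5):
    print(request_id, "->", next(lb))
# requests 0..4 land on vm-1, vm-2, vm-3, vm-1, vm-2
```

Because every VM is interchangeable, any of them can serve any tenant’s request, which is what lets the provider pack many customers onto shared capacity.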

Cloud-hosted applications such as Salesforce.com and Microsoft Office 365 can offer distinct advantages over organizations running and maintaining their own implementations.

Platform-as-a-service (PaaS) allows companies to deploy an entire application stack via a cloud host. These platforms offer building blocks and services to easily create applications that are highly scalable and redundant.

Typically the cost of setting up and maintaining multiple layers means costly updates, difficulty with solutions working together on different hardware platforms, and the expense of allocating resources independently. PaaS allows companies to utilize an entire stack with minimal effort (literally the push of a button to install).

Companies benefit from having their IT maintenance managed exclusively by the cloud host and their allocation of resources automated to adjust for demand. This makes utilizing common application stacks much less cumbersome and costly.


Cloud Deployment Models

Organizations can choose from three types of cloud environments: a private cloud, public cloud, or hybrid cloud. Private clouds are hosted environments that are wholly owned by the organization utilizing the service, sometimes within a company’s existing facilities.

This is often chosen by companies who, for security reasons, want to leverage the benefits of cloud computing but need to keep their sensitive data contained within their own cloud. Though companies can utilize a third party for hosting their private clouds, they’re typically kept on premise for greater control and security management.

More data-intensive applications may also be kept on premise if their performance is suffering from a weak broadband connection in a public cloud. Another benefit of the private cloud is the avoidance of “vendor lock.”

Private clouds give organizations complete control over the software they use with their cloud services, unless companies wish to utilize virtualization and maintain their own servers in a public cloud environment. This can be a very costly decision, though, negating the biggest benefit of the cloud: the utility computing aspect.

Public cloud is a cloud instance hosted by a third-party provider, which has shared physical resources to provide computing needs to customers all over the world. You’ll pay for what you use, and can either host your own servers through the use of virtualization or utilize existing services pre-installed in the cloud. For example, you may install your own server-side software, but choose to use a cloud provider’s SQL service to avoid the cost of maintaining a database server alongside an application server. The cloud-hosted SQL service could be set up and maintained by the cloud host, allowing you to focus solely on the maintenance of your own application server.

The most common public cloud application is email, with services like Hotmail and Gmail offered through the cloud to millions of people around the world. A hybrid cloud model allows companies to host some resources onsite in a private cloud, and some resources offsite in a public cloud. This is typically done for security reasons, when companies want their sensitive data to be managed onsite but the bulk of their application services and back-end utilities moved to a public cloud to take advantage of the utility model.

This is becoming more and more common among larger organizations, when the complexity and diversity of existing IT systems warrants a hybrid model.

“A private cloud, built using your resources in your data center, leaves you in control but also means you shoulder the management overhead. Public cloud services relieve you of that management burden but at the expense of some control. A hybrid approach might make it possible to realize the best of both worlds, but you’ll still have to pick private or public as the base for operations.” (Network World, “Tech Debate: Cloud: Public or Private?” http://www.networkworld.com/community/tech-debate-private-public-cloud)


Stateful vs. Stateless Applications

Stateful applications remember one or more preceding events in a given sequence. The best example of a stateful application is an Internet browser, which has the ability to go “back” by remembering the user’s browser history. However, the web’s HTTP application layer is “stateless” because each request to fetch a website is isolated and treated as a separate event, with no memory of previous websites selected. This is important because as companies move applications into cloud-hosted environments, applications that require “state” to operate will have a hard time in a shared environment with shared resources.
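The distinction can be made concrete with a toy pair of handlers. Everything here is illustrative; the point is only where the memory lives:

```python
# Contrast of stateless vs. stateful request handling. The stateless
# handler can run on any VM behind a load balancer; the stateful one
# only works if the same server sees every request in the sequence.

def stateless_handler(request):
    # Everything needed is carried in the request itself.
    return f"page for {request['url']}"

class StatefulHandler:
    def __init__(self):
        self.history = []          # server-side state (the "back" button)

    def handle(self, request):
        self.history.append(request["url"])
        # Return the previous page, i.e. where "back" would go.
        return self.history[-2] if len(self.history) > 1 else None

s = StatefulHandler()
s.handle({"url": "/home"})
print(s.handle({"url": "/about"}))   # prints "/home"
```

Scatter a `StatefulHandler`’s requests across ten load-balanced VMs and each VM sees a different fragment of the history, which is precisely why state-bearing applications need special treatment in the cloud.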

The need to maintain state can easily drive up the cost of cloud-hosted instances, either due to the requirement to maintain that context within a given cloud instance, or the need to utilize a number of different resources to establish that state throughout the user’s interaction. This is especially cumbersome when you consider scaling those resources up and down based on utilization. Handling it takes a skilled cloud developer and services like Amazon’s EC2 utilizing the GigaSpaces eXtreme Application Platform or Microsoft’s Azure AppFabric Cache Service.

It takes a skilled and experienced developer to engineer an organization’s existing applications to operate successfully in a cloud environment, and re-engineer when necessary to ensure successful deployment and operation.

“There’s a growing understanding that applications in the cloud will be different; that agile development will never quite get to devops unless development for the cloud moves into the cloud.” (Information Week, “6 Ways Cloud Computing will Evolve in 2012” December http://www.informationweek.com/news/cloud-computing/infrastructure/232301052)


Cloud Storage

Cloud providers today offer the ability to store data and assets at an extremely low cost. Taking advantage of storage in the cloud is typically an easy first step in cloud adoption. There are a surprising number of storage options when it comes to the cloud. For companies looking to replicate or offload traditional disc storage, there are easy ways to integrate a cloud storage offering with an organization’s internal networks without users even knowing it.

Appliances living within your datacenter can route files directly to the cloud, which can replace expensive on-premise, SAN-based solutions. It can also be used as a way to implement a disaster recovery plan for your organization. Cloud providers offer additional storage services beyond attaching storage volumes to application servers. First, there are REST-based storage services for storing binary objects.

Examples of these offerings include Amazon’s S3 and Windows Azure BLOB storage. Assets such as documents, images, and media can be stored and retrieved over HTTP/HTTPS. There are also database offerings, both relational and non-relational, for storing application data. These services offer advantages over traditional database solutions because of their scalability, availability, self-maintenance, and built-in replication. With these different options available, application developers have a wealth of choices when it comes to storing data—it’s just a matter of understanding what storage offering is right for their applications.
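The interface those REST object stores expose can be modeled in a few lines. This in-memory stand-in only mirrors the bucket/key PUT/GET/DELETE semantics for illustration; it is not the real S3 or Azure BLOB API:

```python
# The REST object-store model (S3-style) in miniature: objects are
# binary blobs addressed by bucket + key. An in-memory stand-in.

class ObjectStore:
    def __init__(self):
        self._blobs = {}

    def put(self, bucket, key, data: bytes):
        self._blobs[(bucket, key)] = data      # PUT overwrites silently

    def get(self, bucket, key) -> bytes:
        return self._blobs[(bucket, key)]      # missing key -> error (HTTP 404)

    def delete(self, bucket, key):
        self._blobs.pop((bucket, key), None)   # DELETE is idempotent

store = ObjectStore()
store.put("assets", "logo.png", b"\x89PNG...")
print(store.get("assets", "logo.png"))
```

The flat bucket/key namespace (no directories, no partial updates) is what lets providers replicate and scale these stores so cheaply compared to a SAN.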

“The now ubiquitous cloud is constantly evolving as a standard for business but the most common use today is data storage. But as organizations’ data volumes grow, so does the complexity of file formats, data de-duplication, and security. Finding the right cloud storage solution requires a layered approach that includes performance, reliability, scalability and, of course, cost.” (TechRepublic “How to create cost effective storage in the cloud” September 2011 http://www.techrepublic.com/webcasts/how-to-create-cost-effective-storage-in-the-cloud/3422473)

In addition to storage, many cloud providers offer the ability to deliver content to end users through distributed networks. By replicating and caching copies of content in data center locations around the globe, the amount of latency incurred by a user requesting content can be reduced.

Examples of these services are Amazon’s CloudFront and Windows Azure Content Delivery Network. These cloud services allow consumers to access content hosted via the cloud on the cloud instance hosted closest to them, which makes a considerable difference when considering the ability to access streaming media anywhere in the world.


Cloud Security

Security is a big topic within the cloud, and one that dominates the agenda of most enterprise conversations considering what assets to move to the cloud. Simply moving to the cloud doesn’t necessarily mean exposing your company to significant risk, as the cloud can offer organizations greater transparency and simplified risk assessment.

That said, it’s still up to organizations to put the right safeguards in place when it comes to utilizing a public cloud, as cloud providers themselves aren’t responsible for safeguarding corporate assets. Though encryption options may be available, considering what happens to your at-risk data is important to include in any security checklist when reviewing cloud-hosting providers. Considerations should also be made around hosting and processing data in specific jurisdictions when it comes to ensuring local privacy requirements are followed on behalf of the customer.

“While there are a variety of opinions about how secure various cloud services are, there has not been a consistent best practice related to cloud security. That is changing for 2012. This will be the year when IT and business management will begin to deal with the subtleties of setting rules and processes for which clouds to use under which circumstances.

For example, open cloud communities with little security and no governance will be of limited value for companies that have to comply with industry and governmental requirements.

On the other hand, there is an emerging segment of public cloud offerings intended for companies that want a higher level of security and governance. Increasingly, organizations are looking to private clouds when governance needs to be strictly enforced.” (Information Week “5 Big Cloud Trends for 2012” December 2011, http://www.informationweek.com/news/cloud-computing/infrastructure/232200551)


Now that we’ve covered some of the key concepts, we’re going to dig into some often-quoted benefits of cloud computing that should align to your business considerations when moving to the cloud.


One of the biggest benefits of cloud services is only being charged for the resources you consume. This is the key attribute and benefit of the cloud, as it established the “utility” mindset in the customer, and helps drive cost savings by moving from costly maintenance and CapEx-based expenses to an entirely OpEx-driven cost model. This elasticity eliminates concerns about capacity and the ability to meet demand on a month-to-month basis, since a cloud host enables companies to scale their resources as needed.

The ability to provision, use, and de-provision compute and storage resources can turn into cost-saving measures within your organization. “When you take this goal apart, what folks are really after is the ability to transparently add and/or remove resources to an “application” as needed to meet demand. Interestingly enough, both in pre-cloud and cloud computing environments this happens due to two key components: load balancing and automation.”

(F5DevCentral, “The Secret to Doing Cloud Scalability Right” November 2011 https://devcentral.f5.com/weblogs/macvittie/archive/2011/11/09/the-secret-to-doing-cloud-scalability-right.aspx)

Cost management is a huge benefit of cloud computing, as you’ll only pay for what you need to use and nothing more. There won’t be money spent on resources collecting dust for a “just in case” peak utilization scenario, and the typical IT support costs associated with maintaining farms of servers are no longer a concern, as they’re rolled into the cost of utilizing a cloud host. Unless you’re utilizing cloud services onsite in a private cloud environment, a huge benefit with cloud will come from controlling and streamlining the cost of hosting and serving data to enterprise users and consumers.

Return on equity (ROE) is a more important analysis than ROI when CIOs and IT executives are considering a move to cloud computing. ROE is actually the best (but certainly not only) measure for making a decision to move to the cloud. (Gartner, “Financial Ratios in Cloud Strategies” September 2011, page 1)


The ability to host and maintain applications in the cloud is significant, because a lot of the services that those applications will rely on—most notably, databases—can be maintained within the cloud alongside the application environment. This can have a huge impact on reducing the cost of maintaining and upgrading multiple services and applications through automating the maintenance of application stacks, allowing for easier access and use when it comes to hosting and running enterprise applications in the cloud.

Furthermore, having the ability to utilize multi-tenant support for applications means that you can have more users accessing that same service around the world with the ability to scale up and down as resource needs fluctuate. “Software-as-a-service (SaaS) avoids most, if not all, [web application maintenance and management] maintenance. But the trade-off is the loss of some flexibility and control. On-premises deployments require a staff person or contractor to manage it all. But with the right tools and the infrastructure cloud, most of it can be automated.“

(Standing Cloud, “Application Lifecycle Management” May 2012 http://www.standingcloud.com/sites/all/themes/standingcloud/images/media/standing-cloud-white-paper-application-lifecycle-management.pdf)


The ability to literally weather a storm depends upon the availability of IT resources in geographically diverse areas. Dispersing IT resources can help companies avoid a catastrophic outage in the case of an emergency. Cloud computing enables resources to be duplicated around the world, allowing enterprises to ensure 100% uptime regardless of what happens anywhere in the world. This is significant, as a typical outage for a company can be very costly, depending on the size and duration.

“Emerging technologies that fundamentally decentralize applications and data greatly improve business resilience and simplify disaster and network recovery. They are designed to handle less-than-perfect performance from all components of the infrastructure… by combining the various required elements—including storage, load balancing, database, and caching—into easily managed appliances or cloud instances. Unlike conventional infrastructures where scale, redundancy, and performance are increased by “scaling up” and adding additional tiers of components, this provides an architecture where additional capabilities are added by “scaling out” and adding additional, identical nodes.” (Cloud Computing Journal, “Improved Business Resilience with Cloud Computing,” January 2011, http://cloudcomputing.sys-con.com/node/1687210)


It’s likely that your company is already using cloud solutions in some fashion or another, so the question is how to take advantage of it in increasing amounts, and how to utilize hybrid cloud models to manage your organization’s information. CCI has expertise in helping companies work through all these issues, and can help your company address your most pressing cloud needs by establishing everything from a strategy and roadmap to vendor selection and deployment. Visit us at https://cloudwiser.wordpress.com to see how we can begin to help your company continue to embrace the cloud or how cloud technology can best benefit you and your customers.