Cloud: Public, Private, Hybrid or not at all


As part of my work undertaking IT due diligence on businesses hoping to sell or receive an investment, I see many types of companies.  Many of these provide products and services for their customers or run applications for their staff to use in the business.  One question that investors and CTOs ask is “are we/they making best use of cloud technologies?”  Across the world, there is a popular view that cloud is the answer to all computing issues and that businesses not embarking on that path are deficient.

Computing in the cloud can be easily explained as a concept.  In practice, the cloud can be a bewildering place which requires changes to software architecture and operations, and a host of new skills to be acquired.  Contrary to popular belief, cloud computing is not suitable for every business, whether from a cost or a quality-of-service perspective.

For start-ups, placing their workload on a public cloud is likely to be the only sensible choice.  More established businesses might be better served by running a traditional data centre or a managed service contract.  Others with global customers, or those with very dynamic requirements, will find the cloud a perfect match for their needs.

Organisations contemplating cloud deployment face a number of choices:

  • Public Cloud – Google Cloud, Microsoft Azure, Amazon Web Services (AWS), Oracle etc.
  • Private Cloud – providing a cloud-like experience from the business’s own data centres or co-location facilities.
  • Hybrid Cloud – part of the organisation’s workload runs on the public cloud and part on the business’s own infrastructure.  Increasingly seen as the sweet spot for many businesses.
  • Multi-Cloud – essentially splitting workloads between multiple cloud providers.

Advantages of the public cloud

  • Lead time – Capacity is always available. No waiting 6 weeks for equipment to be manufactured or sacrificing specification to buy equipment that is available off the shelf.  No waiting for the internal IT teams to have a slot in their schedule to install the equipment.
  • Flexibility – With traditional models, once capacity has been acquired, it can be difficult to dispose of it (sell the equipment, cancel contracts for data centre space etc).  With the cloud, if capacity is no longer required then a business deletes their virtual world.  No more costs.
  • Capital Investment – Buying equipment takes time,  negotiating prices and navigating internal budgets and approvals.  It could be a large up-front cash purchase (tying up capital) or perhaps lease finance needs to be arranged.  In the public cloud, none of these constraints apply, although the expected monthly costs need to fall into an op-ex budget.
  • Reduced support and maintenance work – When using the public cloud, it’s somebody else’s responsibility to replace a failing disk or broken power supply.
  • High Availability & Resiliency – Public cloud providers generally offer 99.9% uptime guarantees and can provide a range of backup and failover capabilities. If you have the budget, they will have a solution.
  • Latency – Particularly important for organisations with a global footprint.  Options to distribute all or part of your workload in global data centres that are closer to customers reduces latency times and creates a better user experience.
  • Features – Cloud providers are expanding the range of features that their platforms offer.  In recent times, there has been an emphasis on reducing time to market with Microsoft and AWS providing tools to make it easier for engineering teams to build, test and deploy software into production.
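The 99.9% uptime figure quoted above sounds generous until it is converted into an actual downtime budget. As a rough illustration (real SLAs define their own measurement windows and remedies), the permitted downtime per 730-hour month can be calculated like this:

```python
# Convert an uptime SLA percentage into an allowed-downtime budget per month.
# Illustrative arithmetic only; actual SLA terms vary by provider.

def monthly_downtime_minutes(sla_percent: float, hours_per_month: float = 730) -> float:
    """Minutes of downtime permitted per month at a given SLA level."""
    return (1 - sla_percent / 100) * hours_per_month * 60

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% uptime -> {monthly_downtime_minutes(sla):.1f} minutes/month")
```

At 99.9%, that is roughly 44 minutes of permitted downtime a month, which is why businesses with stricter requirements end up paying for multi-zone or multi-region designs.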

Disadvantages of the public cloud

  • Learning time – Cloud computing is a different paradigm to traditional models.  Learning how to configure storage and compute instances, and how to use automation to deploy virtual infrastructure (networks, computers), is very different to on-premises environments.  The control panels and APIs offered by the major cloud providers also differ significantly from one another.  Adapting monitoring and security systems and processes requires knowledge.  Acquiring these skills takes time, especially as cloud engineers possess “hot skills” in the labour market, making good practitioners difficult to find and/or expensive to hire.
  • Security and Monitoring – While the major providers do include security capabilities as standard, the instances that business teams create still need to be secured and monitored.  There are a variety of approaches, which can include the platform provider’s solutions (at a price) or the business’s own licensed software, although care is needed as many software vendors have different licensing models for use on the public cloud.  How will patches be applied to defend against vulnerabilities?  How will permissions (of which there are many) be assigned to employees in the control panel?  How will staff access the control panel?  Many organisations only permit control panel access from their corporate network.
  • Data access and trust – Policies and implementation technologies such as disk encryption are meant to make it impossible for a cloud vendor to access a client company’s data and share it with anybody they are, or feel, obliged to.  However, in practice a foreign government or agency could compel the provider to hand over data or, worse, a process mistake or poor implementation of a policy (e.g. not disposing of hard drives securely) could render data accessible to malicious parties.  Cloud customers have no control over this and rely on the word of the cloud vendor.

The cloud allows in-house operations teams to be relieved of hardware maintenance; however, monitoring and security operations continue, and often face different types of threat to those associated with a wholly owned data centre.

  • Vendor lock-in – As well as fighting to differentiate their cloud offerings, the major vendors are trying to lock customers into their platforms with unique features and interfaces.  The aim is to make it expensive to move workloads to another provider, which enables the cloud vendor to increase prices in the future.  Irrespective of lock-in, once a commitment has been made to a cloud platform, the effort and disruption of moving to another can be high.  Businesses moving to a public cloud provider must determine whether they completely trust the selected vendor or want to retain the flexibility to move to a different provider should it become necessary.
  • Reliability – Although cloud providers do offer the capability to run business applications in multiple regions, they can and do suffer major outages and these can impact multiple regions preventing failover.   The impact of an outage on the services provided must be assessed and if deemed significant it may be appropriate to take a multi-cloud approach.
  • Cost Control – Public clouds can be complex to cost.  It’s necessary to look at costs not only to determine who the cheapest provider is, but also to compare with costs for other models such as a hybrid cloud.  Everything has a price on a cloud platform and it’s easy to run up large bills.  Vendors offer differing levels of service and specification, so it can be quite hard to compare pricing across them.

Public Cloud Pricing

A standard web server running the LAMP stack would need a Virtual Machine (VM) with 2 x CPU, a 240GB hard drive and 3.5GB of RAM.  The following prices are based on vendor pricing calculators on 31st December 2019.  This is not intended to be a definitive price comparison, but to show the variation in pricing and specification for a simple computing requirement.  Calculating costs for a larger, more complex estate becomes quite difficult.

I priced the VMs below in July 2019 and updated them again for this article; not only had the prices changed (generally upward), but the closest matching specification of VMs/storage had also changed.

  • On Azure:
    • 2 x CPU, 7GB RAM, 100GB temporary storage, 120 storage transactions and 240GB of OS storage with 256GB snapshot (for backup), 730 hours per month.
      • US West Region $116.34 per month
      • UK South $142.11 per month.
  • On AWS:
    • T2 Medium VM, 2 x CPU, 4GB RAM, 240GB magnetic storage, 1 x snapshot (3GB changed), 730 hours per month.
      •  US West (Oregon) $47.20 per month
      • EU (London) $55.53 per month
  • On Google:
    • N1-Standard-2, 2xCPU, 7.5GB RAM, 375GB SSD storage, 730 hours per month.
      • US West (Los Angeles) $94.31 per month
      •  London (europe-west2) $98.54 per month
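The monthly figures above can be annualised to make the spread between providers more visible. A quick sketch, using the December 2019 prices quoted in this article (remembering that the specifications differ between providers, so this is indicative rather than like-for-like):

```python
# Annualise the example monthly VM prices quoted above (Dec 2019 figures).
# Specifications differ between providers, so this is a rough comparison,
# not a like-for-like benchmark.

monthly_usd = {
    "Azure US West": 116.34,
    "Azure UK South": 142.11,
    "AWS US West (Oregon)": 47.20,
    "AWS EU (London)": 55.53,
    "Google US West (Los Angeles)": 94.31,
    "Google London (europe-west2)": 98.54,
}

# Print cheapest first, with the annual equivalent alongside.
for name, price in sorted(monthly_usd.items(), key=lambda kv: kv[1]):
    print(f"{name:30s} ${price:7.2f}/month  ${price * 12:8.2f}/year")
```

Even for this single small VM, the gap between the cheapest and dearest example is nearly $1,150 a year, which compounds quickly across a larger estate.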
Two further considerations apply to public cloud deployments:

  • Contention and performance – Although performance is generally good, most cloud providers offer (at greater cost) less contended hardware for more compute-intensive workloads.
  • Change control – Cloud environments, especially when combined with modern software engineering practices, can move far faster than traditional Change Approval Boards (CABs) can.  While perhaps not an issue for nimble start-ups, corporates with regulatory and audit requirements may find the cloud akin to the Wild West while they work out how to retain traditional audit trails in this new paradigm.

What is a Private Cloud?

A private cloud is a selection of computing resources that are owned and run by the organisation.  Rather like a public cloud, employees and teams can create computing workloads and scale them on demand.  The organisation will typically have multiple data centres (to provide for failover and proximity to customers).

A true private cloud goes beyond a traditional corporate data centre where applications could only run on selected and often dedicated hardware.  In the private cloud, the corporation has recreated many of the features of public cloud services in as much as applications can be run on any available equipment. 

For many large organisations, this transition can be an expensive and slow journey.

Hybrid Cloud

The hybrid cloud embraces both the traditional corporate setting and the public cloud.  An organisation might choose this model to allow them to scale at peak times.  For example, a retailer may have sufficient computing capacity to handle standard trading volumes but at peak times, such as “Black Friday”, it may want to call upon additional resources to handle seasonal spikes.

Another scenario is the organisation keeping mission-critical and highly sensitive systems on corporate systems but using the public cloud for less important or sensitive workloads.  This might include development and test environments.

Another usage pattern is to use the public cloud for backup and disaster recovery purposes.  This does, of course, require the environment to be fully secured, but has the advantage that much of the virtual infrastructure can be left turned off (saving costs) until a DR event occurs.

Amazon have introduced another twist in the Hybrid/Cloud discussion in the form of “Outposts”. This enables customers to install AWS mandated hardware in their data centre and run AWS locally, creating a seamless link between on prem and public cloud operations.

Managed Service/Co-Location

Sitting in the middle of this are providers of co-location and managed services.  Many of these organisations have good discounts with server vendors such as HPE and Dell, which they can incorporate into a complete hosting package.  While these typically tie companies into 3 or 5 year service contracts, they can offer an effective private cloud with a low upfront capital cost, with the balance of the equipment and services paid for monthly over the contract.  These contracts can be difficult to exit if the business changes and a different configuration is needed, but for established organisations with known workloads this can be a very cost-effective option, and is likely to be more cost-effective than switching to the public cloud.

Software Architecture

Virtually any application can be moved to a cloud computing environment.  At its simplest, virtual servers can be “lifted and shifted” to the cloud.  However, to exploit cloud computing (public, private or hybrid) to its full extent, infrastructure should be provisioned by code.  This becomes more important when applications are constructed as services, perhaps employing containers that can autoscale.  Infrastructure as Code (IaC) is blurring the lines between software and infrastructure teams.  It requires that architects understand both software and infrastructure requirements, and that product managers have a greater awareness of operations requirements such as monitoring.  Operations teams also need to be aware of environments that can shrink or expand in size (why did ‘x’ just appear on, or disappear from, our monitoring screens?).
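To make the IaC idea concrete, here is a toy sketch, deliberately not tied to any real provider’s API, of the declarative pattern most IaC tools follow: the desired infrastructure is described as data, and the tool diffs that description against current reality to derive the actions to apply. All of the names and sizes below are invented for illustration.

```python
# Toy illustration of the declarative Infrastructure-as-Code pattern.
# Real tools (Terraform, CloudFormation, Pulumi etc.) do this against
# provider APIs with dependency graphs and state files; this sketch
# only shows the core "desired state vs current state" diff.

desired = {"web-1": "t2.medium", "web-2": "t2.medium", "db-1": "m5.large"}
current = {"web-1": "t2.medium", "old-batch": "t2.small"}

def plan(desired: dict, current: dict) -> list:
    """Compute the create/resize/destroy actions that reconcile
    current infrastructure with the desired description."""
    actions = []
    for name, size in desired.items():
        if name not in current:
            actions.append(f"create {name} ({size})")
        elif current[name] != size:
            actions.append(f"resize {name} -> {size}")
    for name in current:
        if name not in desired:
            actions.append(f"destroy {name}")
    return actions

for action in plan(desired, current):
    print(action)
```

The key point for teams moving to this model is that the description is version-controlled like any other source code, which is precisely why IaC pulls software engineering practices into operations.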

Dev/Test Tooling

Modern software engineering methods also require that development teams can “spin up” and “trash” environments on demand, particularly where they have invested in test automation or require engineers to each have a dedicated production like development environment.

Development and testing have become the latest way to attract customers to the cloud.  Any organisation that is writing code and using “DevOps” practices (or any kind of pipeline) needs to be able to spin up development and test environments quickly and trash them at the end.  It is virtually impossible to adopt modern software engineering practices without the ability to create an endless number of test systems to support pipelines.

As on-premises servers in larger companies can be difficult to obtain, many have found the allure of endless cloud environments overpowering and moved dev/test workloads to the cloud.

The cloud vendors have recognised this and are now providing ready-made code and test environments with DevOps tooling included.  Azure have gone further and created toolsets that allow code to be built and deployed locally (on a desktop or laptop) or in the cloud, breaking down the barriers between local and cloud-based development.

While this speeds up coding and testing, the cloud providers have worked out that once an application is ready for production, it becomes all too easy to base the live application on their cloud infrastructure.

Determining Cloud Strategy

The first task for any organisation is to determine why they believe the cloud will assist them.  As with any project or strategic initiative, they should define the key performance indicators (KPIs) and describe their vision, i.e. what success will look like.  It is important to understand any applicable policies and regulations, particularly the GDPR, which places equal responsibility on both the data controller (the organisation) and the processor (the cloud provider).

The identification of impacted teams is another key consideration.  Most of the change will fall on operations departments, but development and service teams are also impacted.  For the agreed strategy, it is also worth considering who the “opinion formers” are in the organisation and how both supporters and potential detractors should be engaged.

Finally, consider the likely challenges of implementing the cloud strategy.  These can be quite profound, as new software architectures might be required.  There could also be some fundamental shifts in provisioning; for example, changing over to Infrastructure as Code (IaC) brings software engineering into the heart of traditional operations teams, going beyond many traditional aspects of automation.  Employees will need to acquire new skills, or additional hires will need to be made.


Cloud computing in any form is a powerful capability that can have a dramatic effect on an organisation’s ability to serve its customers.  The major public cloud vendors are rapidly increasing the number of features that their platforms offer to appeal to the widest range of customers; however, this creates considerable complexity.  While capital costs are reduced, operating costs will increase and can become difficult to contain without the right controls in place.