As someone who provides cost control for AWS and Google Cloud Platform (GCP), I thought it might be useful to compare pricing between the two.
In AWS, the compute service is called “Elastic Compute Cloud” (EC2). The virtual servers are called “Instances”.
In GCP, the service is referred to as “Google Compute Engine” (GCE). The servers are also called “instances”. However, in GCP there are “preemptible” and non-preemptible instances. Non-preemptible instances are the same as AWS “on demand” instances.
Preemptible instances are like AWS “spot” instances, in that they are a lot less expensive, but can be preempted with little or no notice. The difference is that GCP preemptible instances can actually be stopped without being terminated. That is not true for AWS spot instances.
Flocks of these instances, spun up from a machine image according to scaling rules, are called “auto scaling groups” in AWS.
A similar concept can be created in GCP using “instance groups”. However, an instance group is really more of a “stack”, created from an “instance template”. As such, instance groups are more closely related to AWS CloudFormation stacks.
AWS and GCP Compute Sizing
Both AWS and GCP have a dizzying array of instance sizes to choose from, and doing an apples-to-apples comparison between them can be quite challenging. These predefined instance sizes are based upon the number of virtual cores, the amount of virtual memory, and the amount of virtual disk. AWS offers the following categories:
- Free tier – inexpensive, burst performance (t2 family)
- General purpose (m3/m4 family)
- Compute optimized (c4 family)
- GPU instances (p2 family)
- FPGA instances (f1 family)
- Memory optimized (x1, r3/r4 family)
- Storage optimized (i3, d2 family)
GCP offers the following predefined types:
- Free tier – inexpensive, shared-core, burst performance (f1/g1 family)
- Standard (n1-standard family)
- High memory (n1-highmem family)
- High CPU (n1-highcpu family)
Both providers take marketing liberties with things like memory and disk sizes. For example, AWS lists its memory size in GiB (base2) and disk size in GB (base10).
GCP reports its memory size and disk size as GB. However, to make things really confusing, this is what they say on their pricing page: “Disk size, machine type memory, and network usage are calculated in gigabytes (GB), where 1 GB is 2³⁰ bytes. This unit of measurement is also known as a gibibyte (GiB).”
This, of course, is pure nonsense. A gigabyte (GB) is 10⁹ bytes. A gibibyte (GiB) is 2³⁰ bytes. The two are definitely NOT equal. It was probably just a typo.
If you look at what is actually delivered, neither seems to match what is shown on their pricing pages. For example, an AWS t2.micro is advertised as having 1 GiB of memory. In reality, it is 0.969 GiB (using “top”).
For GCP, their f1-micro is advertised as “0.6 GB”. Assuming they simply have their units mixed up and “GB” should really be “GiB”, they deliver 0.580 GiB. So both round up, as marketing/sales people are apt to do.
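To make the unit distinction concrete, here is a quick sketch in Python of the GB-to-GiB conversion (the 0.6 GB and 0.580 GiB figures are the f1-micro numbers from above):

```python
GB = 10**9   # gigabyte, base 10
GIB = 2**30  # gibibyte, base 2

def gb_to_gib(gb: float) -> float:
    """Convert a base-10 gigabyte figure to gibibytes."""
    return gb * GB / GIB

# 1 GB is only ~0.931 GiB -- the two units are definitely NOT equal.
print(round(gb_to_gib(1.0), 3))  # 0.931

# If GCP's advertised "0.6 GB" really were base-10 GB, it would be
# ~0.559 GiB -- below the ~0.580 GiB actually observed, which is
# consistent with "GB" really meaning GiB on their pricing page.
print(round(gb_to_gib(0.6), 3))  # 0.559
```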
With respect to pricing, this is how the two compare when looking at some of the most common “workhorses” and focusing on CPU, memory, and cost. (One would have to run actual benchmarks to compare more accurately.)
The bottom line:
In general, for most workloads, AWS is less expensive on a CPU/Hr basis. For compute intensive workloads, GCP instances are less expensive.
Also, as you can see from the table, both providers charge uplifts for different operating systems, and those uplifts can be substantial! You really need to pay attention to the fine print. For example, GCP charges a 4-core minimum for all their SQL uplifts (yikes!). And in the case of Red Hat Enterprise Linux (RHEL) in GCP, they charge a 1-hour minimum for the uplift, billed in 1-hour increments after that.
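As a sketch of how that fine print bites, the following Python models the two minimums just described. The hourly uplift rates are hypothetical parameters, not actual published prices:

```python
import math

def gcp_sql_uplift(cores: int, hours: float, rate_per_core_hour: float) -> float:
    """SQL uplift cost with GCP's 4-core billing minimum."""
    billed_cores = max(cores, 4)  # fine print: you pay for at least 4 cores
    return billed_cores * hours * rate_per_core_hour

def gcp_rhel_uplift(run_minutes: float, rate_per_hour: float) -> float:
    """RHEL uplift cost: 1-hour minimum, then 1-hour increments."""
    billed_hours = max(1, math.ceil(run_minutes / 60))
    return billed_hours * rate_per_hour

# A 1-core instance still pays the 4-core SQL uplift
# (rate 0.10/core-hr is illustrative only):
print(gcp_sql_uplift(cores=1, hours=1, rate_per_core_hour=0.10))  # 0.4
# Running RHEL for 61 minutes pays 2 full hours of uplift:
print(gcp_rhel_uplift(run_minutes=61, rate_per_hour=0.06))  # 0.12
```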
AWS vs. Google Cloud Pricing – Examining the Differences
Cost/Hr is only one aspect of the equation, though. To better understand your monthly bill, you must also understand how the cloud providers actually charge you. AWS prices its compute time by the hour, with a 1-hour minimum, rounded up. If you start an instance, run it for 61 minutes, and then shut it down, you get charged for 2 hours of compute time.
Google Compute Engine pricing is also listed by the hour for each instance, but they charge by the minute, rounded up to the nearest minute, with a 10 minute minimum charge. So, if you run for 1 minute, you get charged for 10 minutes. However, if you run for 61 minutes, you get charged for 61 minutes. On the surface, this sounds very appealing (and makes me want to wag my finger at AWS and say, “shame on you, AWS”).
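The two rounding schemes described above can be sketched in Python. This is just a model of the billing rules as stated, not an official calculator:

```python
import math

def aws_billed_hours(run_minutes: float) -> int:
    """AWS EC2 on-demand: billed by the hour, 1-hour minimum, rounded up."""
    return max(1, math.ceil(run_minutes / 60))

def gcp_billed_minutes(run_minutes: float) -> int:
    """GCE: billed by the minute, rounded up, with a 10-minute minimum."""
    return max(10, math.ceil(run_minutes))

# The 61-minute example from above:
print(aws_billed_hours(61))    # 2  (hours billed)
print(gcp_billed_minutes(61))  # 61 (minutes billed)
# And the short-run case:
print(gcp_billed_minutes(1))   # 10 (minutes billed)
```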
If you are new to the public cloud, once you get past all the confusing jargon, the creative approaches to pricing, and the different ways providers charge for usage, the actual cloud services themselves are much easier to use than legacy on-premises services.
The public cloud services do provide much better flexibility and faster time-to-value. The cloud providers simply need to get out of their own way. Pricing is but one example where AWS and GCP could stand to make things a lot simpler, so that newcomers can make informed decisions.
When comparing AWS vs. Google Cloud pricing, AWS EC2 on-demand pricing may on the surface appear more competitive than GCP pricing for comparable Compute Engine instances. However, when you examine specific workloads and factor in Google’s more enlightened approach to charging for CPU/Hr time and their Sustained Use Discounts, GCP may be less expensive. AWS really needs to get in line with both Azure and Google, who charge by the minute and have much smaller minimums. Nobody likes being charged extra for something they don’t use.