by INVOKE Team
Posted on April 19, 2018 at 12:00 AM
image courtesy http://surviveyourchildsaddiction.com/one-size-not-fit/
According to recent reports, the public cloud computing market reached around $130B in 2017, growing at a rate of roughly 19% per year. Companies are rapidly moving to cloud hosting to benefit from scalability and cost savings. Soon after migration, however, many realize that the cost savings promised by cloud economics are not materializing as expected. One reason for this is a “one solution fits all” approach.
Every operations team member agrees that there is no such thing as “one size fits all” in an on-premise data center. The same applies to cloud operations. The “one size fits all” concept is old and simply not applicable in the cloud hosting world. Why? Because of the breadth of cloud services and the variety of deployment architectures companies adopt.
In this article, we limit the discussion to AWS compute costs. The core principle, however, that different cost saving approaches need to be applied to different deployment setups, applies across all cloud hosting resources.
For example, in a typical company using AWS EC2 for cloud hosting, there are different environments: a few EC2 instances run production code, a few others run QA tests, a few more are used for DEV activity, and so on.
These different deployment scenarios have different operating needs, which is the primary reason we advise that a single cost optimization solution will not yield the best possible cloud savings.
Consider first the EC2 instances running production code. In most cases companies want these running 24/7, because any delay in accessing applications results in business loss, which no company wants to experience. What sort of optimization can be done in this scenario?
Right Capacity Sizing: Because these instances need to run 24/7, the first thing to check is whether they are being used to optimal capacity. Users may have provisioned an m5.large instance to host an application that uses only 50% of the CPU, which means the instance is heavily underutilized. Picking a size appropriate to your specific requirements eliminates some of this waste.
Right sizing is the most overlooked and underestimated cost optimization technique. Teams can benefit from tools like AWS Trusted Advisor to understand resource usage and provision EC2 instances appropriately.
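The right-sizing idea can be sketched in a few lines. The instance family, vCPU counts, and the 80% utilization target below are illustrative assumptions for the sketch, not official AWS figures:

```python
# A minimal right-sizing sketch: suggest the smallest instance size whose
# capacity still covers the observed load. The size table and the 80%
# utilization ceiling are assumptions for illustration.
SIZES = [
    ("m5.large",   2),   # name, vCPUs
    ("m5.xlarge",  4),
    ("m5.2xlarge", 8),
]

def recommend_size(current_size: str, avg_cpu_pct: float) -> str:
    """Return the smallest size keeping projected utilization under 80%."""
    names = [name for name, _ in SIZES]
    vcpus_used = SIZES[names.index(current_size)][1] * avg_cpu_pct / 100.0
    for name, vcpus in SIZES:
        if vcpus_used / vcpus <= 0.80:
            return name
    return current_size  # already the right size (or needs to scale up)

print(recommend_size("m5.2xlarge", 15.0))  # → m5.large
```

An 8-vCPU instance averaging 15% CPU is really using about 1.2 vCPUs, so the smallest size in the table covers it comfortably; in practice you would feed this from CloudWatch metrics or Trusted Advisor reports.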
Reserved Instances: According to AWS documentation, Amazon EC2 Reserved Instances (RIs) provide a significant discount (up to 75%) compared to On-Demand pricing. If you are sure you will need an instance for a full year, RIs are the best option and can save a good amount of money on your AWS EC2 bill.
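A back-of-the-envelope calculation shows why RIs pay off for always-on workloads. The $0.096/hour on-demand rate and the 40% RI discount below are illustrative assumptions, not current AWS pricing:

```python
# Annual savings of a Reserved Instance over On-Demand for a 24/7 workload.
# Hourly rate and discount are hypothetical, for illustration only.
HOURS_PER_YEAR = 24 * 365

def annual_savings(on_demand_hourly: float, ri_discount: float) -> float:
    on_demand = on_demand_hourly * HOURS_PER_YEAR
    reserved = on_demand * (1 - ri_discount)
    return on_demand - reserved

print(round(annual_savings(0.096, 0.40), 2))  # per instance, per year
```

Note the key assumption baked into this math: the instance runs all 8,760 hours of the year. That assumption is exactly what breaks down for DEV/QA servers, as discussed below.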
Auto Scaling groups: Another good option worth exploring is provisioning more EC2 instances when load is high and reducing active instances when load is low. AWS provides several scaling policies you can use to configure when to scale.
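The intuition behind a target-tracking scaling policy can be sketched as follows. The 60% CPU target and the min/max bounds are assumptions for the sketch; real Auto Scaling groups evaluate CloudWatch metrics and apply cooldowns on top of this:

```python
import math

# Simplified target-tracking logic: pick a desired instance count so that
# average CPU lands near a target. Target and bounds are illustrative.
def desired_capacity(current: int, avg_cpu_pct: float,
                     target_pct: float = 60.0,
                     minimum: int = 1, maximum: int = 10) -> int:
    desired = math.ceil(current * avg_cpu_pct / target_pct)
    return max(minimum, min(maximum, desired))

print(desired_capacity(current=4, avg_cpu_pct=90.0))  # scale out to 6
print(desired_capacity(current=4, avg_cpu_pct=20.0))  # scale in to 2
```

Four instances at 90% CPU carry 3.6 instances' worth of target-level load, so the group scales out; the same fleet at 20% CPU shrinks.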
A combination of “Right Capacity Sizing” and “EC2 Reserved Instances (RIs)” will help companies reduce waste in AWS EC2 spending. But can these same principles be applied to other deployment environments like DEV and QA servers? A big NO. Why?
Now consider QA/DEV or any other setup where the application is not required to run at all times. Because these EC2 instances don’t need to stay on around the clock, “Amazon EC2 RIs” are effectively out of the equation. Some might still argue that with up to 75% savings on the table, RIs are worth buying anyway. Instead, here is what actually works in this setup:
Right Capacity Sizing:
The right sizing technique applies in the QA/DEV case as well. Whatever the type of deployment, this optimization always saves money on your EC2 bill.
Turn on resources ONLY when needed:
Apart from choosing appropriate capacity, another important lever is limiting EC2 instance up time. Because these EC2 servers do not need to run continuously, the best optimization is to limit up time to only the hours they are actually needed, instead of keeping them running, whether on a schedule or always on.
Because most cloud resources are billed per second, leaving EC2 instances up and running when no one is using them does nothing to reduce the AWS bill. According to a recent RightScale study, around $10B of cloud spend was wasted in 2017 alone, and idle instances were one of the top reasons for that waste.
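It is worth putting a number on idle instances. The figures below, 20 DEV/QA instances at $0.10/hour, idle 16 hours per weekday plus full weekends, are purely illustrative assumptions:

```python
# Rough monthly cost of idle DEV/QA instances. Fleet size, hourly rate,
# and idle hours are hypothetical, for illustration only.
def monthly_idle_cost(instances: int, hourly: float,
                      idle_weekday_hours: float,
                      weekdays: int = 22, weekend_days: int = 8) -> float:
    idle_hours = weekdays * idle_weekday_hours + weekend_days * 24
    return instances * hourly * idle_hours

print(round(monthly_idle_cost(20, 0.10, 16), 2))  # dollars per month
```

Even at these modest assumed rates, the idle hours alone add up to a four-figure monthly bill for a fleet that is doing no work.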
Just as we save electricity in daily life by switching off the lights when no one is in the room, or save gas by turning off the car engine when not driving, companies can save on hosting costs by shutting down instances when no one is accessing the applications hosted on them and bringing them up ONLY when users need them.
A common practice nowadays is to use AWS EC2 schedulers to limit instance up time to specific windows, for example shutting instances off during night hours. Though these solutions trim the AWS EC2 bill somewhat, they are not optimal. Why?
In a typical company setup, no one uses these applications continuously from X AM to Y PM: meetings, lunch breaks, and the like account for around 3 to 4 idle hours per day within the scheduled window, which is still a big waste in cloud spend. On-demand cost optimization solutions like INVOKE Cloud can solve this problem; we will publish a detailed post on this topic soon, so stay tuned.
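The gap between the three strategies can be made concrete. All numbers below are illustrative assumptions: 10 instances at $0.10/hour, a fixed 12-hour weekday schedule, and roughly 8 hours per weekday of real usage for the on-demand strategy:

```python
# Monthly cost of three up-time strategies for a hypothetical DEV/QA fleet.
HOURLY = 0.10      # assumed hourly rate
INSTANCES = 10     # assumed fleet size
WEEKDAYS = 22

def monthly_cost(hours_per_weekday: float, weekend_hours: float = 0.0) -> float:
    return INSTANCES * HOURLY * (WEEKDAYS * hours_per_weekday + weekend_hours)

always_on = monthly_cost(24, weekend_hours=8 * 24)  # running all 30 days
scheduled = monthly_cost(12)                        # fixed 12h weekday window
on_demand = monthly_cost(8)                         # up only when actually used

print(always_on, scheduled, on_demand)
```

Under these assumptions a fixed schedule already cuts the bill by more than half versus always-on, and an on-demand approach removes the remaining 3 to 4 idle hours inside the scheduled window on top of that.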
Auto Scaling groups: Again, this is NOT an applicable optimization for these DEV/QA EC2 instances, because the real use case for auto scaling is handling unpredictable load and providing elasticity. Unless the requirement is to test auto scaling functionality or load handling itself, there is never a need to scale these servers up and down, so this option does not apply here either.
In summary, there is a clear distinction between optimizations that work in non-24/7 deployment setups but not in production, and vice versa. Teams need to pick the right approaches for each environment. Have more questions? Talk to us; we can help.