Are Your Cloud Infrastructure Costs Out of Control?

A pretty interesting article came across my LinkedIn feed the other day that I thought I'd share:

http://segment.com/blog/the-million-dollar-eng-problem/

It's a great explanation of how some very smart folks at Segment significantly cut their AWS bill with three months of serious detective work and engineering effort. It's an impressive piece of work, and I think one of the best points in the article is that cost efficiency should be considered from the start, so problems never arise in the first place. Ongoing attention is still required, of course, but building efficiency (in terms of cloud infrastructure costs) into the default mode of building, deploying, and running applications is key. That's great for new development, but it doesn't help much with existing applications.

This did get me thinking, though. Segment, as a startup, has built its business on a single SaaS product (complex though it may be). But what if you're a bank with hundreds of applications in ongoing development, and you're migrating them to cloud and containers for cloud-native capability? You'll most likely run into similar problems, but putting three months of detective and engineering work into each application simply isn't going to scale. How do you even know which application to start with to make the best use of your resources?

Now, the Segment folks are right: no vendor's tool is going to solve every cloud infrastructure cost efficiency issue. Many of those issues are specific to the application and require deep expertise. However, if you're running applications in containers on Kubernetes or another container runtime, the appLariat Container Automation Platform can help you keep cloud infrastructure costs down in some fundamental ways that have nothing to do with the details of your application. With appLariat you get policy-based control of your deployments and your Kubernetes cluster. That means you can start and stop application deployments, and scale your Kubernetes clusters up and down, in an automated fashion, so you don't pay for infrastructure when it isn't being used. You can also scale staging and production instances of your applications up and down based on load (adjusting your cluster size to match), so you aren't paying for a peak-load deployment around the clock. One of our initial customers reduced their staging environment in Amazon from 129 100%-reserved nodes to 65 nodes (50% reserved / 50% on-demand), cutting their AWS bill nearly in half while maintaining quality of service.
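If you're curious what that kind of scheduling policy looks like under the hood, here's a minimal sketch using the official Kubernetes Python client that scales a staging deployment to zero outside business hours. To be clear, this isn't appLariat's implementation; the namespace, deployment name, replica counts, and schedule are made up for illustration:

```python
# Minimal sketch: schedule-based scale-down of a staging deployment,
# using the official Kubernetes Python client (pip install kubernetes).
# The namespace, deployment name, and business-hours window below are
# hypothetical placeholders.
from datetime import datetime

from kubernetes import client, config

NAMESPACE = "staging"          # hypothetical namespace
DEPLOYMENT = "checkout-api"    # hypothetical deployment
BUSINESS_HOURS = range(8, 18)  # full capacity 08:00-17:59 local time
PEAK_REPLICAS = 4
OFF_REPLICAS = 0               # pay nothing for idle staging pods


def desired_replicas(now: datetime) -> int:
    """Weekday business hours get full capacity; nights and weekends get none."""
    if now.weekday() < 5 and now.hour in BUSINESS_HOURS:
        return PEAK_REPLICAS
    return OFF_REPLICAS


def main() -> None:
    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    apps = client.AppsV1Api()

    replicas = desired_replicas(datetime.now())
    # Patch only the scale subresource, leaving the rest of the spec alone.
    apps.patch_namespaced_deployment_scale(
        name=DEPLOYMENT,
        namespace=NAMESPACE,
        body={"spec": {"replicas": replicas}},
    )
    print(f"Scaled {NAMESPACE}/{DEPLOYMENT} to {replicas} replicas")


if __name__ == "__main__":
    main()
```

Run something like this on a schedule (cron or a Kubernetes CronJob), pair it with a cluster autoscaler so the emptied nodes are actually released, and you stop paying for idle staging capacity overnight and on weekends. The point of a platform like appLariat is that these policies are managed for you across all your applications rather than living in one-off scripts.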

Of course, if you use appLariat to control application deployments and cluster size from development through production with policy-based automation, you'll also be able to see which applications contribute the most to your cloud infrastructure bill via the product's capacity and costing dashboards. You'll then know where to dedicate engineering resources to investigate and resolve application-specific issues, just as Segment did.
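For a back-of-the-envelope sense of how per-application cost attribution can work (this is an illustration of the general idea, not the product's actual accounting), you can price each application's resource requests against assumed hourly rates:

```python
# Back-of-the-envelope cost attribution: price each application's CPU
# and memory *requests* against assumed hourly rates. The rates here
# are hypothetical; a real dashboard would use actual billing data and
# measured usage rather than requests.
from collections import defaultdict

from kubernetes import client, config

CPU_RATE = 0.024   # assumed $/vCPU-hour
MEM_RATE = 0.003   # assumed $/GiB-hour


def parse_cpu(q: str) -> float:
    """Convert a Kubernetes CPU quantity ('500m' or '2') to vCPUs."""
    return float(q[:-1]) / 1000 if q.endswith("m") else float(q)


def parse_mem_gib(q: str) -> float:
    """Convert a memory quantity ('512Mi', '2Gi') to GiB (common suffixes only)."""
    units = {"Ki": 1 / (1024 ** 2), "Mi": 1 / 1024, "Gi": 1.0}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q) / (1024 ** 3)  # plain bytes


def main() -> None:
    config.load_kube_config()
    pods = client.CoreV1Api().list_pod_for_all_namespaces().items

    hourly = defaultdict(float)
    for pod in pods:
        # Group by the conventional "app" label; unlabeled pods get lumped together.
        app = (pod.metadata.labels or {}).get("app", "<unlabeled>")
        for c in pod.spec.containers:
            req = (c.resources.requests or {}) if c.resources else {}
            hourly[app] += parse_cpu(req.get("cpu", "0")) * CPU_RATE
            hourly[app] += parse_mem_gib(req.get("memory", "0")) * MEM_RATE

    for app, cost in sorted(hourly.items(), key=lambda kv: -kv[1]):
        print(f"{app:30s} ~${cost * 24 * 30:,.2f}/month")


if __name__ == "__main__":
    main()
```

Requests are a crude proxy for real spend, but even a rough ranking like this tells you which applications deserve Segment-style deep investigation first.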
