Engineering

Good Things Ahead for Serverless and Containers


Have you ever spent a week in Las Vegas without attending a single show or concert, or placing a single bet at the casino? It’s odd, I know. But this is how I felt after attending AWS re:Invent a couple of months ago, flying home to rest and absorb all the new services and features announced.

If you’re curious, you can find the keynote videos and summaries here. I want to share some key takeaways from the two tracks I followed closely, serverless and containers, two areas that have been circling in my mind and that will impact my work in the coming months.

Serverless

If you are a serverless enthusiast like me, you probably know that a lot of announcements were made before re:Invent. But it was the following releases that caught my attention during the conference, and here’s why.

AWS Lambda Provisioned Concurrency

Before, if you wanted to optimize a Lambda function, you could only tweak your code and the execution environment. Now you can also tune the Lambda service itself: with Provisioned Concurrency, the function keeps a minimum number of warm execution environments ready to take the workload, which reduces cold starts and keeps latency consistent between requests. You can check the new Lambda pricing page for the cost associated with Provisioned Concurrency.
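To give a rough idea of what enabling this looks like, here is a minimal sketch using boto3; the function name, alias, and concurrency value are placeholders, not part of the announcement:

```python
import boto3

lambda_client = boto3.client("lambda")

# Provisioned Concurrency is configured on a published version or alias,
# not on $LATEST. "orders-api" and "live" are hypothetical names.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="orders-api",
    Qualifier="live",
    ProvisionedConcurrentExecutions=10,  # warm environments kept ready
)

# Check the status of the configuration as the environments warm up.
status = lambda_client.get_provisioned_concurrency_config(
    FunctionName="orders-api",
    Qualifier="live",
)
print(status["Status"])  # e.g. "IN_PROGRESS", then "READY"
```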

Amazon RDS Proxy (preview)

If you have had to work with Lambda and RDS in the past, you know the hassle of dealing with MySQL libraries that don’t play well with ephemeral environments such as Lambda. RDS Proxy comes to the rescue by managing a pool of database connections and handling authentication with the database, improving both performance and security. No doubt AWS is aiming for the same kind of integration between RDS and Lambda that exists today with DynamoDB. RDS Proxy currently works with MySQL, but we expect PostgreSQL support to follow soon.
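From the function’s point of view, you simply point your MySQL client at the proxy endpoint instead of the database itself. A minimal sketch with PyMySQL, assuming a hypothetical proxy endpoint and credentials passed in through environment variables:

```python
import os
import pymysql

# Hypothetical RDS Proxy endpoint and credentials; in practice you would
# fetch credentials from Secrets Manager or use IAM database authentication.
PROXY_ENDPOINT = os.environ["RDS_PROXY_ENDPOINT"]
DB_USER = os.environ["DB_USER"]
DB_PASSWORD = os.environ["DB_PASSWORD"]


def handler(event, context):
    # The proxy holds the pool of connections to the database, so each
    # Lambda invocation can open and close its own connection cheaply.
    connection = pymysql.connect(
        host=PROXY_ENDPOINT,
        user=DB_USER,
        password=DB_PASSWORD,
        database="orders",  # placeholder schema name
        connect_timeout=5,
    )
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT COUNT(*) FROM orders")
            (count,) = cursor.fetchone()
        return {"orders": count}
    finally:
        connection.close()
```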

Containers

AWS keeps expanding its container offerings, proving that microservices architectures and their orchestration are hugely popular and not going anywhere anytime soon. There were several announcements around Elastic Container Service (ECS).

ECS Capacity Providers

This announcement could easily go unnoticed, but it is what enables a couple of the other new ECS features. With capacity providers, you can define rules for how your workloads are placed across EC2 and Fargate.
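As a sketch of the idea (the cluster name is a placeholder), this is roughly how you would associate the AWS-managed Fargate capacity providers with a cluster and set a default strategy using boto3:

```python
import boto3

ecs = boto3.client("ecs")

# "demo-cluster" is hypothetical. FARGATE and FARGATE_SPOT are the
# AWS-managed capacity providers; EC2-backed ones are created from an
# Auto Scaling Group (see the Cluster Auto Scaling section below).
ecs.put_cluster_capacity_providers(
    cluster="demo-cluster",
    capacityProviders=["FARGATE", "FARGATE_SPOT"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "weight": 1},
    ],
)
```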

Fargate Spot

That’s right: thanks to capacity providers, we can now deploy our containers with Fargate at a lower price (up to a 70% discount). It’s the same idea behind EC2 Spot, where we pay less for using AWS’s idle infrastructure, but this time for container workloads. Be careful, though: you’ll likely want to use Fargate alongside Fargate Spot to maintain the availability of your services. More on that here.
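One way to combine them is to give the service a capacity provider strategy that keeps a baseline of tasks on regular Fargate and places the extra, interruption-tolerant tasks on Fargate Spot. A rough sketch, with the cluster, service, task definition, and networking values as placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# All names, subnets, and security groups below are hypothetical.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="web",
    taskDefinition="web:1",
    desiredCount=6,
    capacityProviderStrategy=[
        # Keep at least 2 tasks on regular Fargate for availability...
        {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
        # ...and place the remaining tasks mostly on cheaper Fargate Spot.
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```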

Amazon ECS Cluster Auto Scaling 

If you have ever scaled EC2 instances based on the resource requirements of your ECS services, you know the pain: setting up Auto Scaling Group scaling policies was hard and error-prone. With the release of capacity providers, the management of compute resources is now done within ECS, which can even scale the cluster to zero. This article explains how the feature works.
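Under the hood, you wrap your Auto Scaling Group in a capacity provider with managed scaling enabled and let ECS keep it at a target utilization. A minimal sketch, assuming a hypothetical Auto Scaling Group ARN:

```python
import boto3

ecs = boto3.client("ecs")

# The Auto Scaling Group ARN is a placeholder; managed scaling lets ECS
# adjust the group's desired capacity based on the tasks waiting to be placed.
ecs.create_capacity_provider(
    name="ec2-capacity",
    autoScalingGroupProvider={
        "autoScalingGroupArn": (
            "arn:aws:autoscaling:us-east-1:123456789012:"
            "autoScalingGroup:uuid:autoScalingGroupName/demo-asg"
        ),
        "managedScaling": {
            "status": "ENABLED",
            "targetCapacity": 100,  # aim to keep the group fully utilized
            "minimumScalingStepSize": 1,
            "maximumScalingStepSize": 100,
        },
        # Requires new-instance scale-in protection on the Auto Scaling Group.
        "managedTerminationProtection": "ENABLED",
    },
)
```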

Serverless + Kubernetes

Two years after the promise that Fargate would work with Kubernetes, we can finally deploy pods on AWS without managing EC2 servers!

Deploying and operating a Kubernetes cluster is a headache, even with the help of managed services like EKS. With Fargate, we can remove the operational burden of managing a fleet of servers and deploy applications in a serverless fashion, while still using the same APIs and tools for managing Kubernetes objects. You can learn more here.
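The main new building block is the Fargate profile, which tells EKS which pods (selected by namespace and, optionally, labels) should be scheduled onto Fargate. A sketch with boto3, where the cluster name, pod execution role, and subnets are placeholders:

```python
import boto3

eks = boto3.client("eks")

# Cluster name, pod execution role, and subnets below are hypothetical.
eks.create_fargate_profile(
    fargateProfileName="default-namespace",
    clusterName="demo-eks",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-execution",
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    selectors=[
        # Any pod created in the "default" namespace is scheduled on Fargate;
        # selectors can also match pod labels.
        {"namespace": "default"},
    ],
)
```

Once the profile is active, matching pods land on Fargate and the rest of your kubectl workflow stays the same.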

I can’t wait to see what else is in store for serverless and container technology. If you want to learn more about these tools, contact me or fill out this form to chat with one of our experts.

By Eloy Vega Castillo, Site Reliability Engineer at Wizeline


Posted by Nellie Luna on February 19, 2020