Migrating REST APIs to AWS Serverless

Our in-house AWS expert, Brett Andrews, shares how to migrate your existing REST APIs to AWS, as well as how to evolve an application over time to be more cloud-native and leverage the entire AWS and Serverless ecosystem.

One of the most common use cases of Serverless architecture is serving REST APIs with Amazon API Gateway and Lambda. In this article we’ll cover how to migrate your existing REST APIs to AWS, resulting in saved costs, reduced operational overhead, “infinite” scaling, and more. We’ll then go a step further and see how we can evolve our application over time to be more cloud-native and take advantage of the entire AWS and Serverless ecosystem.

AWS provides tools such as aws-serverless-express and aws-serverless-java-container that make migrating Node.js and Java REST APIs a breeze. aws-serverless-express is framework agnostic (you’d be forgiven for thinking otherwise), which means it works not only for Express, Koa, Hapi, and Sails, but also for vanilla Node.js HTTP servers. aws-serverless-java-container likewise supports a large number of frameworks, including Spring, Spring Boot, Apache Struts, Jersey, Spark, and Micronaut.

Let’s take a basic Express application:

Now, your application is likely to be significantly more complex than this contrived example, but the migration process will be similar for applications of any size. However, there are limitations to consider. If your application isn’t stateless (that is, you store state/data on the server), you’ll need to move that state elsewhere (thankfully, AWS offers plenty of services that take care of this for you).

To prepare our application for Lambda, we need to do two things. First, replace the `app.listen(3000)` line with `module.exports = app` (Lambda doesn’t let you run on ports like this). Next, we need to create our Lambda handler, which is a thin wrapper like this:

This is all we need to do to get our code Serverless-ready. Now let’s get it online. We’ll use the Serverless Framework tool to define our infrastructure as code and deploy to AWS. Create a `serverless.yaml` file in your project with the following:
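A minimal configuration might look like the following (the service name, region, and runtime shown are assumptions; adjust them for your project):

```yaml
service: express-app

provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
  memorySize: 256

functions:
  express:
    handler: lambda.handler
    events:
      - http:
          path: /
          method: ANY
      - http:
          path: /{proxy+}
          method: ANY
```

The two `http` events route the root path and every sub-path (via the greedy `{proxy+}` parameter) through API Gateway to our single Express function.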

Make sure you’ve set up your AWS credentials before continuing.

Now simply run `npx sls deploy` to deploy your Express app to Lambda. Once complete, the command will output some HTTP endpoints that allow you to take your new Serverless Express app for a spin! With just these few steps we’re able to take advantage of some of what AWS has to offer, including worry-free infrastructure, auto-scaling, and pay-for-what-you-use.

We could just leave it there and be happy with the improvements we’ve gained. However, there’s so much more to take advantage of in the AWS ecosystem. Let’s look at how API Gateway enables us to use the strangler pattern to migrate pieces of our application away from a single monolithic Express application into their own Lambda Functions.

Let’s say we’ve noticed that our `/admin` endpoint requires elevated permissions that the rest of our application doesn’t need and that our logic for creating users requires more CPU or memory than the rest of our application. Because we’re security, cost, and performance-focused people, we can split these into separate Lambda Functions: one that handles all of the `/admin` operations, and the other that deals only with creating users. First, let’s update our API Gateway endpoints in `serverless.yaml`:
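An updated template might look like this (the `serverless-iam-roles-per-function` plugin and the exact paths and values shown are assumptions based on the scenario described here):

```yaml
service: express-app

package:
  individually: true

plugins:
  - serverless-iam-roles-per-function

provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1
  memorySize: 256

functions:
  express:
    handler: lambda.handler
    events:
      - http:
          path: /
          method: ANY
      - http:
          path: /{proxy+}
          method: ANY
  createUser:
    handler: create-user/lambda.handler
    memorySize: 3008
    events:
      - http:
          path: /users
          method: POST
  admin:
    handler: lambda.handler
    iamRoleStatements:
      # Broad for now; should be scoped down to specific tables and actions
      - Effect: Allow
        Action: 'dynamodb:*'
        Resource: '*'
    events:
      - http:
          path: /admin/{proxy+}
          method: ANY
```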

Since we’ll now have multiple Lambda Functions, it’s a good idea to package them individually for performance reasons, so we’ve instructed Serverless to do so with `package.individually: true`. We’ve also added our first Serverless Framework plugin. By default, all Lambda Functions defined in a Serverless template share a common IAM role, which isn’t ideal for security. This particular plugin allows us to define IAM permissions at the individual function level. Finally, we’ve added our two new Lambda Functions and connected them to API Gateway. Let’s take a closer look at each:

For the `createUser` function, we’ve specified a handler of `create-user/lambda.handler` and told it to listen on the `POST /users` endpoint, which takes priority over the generic `{proxy+}` endpoint we defined earlier.

We’ve also increased the `memorySize` from the default we set of `256` to the maximum Lambda allows of `3008`. Lambda doesn’t have an option for increasing processing power directly; rather, as the Lambda docs explain, “Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent of one full vCPU (one vCPU-second of credits per second).”

Now we need to create our new Lambda function logic dedicated to creating a user. We’ll assume we have the core of this logic defined in a controller, as is best practice in Express:

For the `admin` function, we’ve added our elevated permissions that grant it complete access to DynamoDB. You should always scope your roles down as tightly as possible (it’s unlikely even an admin panel needs the ability to drop tables), but in this scenario, we would be able to remove those elevated permissions from our main `express` function, which I consider a win. I’m a huge fan of iterative improvements; we can always scope down our `admin` function’s permissions further in the future.

You may notice we’re reusing the same `lambda.handler` for our `admin` function that we’re using with our main `express` function. This enables us to use the same code deployed to a new function with a different configuration. In the future, we could iterate on this by extracting the admin panel into its own Express app (reducing code and improving performance and security) or even refactor it to a lightweight framework built specifically for Lambda such as Jeremy Daly’s lambda-api.

If you have any questions or want to learn more, get in touch or follow me over at @AWSbrett.

Brett Andrews, Staff Software Engineer at Wizeline


Posted by Nellie Luna on May 13, 2020