This is part 4, the final part of my Serverless mini-series. You can find the previous parts here:
Part 1
Part 2
Part 3

As I mentioned in the previous posts, I’ve been working on a local development environment for a serverless architecture.
In the first post we covered invoking Lambdas locally using the Serverless framework, and we did this for multiple different runtimes.
The second post followed up on this by introducing AWS SAM Local for running our Lambdas locally as an API Gateway, as Serverless Framework didn’t support this.
The third post added in LocalStack, so that we can mimic AWS services locally without needing anything to be deployed.
In this post we’re going to streamline the dev environment using Docker Compose so that we can run things with a single command.
As always I’ve added the code for this post on Github, you can find the code for the series here.

Part 4

The problem I have with what’s been accomplished so far in the series is that it involves multiple steps, and because those steps are blocking processes, it also requires multiple terminals.
The goal of this post is to get this down to a single command to run everything.

In order to do this, we’re going to need Docker Compose, at least version 1.18.0.
You can check your version using the command docker-compose -v; if you don’t have it installed, you can follow the instructions here.

I’ve copied over the code from the previous post, so we already have an API with two endpoints defined in a Serverless template. As per the previous post, we can generate a SAM template from the Serverless template.
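As a reminder from the previous post, that export uses the serverless-sam plugin (this assumes the plugin is already installed in the project):

```shell
# Generate a SAM template from serverless.yml using the serverless-sam plugin
serverless sam export -o template.yml
```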

As we already know, LocalStack is available as a Docker image, so the first thing we need is an image for AWS SAM Local, and then a Docker Compose file to spin up both services as one.
Fortunately, someone has already done the work of publishing a Docker image we can use; you can find it here.
For LocalStack, we need to define the ports we’ll be binding so that the services are reachable outside the container, and the environment variables that tell LocalStack which services to start.
AWS SAM Local also needs a ports section, plus a volumes section - this mounts the current directory into the container so that our code is available, and docker.sock so that containers can be started from inside the AWS SAM container. The final piece for this service is a command to start AWS SAM Local when the container starts.
The compose file should look like this now:

version: "3.5"

services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4582:4567-4582"
      - "8080:8080"
    environment:
      - SERVICES=s3,dynamodb
  api:
    image: cnadiminti/aws-sam-local
    links:
      - localstack
    ports:
      - "3000:3000"
    volumes:
      - $PWD:/var/opt
      - /var/run/docker.sock:/var/run/docker.sock
    command: local start-api --docker-volume-basedir "$PWD" --host 0.0.0.0

You can now run the command docker-compose up to start our services; you’ll see output from both services interleaved. Once you see that the endpoints have been started and that LocalStack is ready, navigate to either http://localhost:3000/hello or http://localhost:3000/goodbye in your browser to hit our endpoints.

However, if you do so, you’ll notice we get a connection error:

('Connection aborted.', OSError(99, 'Cannot assign requested address')): ConnectionError

Our services are having trouble communicating with each other. To fix this we need to make two changes.

First, we need to set up a network that our services will share, and pass it through to AWS SAM Local so that the containers it creates also know how to reach it.
Docker Compose creates a default network automatically, but it’s tricky to pass that through to the AWS SAM container by name when we don’t control the name. Instead, we add a networks section to the compose file where we can specify a custom network and network name. On each service we then add a networks config, using the custom network’s identifier.
The next part is to pass the name of the network in the command of the AWS SAM service.
NOTE - the identifier and the name are different. Within the compose file you reference the network by its identifier; accessing Docker from anywhere else, we use the name.
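Once the stack is up, you can see this distinction from the Docker CLI - it’s the name we chose, not the compose identifier, that Docker reports:

```shell
# Lists custom_network (the name), not compose_network (the identifier)
docker network ls

# Inspect the network by name to see which containers are attached
docker network inspect custom_network
```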

Our compose file should now look like this:

version: "3.5"

networks:
  compose_network:
    name: custom_network

services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4582:4567-4582"
      - "8080:8080"
    environment:
      - SERVICES=s3,dynamodb
    networks:
      - compose_network
  api:
    image: cnadiminti/aws-sam-local
    links:
      - localstack
    ports:
      - "3000:3000"
    volumes:
      - $PWD:/var/opt
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - compose_network
    command: local start-api --docker-volume-basedir "$PWD" --host 0.0.0.0 --docker-network custom_network

The next thing we need to do is change the URLs for DynamoDB and S3 in our environment config files. Because our services now share a network, we can reference them by name, so the URLs become http://localstack:4572 for S3 and http://localstack:4569 for DynamoDB. Don’t forget that now we’ve changed the environment variables, we’ll need to re-run the command to export the SAM template!
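As a rough sketch - the exact shape depends on how you structured your environment config files in the previous post - the change amounts to swapping localhost for the localstack hostname:

```yaml
# Hypothetical environment config - adapt the keys to your own files
s3:
  endpoint: http://localstack:4572   # was http://localhost:4572
dynamodb:
  endpoint: http://localstack:4569   # was http://localhost:4569
```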

If you now run the docker-compose up command and hit the endpoints, you’ll see everything working!
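You can also check from the terminal with curl; the exact response body depends on what your handlers return:

```shell
curl http://localhost:3000/hello
curl http://localhost:3000/goodbye
```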

I want to make one last change just to make things easier to run. It currently takes two commands to get everything working - one to export the template, and one to run Docker Compose. To fix this, I created a Makefile with two targets. The init target runs npm install for the project and docker-compose build - this pulls down any required Docker images and builds any Dockerfiles of your own. The second target, run, exports our SAM template and runs Docker Compose. So our Makefile looks like this:

init:
	npm install
	docker-compose build

run:
	serverless sam export -o template.yml
	docker-compose up

Now we can use make init to get the project set up from scratch, and we can use make run to export the template and run the services. This makes things much easier, especially for a new developer being onboarded on to the project.

That’s as far as I’ve gotten with setting up a local serverless dev environment! Over these posts we’ve covered using Serverless to invoke Lambdas locally, and AWS SAM Local to run them as an AWS API Gateway locally. We’ve also introduced LocalStack so that we can mimic AWS services locally without needing to deploy anything. On top of that, we’ve seen how to use Docker Compose and a basic Makefile to really streamline the entire process.

As always, you can find the code for this post here.