We’ve started working with a new client, and they’re looking at developing a new system architected on AWS Lambdas, primarily through an API Gateway. One of my first concerns with this was around the development experience - I like being able to run things locally, and I don’t want to be deploying a bunch of services every time I want to run something.

In this series of posts I’m going to cover how to set up a local development environment for the project.
The goal is to be able to run everything locally - invoking Lambdas, running them through an API Gateway, and also using AWS Services without deploying any.

Further through the series I’ll also cover streamlining the process. I like things to be easy to run, preferably with a single command, and I also want it to be as easy as possible for new team members to pick up.

I’ll be adding the code to my Blog Posts repo on Github - the code for this post can be found here.

The code for each part of this mini-series will be set up in a separate folder, making it easy to follow.

Part 1

The first step we needed to tackle was to invoke the functions locally.

We decided to use the Serverless Framework as a few of our colleagues have some experience with it. Fortunately, Serverless gives us an easy way to invoke lambdas locally, for all of the runtimes we wanted to support: NodeJS, Python and Java.

First things first, we need a few lambdas. As these posts aren’t about writing the code itself, I won’t go into detail here. If you’ve checked out my code from Github, you’ll see that I’ve created Lambdas for all three of the above mentioned runtimes.

To make this easier, I’ve created the Lambdas using the Serverless CLI, by running the serverless create command along with the templates aws-python3, aws-nodejs and aws-java-maven.

(If you don’t have Serverless, you can install it by running npm install -g serverless, and then run serverless --version to check it’s installed OK. If you don’t have NPM, you can install NodeJS here)
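
For reference, the create commands take roughly this shape - the --path values are just placeholder folder names, so adjust them to match however you want the project laid out:

serverless create --template aws-python3 --path hello-python
serverless create --template aws-nodejs --path hello-node
serverless create --template aws-java-maven --path hello-java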

I then removed all the unnecessary comments etc to keep it as simple as possible.

These Lambdas do nothing special, simply printing out Hello and the name provided in the event.

For example, this is how the Python Lambda started:

import json

def hello(event, context):
    name = event.get("name", "")  # use .get so a missing name doesn't raise a KeyError
    print('Hello {0}'.format(name))

So now we have three lambdas, using different runtimes.

We now need to set up our serverless.yml file. This will define our functions, and any information around them that we wish to set - such as the timeout length, memory size, log retention length etc.
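
To give an idea, optional attributes like these can sit alongside the handler - we don’t set any of them in this post, so treat this as a sketch of the kind of settings available rather than part of our actual definitions:

functions:
    hello:
        handler: handler.hello
        timeout: 10      # seconds before the invocation is cut off
        memorySize: 256  # MB allocated to the function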

Using this information, the Serverless CLI can invoke the functions on your AWS account, provided your credentials are set correctly, or you can invoke them locally. Serverless can also be used to deploy your Lambdas.
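
For orientation, the remote equivalents are the standard Serverless commands - shown only as a sketch here, since we won’t actually deploy anything in this post:

serverless deploy
serverless invoke --function hello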

The definitions are very basic right now, just defining the runtime we’re using (Python, Node and Java - and their versions) and the handler for the lambda function.

For example, the Python yml:

service: HelloPython
provider:
    name: aws
    runtime: python3.6
functions:
    hello:
        handler: handler.hello

The only difference between this and the NodeJS Lambda is the runtime and name of the service. The Java definition differs slightly as we have to tell Serverless where to find the compiled JAR file:

service: HelloJava
provider:
    name: aws
    runtime: java8
package:
    artifact: target/hello-dev.jar
functions:
    hello:
        handler: com.serverless.Handler

Now we want to configure Serverless to use environment variable files.

Given we’re invoking things locally, and you might want to deploy these at some point, it’s useful to be able to switch out values that will change - this will be needed in the next part of this series.

In the serverless.yml file, we want to edit the provider section: specifically, we add an environment attribute, along with a stage attribute that environment uses to determine which file to load.

Our Python yml now looks like this:

service: HelloPython
provider:
    name: aws
    runtime: python3.6
    stage: local
    environment: ${file(env.${self:provider.stage}.yml)}
functions:
    hello:
        handler: handler.hello

As you can see, we’re telling Serverless to load values from a file, and it finds that file based on the stage variable defined elsewhere in the yml. I’ve set mine to local, so it will look for an env.local.yml file; if you change the stage to dev, it will use env.dev.yml instead and you’ll see different output.

I’ve modified the Lambdas to print out which environment you’re running on, as well as the Hello message. For example, with local you’ll see The running environment is local printed out.

You can see the new code for the Python Lambda here:

import json
import os

def hello(event, context):
    running_env = os.getenv('RUNNING_ENVIRONMENT')
    name = event.get("name", "")
    print('The running environment is {0}'.format(running_env))
    print('Hello {0}'.format(name))
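
For completeness, the matching env.local.yml can be as small as a single entry - this is a sketch rather than the exact file from the repo, with RUNNING_ENVIRONMENT as the only variable:

RUNNING_ENVIRONMENT: local

The env.dev.yml file would then look the same, just with the value set to dev.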

As mentioned before, the only real difference between the runtimes is that, because Java is a compiled language, the Java Lambda requires the package property to point to the packaged JAR file.

This also means you need to build the Java code before you can invoke it. If you’re unfamiliar with Java like I am, you might find this tutorial on getting started with a build useful.
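
With the aws-java-maven template the build is typically just the standard Maven package step (assuming you have Maven installed), which produces the JAR that the package section of the yml points at:

mvn clean package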

Once you’ve built the Java project, you can now try invoking each function locally.

You can open the terminal in the same directory as the serverless.yml file, and run the following commands:

serverless invoke local --function PythonFunction

serverless invoke local --function NodeFunction

serverless invoke local --function JavaFunction

These commands invoke each function in turn, proving that all three runtimes can be invoked locally this way.

You’ll notice that each function prints the two lines mentioned earlier.

First, we’re printing out The running environment is local - this was just to show the environment variables working. If you change the value of RUNNING_ENVIRONMENT in the env.local.yml file, you’ll see the change in output.

The second line is just Hello with no name after it, because we haven’t passed any event data yet.

Let’s start passing in some event data and outputting values based on that.

Event data can contain whatever you want, but it would usually hold information about how and why the Lambda is being invoked - for templates of events from different sources, check here. For the sake of simplicity, I’ve added an event.json file to the root of the project which simply defines a name property for the Lambdas to use:

{
    "name": "Ian"
}

Now we want to invoke the functions again, but this time we want to supply the path to the event data file, like so:

serverless invoke local --path event.json --function PythonFunction

serverless invoke local --path event.json --function NodeFunction

serverless invoke local --path event.json --function JavaFunction

Running these commands now, you’ll see Hello Ian printed out by each one!

(Well, the Java Lambda outputs a bit more as it’s set to return an ApiGatewayResponse in the boilerplate, and I didn’t see the point in changing it for this example)

And that’s it for the first part of this mini-series!

In the next part I want to cover how to expand on what we’ve done here to be able to run the lambdas locally through API Gateway.

As always, you can find the code for this post here.