How to create an AWS Lambda Layer


At re:Invent 2018, a few amazing new features for Lambda were announced. One of them was Layers.

Layers let you keep your deployment package small, which makes development easier. You can avoid errors that can occur when you install and package dependencies with your function code. For Node.js, Python, and Ruby functions, you can develop your function code in the Lambda console as long as you keep your deployment package under 3 MB.

Basically, a layer is a collection of packages that you can upload once and use across your Lambda functions. That means you will not have to rebuild your Lambda each time you deploy something new; you only publish a new revision of the layer when something changes in the required packages. You can also build a layer, say the dependencies for SciPy (a Python package), and use it on every Lambda that needs SciPy.

You can add up to five layers per Lambda function. Beware, though: the order matters, since each layer will overwrite the previous layer's identical files.

In this post, I will show you how to create your own layer and how to add custom files, like Selenium drivers, to your Lambda.

1. Requirements

I strongly suggest building your Lambda environment on an Amazon Linux EC2 instance, since Lambda uses the same OS. So, first things first, let's see which resources we are going to use.

  1. An S3 bucket to save our layers
  2. An EC2 instance (t2.micro) to create our environment and save it to our S3 bucket
  3. An IAM role for the EC2 instance to access our S3 bucket

2. Creating the AWS Resources

Creating the S3 bucket

First, we are going to create the S3 bucket. Go to your AWS console and click on the S3 Service.

Click the Create Bucket button and give the bucket a meaningful name, something like my-amazing-lambda-layers. Beware that this name must be unique across all of AWS.
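If you prefer the CLI, creating the bucket is a one-liner (with your own bucket name, of course):

```sh
aws s3 mb s3://my-amazing-lambda-layers
```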

Creating the IAM Role for the EC2 instance and S3

Now we need the IAM role for EC2 to be able to access the S3 bucket and upload our layers. Log in to your console and open the IAM Service.

First, we will create a Policy that will allow access only to the bucket we just created. Click on Create Policy.

  1. Choose S3 as the service
  2. Under Actions, select the read and write options
    1. Under Read, check GetObject
    2. Under Write, check PutObject
  3. Under Resources, go to the bucket and click Add ARN
    1. Go to your S3 bucket, grab its name, and paste it in the field
  4. In the JSON view, it should look something like this
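Something along these lines, with my-amazing-lambda-layers as the example bucket:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::my-amazing-lambda-layers/*"
        }
    ]
}
```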

You can simply paste this piece of JSON into your policy, but remember to change the bucket name in Resource: instead of my-amazing-lambda-layers, it should be your bucket name.

Lastly, click on Review Policy, give it a meaningful name, and click on Create Policy.

Next, we need to create the role for our EC2 machine. Click on Roles on the left side of your screen and click on Create Role. Select EC2 and then click on Next: Permissions. Search for the policy you just created and attach it to the role. Click on Next: Tags and create any tags you want (I strongly recommend assigning tags to everything). Lastly, give the role a meaningful name and description and create it.
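The CLI equivalent is roughly this sketch (the role and policy names and account ID are placeholders; note that for EC2 you would also wrap the role in an instance profile, which the console creates for you automatically):

```sh
# trust-policy.json holds the standard trust relationship letting EC2 assume the role
aws iam create-role --role-name layer-builder-ec2 \
    --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name layer-builder-ec2 \
    --policy-arn arn:aws:iam::123456789012:policy/my-layers-policy
```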

Creating the EC2 instance

Now, we will create the EC2 instance to build our layer. Go to EC2 and click on Launch Instance. Select the Amazon Linux 2 AMI (HVM), SSD Volume Type, and then t2.micro as the Instance Type.

Select your VPC (you can use your default VPC here or, if you prefer, create a new one) and under IAM Role select the role we just created. Next, select the storage (the default of 8 GB is sufficient) and make sure Delete on Termination is checked. Finally, click on Tags, assign meaningful tags, and launch the instance. Do not forget to create a key pair so you can connect to the server through SSH.

Windows users can follow this guide to set up PuTTY.

3. Create/Build your AWS Lambda Layer

The following tutorial is for Python packages, but you can follow basically the same process to deploy packages for other languages. Some paths or commands may differ, but the general flow remains the same.

Now we have to start our EC2 instance and connect to it. To build the layer, we are going to use a Makefile that contains all the necessary commands to gather the packages, zip them, and upload them to S3.
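Connecting from Linux or macOS looks something like this (the key file and hostname are placeholders for your own):

```sh
ssh -i my-keypair.pem ec2-user@<your-instance-public-dns>
```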

First of all, we need to create a file named requirements.txt that will hold all of our requirements. Add there all the packages you will need for your Python project; we are going to install them using pip install -r requirements.txt.
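For example, a requirements.txt for a layer that bundles SciPy could look like this (the packages are just an illustration):

```
scipy
requests
```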

Now we will go through the script line by line to understand the flow, so you can adjust it to your needs.
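The script is a Makefile along these lines. This is a minimal sketch: the variable values, target names, and exact layout are illustrative (so map the line numbers below onto your own file), and remember that recipe lines must be indented with tabs:

```makefile
# --- variables used throughout the build ---
PYTHON_VERSION = python3.7
VIRTUAL_ENV    = venv
BUILD_DIR      = build
PYTHON_PATH    = $(BUILD_DIR)/python/lib/$(PYTHON_VERSION)/site-packages
LAYER_NAME     = my-lambda-layer
BUCKET         = my-amazing-lambda-layers
ZIP_FILE       = $(LAYER_NAME).zip

# run the whole pipeline with `make build`
build: package clean-packages zip upload

# create the virtualenv, install the requirements, lay out the folder
# tree Lambda expects, then copy the installed packages into it
package:
	$(PYTHON_VERSION) -m venv $(VIRTUAL_ENV)
	. $(VIRTUAL_ENV)/bin/activate && pip install -r requirements.txt
	mkdir -p $(PYTHON_PATH)
	cp -r $(VIRTUAL_ENV)/lib/$(PYTHON_VERSION)/site-packages/* $(PYTHON_PATH)/

# strip packages the layer does not need at runtime
clean-packages:
	rm -rf $(PYTHON_PATH)/pip* $(PYTHON_PATH)/setuptools* \
	       $(PYTHON_PATH)/wheel* $(PYTHON_PATH)/__pycache__

# zip the contents of the build folder (not the folder itself)
zip:
	cd $(BUILD_DIR) && zip -r ../$(ZIP_FILE) .

# upload the finished zip to the layers bucket
upload:
	aws s3 cp $(ZIP_FILE) s3://$(BUCKET)/$(ZIP_FILE)
```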

Line 1-13

We set the variables that we will use for the build. Make sure to fill in your bucket and the Python version you want.

Line 16

The build target executes the steps that build the environment and upload the zip to S3. You can also run the commands one by one if you want.

Line 18-27

We create the virtual environment, install the necessary requirements, and create the folder tree for the layer's Python packages. You can add any other packages or files you want to include, but make sure you save them to the root of the build folder (in our case, build). The Python packages will be copied into the full path, as we will see in a moment.

Line 29-30

We now copy all the packages from our local environment (the VIRTUAL_ENV variable) into the build folder, under the last folder specified by the PYTHON_PATH variable. Make sure that `$(VIRTUAL_ENV)/lib/python3.7/site-packages/` is the right path for your Python. I have Python 3.7, so python3.7 is included in the path. If you have a different version, find your virtual environment's site-packages path and correct that line.

Line 32-37

Remove any unnecessary packages from your Python build.

Line 39-40

Zip the contents of your build folder

Line 42-43

Upload the zip to your bucket

Example: Contents in the zip file

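Roughly, the layout inside the zip looks like this (scipy stands in for whatever you installed; file1 and file2 are the extra files):

```
python/
└── lib/
    └── python3.7/
        └── site-packages/
            ├── scipy/
            └── ...
file1
file2
```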

As you can see, the packages are in the folder python/…/site-packages/, and file1 and file2 are generic files I want to include in my layer. file1 and file2 could be the chromedriver for Selenium or NLTK's extra data, and their names can be anything you want.

4. Creating a Lambda Layer

To create a layer, go to your Lambda functions and click Layers on the left side of your screen. Click the Create Layer button. Under `Code entry type`, select `Upload file from Amazon S3`, grab the zipped file's URL from your S3 bucket, and paste it into `Amazon S3 link URL`. Select your runtime; for me, since I built for Python 3.7, it will be python3.7. We are done!
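If you prefer the CLI, the same step can be done with publish-layer-version (the layer name, bucket, and key below are placeholders):

```sh
aws lambda publish-layer-version --layer-name my-lambda-layer \
    --content S3Bucket=my-amazing-lambda-layers,S3Key=my-lambda-layer.zip \
    --compatible-runtimes python3.7
```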


Now you only need to attach the layer to your Lambda function. Go to your Lambda function, click on Layers, and select your layer.

In Lambda, the contents of this zip can be found in the /opt folder.
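For example, here is a quick way to see this from inside your function (file1 is the hypothetical custom file from earlier):

```python
import os

def lambda_handler(event, context):
    # Layer zips are extracted under /opt. The python/.../site-packages
    # folder is added to sys.path, so layer packages import normally.
    print(os.listdir("/opt"))  # e.g. ['python', 'file1', 'file2']
    # Custom files from the layer are reachable by absolute path:
    return {"file1_exists": os.path.exists("/opt/file1")}
```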

That's it for today, folks! If you have any suggestions, please let me know in the comment section below or send me a tweet at @siaterliskonsta. If you think I am missing something or you need more explanation, please let me know.

Have a good day.
