AWS Lambda + Node Modules, no Docker required
This article outlines the process, and some of the gotchas, of automating image compression by integrating third-party Node modules with Amazon’s S3 and Lambda services. The code for my Lambda function is at the bottom; I hope it saves you some of the time I lost working it all out!
I’m working on a very image-heavy website where users upload lots of their own content. As any web developer knows, large, slow-loading images are the quickest way to make your website feel slow and destroy the UX, no matter which lightning-fast front-end framework or cloud content delivery network you have.
I decided to use AWS Lambda to automate the process of compressing the images. Lambda is a pay-as-you-go service that lets you run code without provisioning or managing servers, and is an extremely cost-effective way of taking load off your own servers.
My image compression pipeline
- The images are uploaded to an S3 bucket containing the full-size images
- This fires an event notification that tells the Lambda function a new object has been uploaded
- The Lambda function loads the new object and compresses it using mozjpeg
- It is then re-uploaded to a bucket containing compressed images that are used by the website
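The pipeline above can be sketched as a Lambda handler. This is an illustration, not my exact function: the S3 client and the compression step are injected as parameters so the flow is easy to follow and test. In the real function, `s3` would be `new AWS.S3()` and `compress` would be something like `(buf) => imagemin.buffer(buf, { plugins: [imageminMozjpeg({ quality: 75 })] })`; the destination bucket name is a placeholder.

```javascript
const DEST_BUCKET = 'my-site-images-compressed'; // placeholder bucket name

// S3 event notifications deliver object keys URL-encoded, with spaces as '+'
function keyFromRecord(record) {
  return decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
}

function makeHandler(s3, compress) {
  return async function handler(event) {
    const processed = [];
    for (const record of event.Records) {
      const srcBucket = record.s3.bucket.name;
      const srcKey = keyFromRecord(record);
      // 1. Load the newly uploaded full-size image
      const original = await s3.getObject({ Bucket: srcBucket, Key: srcKey }).promise();
      // 2. Compress it
      const compressed = await compress(original.Body);
      // 3. Re-upload it to the bucket of compressed images
      await s3.putObject({
        Bucket: DEST_BUCKET,
        Key: srcKey,
        Body: compressed,
        ContentType: 'image/jpeg',
      }).promise();
      processed.push(srcKey);
    }
    return processed;
  };
}

module.exports = { makeHandler, keyFromRecord };
```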
Using node modules in Lambda
Most real-life applications for Lambda functions are likely to require third-party libraries. In my case I was using common libraries, such as async, as well as the less common imagemin and imagemin-mozjpeg. To use third-party libraries in a Lambda function, you simply npm-install (or equivalent) them into the project as you would with any other, zip the folder containing all your files, and upload it directly to Lambda using the console.
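In other words, the workflow would look something like this (the zip file name is a placeholder; the package names are the ones mentioned above):

```shell
# Install dependencies locally, then zip everything up for the Lambda console
npm install async imagemin imagemin-mozjpeg
zip -r my-lambda-fn.zip index.js package.json node_modules
```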
Or so I thought…
Some modules are pickier than others
The biggest stumbling block I faced during this AWS journey was when I tried to execute my Lambda function with imagemin and mozjpeg. I triggered the function by uploading an image to the initial S3 bucket, to be faced by this message in the logs:
Not too descriptive…
After a little digging I found the problem: I was installing the modules on my Windows machine and then uploading them to Lambda, which runs on Linux (Amazon Linux, to be exact).
While some modules don’t care which OS installs them, others do. The Windows-generated binaries created when I installed the imagemin modules were not compatible with Lambda’s Linux environment.
Where Docker comes in (or doesn’t in this case)
One solution is to create a Docker container running Linux and install the modules from inside it. There is a handy Amazon Linux image that can be used if this is the route you’d like to go down:
>> docker pull amazonlinux
>> docker run -v $(pwd):/my-lambda-fn -it amazonlinux
One drawback for Windows users is that Docker doesn’t support Windows 10 Home edition. That means lowly Home edition users like myself are relegated to the legacy Docker Toolbox. I decided to find another solution, and that’s when I thought of Amazon CodeBuild (to be honest, I was pretty annoyed at myself that it didn’t occur to me earlier…).
Using CodeBuild to install dependencies
CodeBuild is yet another PAYG service that Amazon offers, giving users the opportunity to spin up a build environment near-enough instantly. The simplicity of its configuration let me create a working build environment in minutes without installing anything: no CLIs, no hassle, no wasted time.
To install your dependencies using CodeBuild’s Ubuntu environment simply:
- Add a buildspec.yml file to your project’s root directory containing the following code:

```yaml
version: 0.2
phases:
  install:
    commands:
      - npm install
artifacts:
  files:
    - '**/*'
```
- Create a bucket in S3 and upload your zipped project folder to it. Your project tree should look something like this:
Obviously you can split the files down as many times as you like; just remember that index.js will be the entry point for Lambda by default. And remember not to include your node_modules folder if you already have one: the modules are being installed during the build by the Ubuntu machine so that they agree with Lambda!
- Create a new CodeBuild project running an Ubuntu/Node environment, select the S3 bucket that you just uploaded your code to as the source and set the Artifact upload location to the same S3 bucket, with Artifact packaging set to ‘Zip’.
- Build your project!
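For reference, the zipped project you upload in step 2 might look something like this (file names other than buildspec.yml and index.js are illustrative):

```
my-lambda-fn.zip
├── buildspec.yml
├── index.js
└── package.json
```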
The logging is quite good if you run into any problems, and if you don’t, you will have a nice shiny built project in the S3 bucket you specified, in the form of a zip file containing your Lambda code and a node_modules folder. Your modules will now go hand-in-hand with Lambda, without you having to go anywhere near a VM (…or at least not one on your machine)!
Hooking up to Lambda
Now that your modules are installed correctly for Amazon Linux you can change the source of the code for the Lambda function to the S3 bucket containing the artifact from the build process, and you’re ready to go.
Any time you want to edit the function, just re-zip the code, upload it to the same location with the same name to overwrite your old code, click ‘Build’ in your CodeBuild project, then re-copy the artifact location into Lambda and click ‘Save’. Easy as pie.
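If the console round-trips get tedious, the same cycle can be scripted with the AWS CLI. A sketch, assuming hypothetical function, project, and bucket names:

```shell
# Upload the fresh source zip, kick off a build, then point Lambda at the new artifact
aws s3 cp my-lambda-fn.zip s3://my-build-bucket/my-lambda-fn.zip
aws codebuild start-build --project-name my-lambda-build
# Once the build has finished:
aws lambda update-function-code \
  --function-name my-image-compressor \
  --s3-bucket my-build-bucket \
  --s3-key my-lambda-fn-built.zip
```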
This is the website I am working on, for anyone who might be curious or likes vintage furniture!
For all those who came here looking for help compressing images, here is the code that I used. I had never used mozjpeg before, but I have to say I’m thoroughly impressed with it.
I have also written a piece on uploading photos to S3 and getting to grips with IAM and Cognito, check it out if you think it could save you some time. I know how much reading about other people’s experiences helped me when I was learning to use these services!