Recently I had the chance to work with Amazon Web Services, which let me pair what I knew in theory with hands-on experience. More importantly, I also came across a tool that made the development process noticeably easier.

This tool is LocalStack, and it is best described by its homepage: “A drop-in replacement for AWS in your development and testing environments”. Imagine being able to emulate key AWS services locally on your machine. It speeds up development cycles and lets you experiment, learn, and test, all without worrying about billing or cloud connectivity.

What is LocalStack?

LocalStack emulates AWS services, which makes it possible to develop without relying on the cloud. It supports multiple services such as Lambda, S3, DynamoDB, and more, giving it flexibility for various use cases. Although a paid version is available, the free version already includes many key features; you can find the full list on the LocalStack feature coverage page.

A LocalStack instance can be managed through several methods:

  • LocalStack CLI: start and manage the LocalStack container from the command line
  • Docker: run the container with a local Docker installation via the Docker CLI
  • Docker-Compose: define and run LocalStack as part of a docker-compose file
  • LocalStack Docker Extension: integrates with Docker Desktop through a dedicated plugin
  • LocalStack Desktop: a standalone desktop application to manage LocalStack via a UI
  • Helm: deploys LocalStack into a Kubernetes cluster

You can read more about getting started here.
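
For instance, with the LocalStack CLI installed (e.g. via pip install localstack), starting and inspecting a container could look roughly like this (a minimal sketch; exact flags may vary between versions):

localstack start -d          # start LocalStack in the background
localstack status services   # show the emulated services and their state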

I’ll focus on managing the LocalStack container using docker-compose. The solution will consist of the following service composition: a Lambda function which, when triggered, uploads an object into an S3 bucket and stores its information in a DynamoDB table.

For the setup, the docker-compose file would look like this:

version: "3.8"

services:
  localstack:
    container_name: "demo-localstack"
    image: localstack/localstack
    ports:
      - "4566:4566" # Port that is used for LocalStack services emulation
    environment:
      # Defines the AWS services to emulate (defaults to all if not specified)
      - SERVICES=s3,lambda,dynamodb,iam
      # Configures default region and credentials for interacting with services
      - AWS_DEFAULT_REGION=eu-west-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
    volumes:
      # Mounts the Docker socket, necessary for some services like Lambda
      - "/var/run/docker.sock:/var/run/docker.sock"
      # Mounts the ./localstack directory so that its scripts
      # are executed once LocalStack reaches the READY state
      - "./localstack:/etc/localstack/init/ready.d"

Key elements to pay attention to:

  • Port mapping: emulated services are available on port 4566
  • Environment variables: these define the active AWS services, default region, and credentials used by the emulation
  • Volumes: the Docker socket is required for Lambda functionality, while the ./localstack directory allows custom initialization scripts to run at startup.

This setup lays the groundwork, making it easy to emulate AWS services locally. The next steps will show how to put this configuration into action.
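
With the compose file in place (assuming it is saved as docker-compose.yml), bringing the emulator up and checking its health could look like this:

docker compose up -d
curl http://localhost:4566/_localstack/health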

Managing resources

Once the container is up and running, the next step is setting up the resources. Like starting the container, there are several ways to manage these resources.

Command line

The AWS CLI works with LocalStack. The only tweak needed is adding the --endpoint-url parameter to point to the LocalStack container. For example, listing S3 buckets can be done with:

aws --endpoint-url=http://localhost:4566 s3api list-buckets
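
Note that the plain AWS CLI still expects a region and credentials to be configured, even though the free version of LocalStack doesn’t validate them; dummy values are enough. A minimal sketch:

export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=eu-west-1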

LocalStack also provides a handy wrapper for AWS CLI commands called awslocal. It simplifies the syntax and eliminates the need for the --endpoint-url parameter. This is the same command using awslocal:

awslocal s3api list-buckets

With awslocal, setting up the resources is straightforward. Here’s how to prepare the resources for this setup:

S3 Bucket

awslocal s3api create-bucket \
    --bucket notes \
    --region eu-west-1 \
    --create-bucket-configuration LocationConstraint=eu-west-1

DynamoDB Table

awslocal dynamodb create-table \
    --table-name notes \
    --attribute-definitions AttributeName=file_name,AttributeType=S \
    --key-schema AttributeName=file_name,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

Lambda Function

awslocal lambda create-function \
  --function-name notes-processor-lambda \
  --runtime python3.12 \
  --handler "notes_processor_lambda.lambda_handler" \
  --role arn:aws:iam::000000000000:role/admin \
  --zip-file "fileb:///etc/localstack/init/ready.d/notes_processor_lambda.zip"

You can run these commands manually; the downside, however, is repeating the process every time the container starts fresh. To save time, you can automate it with a script, as explained in the next section.

Running initialization scripts in LocalStack

LocalStack allows us to run custom scripts during different phases of its lifecycle: BOOT, START, READY, and SHUTDOWN. These phases are documented on the initialization hooks reference page.

In the Docker Compose setup, we used a volume property that points to the ./localstack directory. This directory contains the scripts LocalStack will run during its lifecycle. The volume mount is:

- "./localstack:/etc/localstack/init/ready.d"

This makes it possible to automate resource creation when the container is ready.

Below is a simple initialization script. It combines the commands for creating an S3 bucket, a DynamoDB table, and a Lambda function, along with some additional logging to keep track of the process:

#!/bin/sh

echo "Creating S3 bucket."

awslocal s3api create-bucket \
    --bucket init-notes \
    --region eu-west-1 \
    --create-bucket-configuration LocationConstraint=eu-west-1

echo "Created notes S3 bucket."
echo "Creating notes DynamoDB table."

awslocal dynamodb create-table \
    --table-name init-notes \
    --attribute-definitions AttributeName=file_name,AttributeType=S \
    --key-schema AttributeName=file_name,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

echo "Created notes DynamoDB table."
echo "Creating notes processing Lambda function."

awslocal lambda create-function \
  --function-name init-notes-processor-lambda \
  --runtime python3.12 \
  --handler "init_notes_processor_lambda.lambda_handler" \
  --role arn:aws:iam::000000000000:role/admin \
  --zip-file "fileb:///etc/localstack/init/ready.d/init_notes_processor_lambda.zip" \

echo "Created notes processing Lambda function."

The commands that create the S3 bucket and the DynamoDB table are straightforward. Creating the Lambda function, however, includes two parameters worth noting:

  • --role arn:aws:iam::000000000000:role/admin: specifies the IAM role assigned to the Lambda function. In our case it’s just a mocked value (IAM rules can be enforced, but only in the Pro version)
  • --zip-file "fileb:///etc/localstack/init/ready.d/init_notes_processor_lambda.zip": points to the deployment package (a .zip file) containing the Lambda function code and any dependencies. The zip file is stored in the ./localstack directory, which is mounted into the container (see the packaging sketch after this list)
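
For completeness, here is a rough sketch of how such a deployment package could be produced, assuming the handler code lives in a file named init_notes_processor_lambda.py inside the ./localstack directory (boto3 ships with the Lambda Python runtime, so only the handler file needs to be zipped):

cd ./localstack
zip init_notes_processor_lambda.zip init_notes_processor_lambda.py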

As an added touch, I’ve prefixed the resource names with init- to make it clear that they are part of the initialization process.

Setting up AWS resources with Terraform

LocalStack makes it easy to work with IaC tools like Terraform, Pulumi, CloudFormation, and Ansible. In this section, I’ll cover how to set up Terraform to create the same resources we made earlier. The key part is configuring the AWS provider for LocalStack. After this, we can write our resource definitions just like we normally would with HashiCorp Configuration Language (HCL).

Here’s a simple example of how we can define our resources in Terraform:

provider "aws" {
  region                      = "eu-west-1"
  access_key                  = "test"
  secret_key                  = "test"
  s3_use_path_style           = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  endpoints {
    s3       = "http://s3.localhost.localstack.cloud:4566"
    dynamodb = "http://localhost:4566"
    lambda   = "http://localhost:4566"
    iam      = "http://localhost:4566"
  }
}

resource "aws_s3_bucket" "terraform_notes_bucket" {
  bucket = "terraform-notes"
}

resource "aws_dynamodb_table" "terraform_notes_dynamodb_table" {
  name         = "terraform-notes"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "file_name"

  attribute {
    name = "file_name"
    type = "S"
  }
}

data "aws_iam_policy_document" "mock_lambda_role" {
  statement {
    effect = "Allow"

    principals {
      type = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "notes_processing_lambda_iam" {
  name               = "terraform_notes_processing_lambda_iam"
  assume_role_policy = data.aws_iam_policy_document.mock_lambda_role.json
}

resource "aws_lambda_function" "notes_processor_lambda" {
  function_name = "terraform_notes_processor_lambda"
  runtime       = "python3.12"
  handler       = "terraform_notes_processor_lambda.lambda_handler"
  role          = aws_iam_role.notes_processing_lambda_iam.arn
  filename      = "${path.module}/../terraform_notes_processor_lambda.zip"
}

This setup will spin up an S3 bucket, a DynamoDB table, and a Lambda function. It works just like the previous examples. However, there are a few things to point out:

  • IAM role and policy for the Lambda function: the role and policy are mocked here. In a real AWS environment, we would define them more rigorously, but for LocalStack this is enough to create the Lambda function
  • Prefixing resources: I’ve added a terraform- prefix to resource names so that it’s clear they were created through Terraform

For more information on creating AWS resources with Terraform, read the official Terraform documentation here.
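
To apply this configuration against the running LocalStack container, the standard Terraform workflow is enough, since the provider block above already points each service at LocalStack. Alternatively, LocalStack offers a tflocal wrapper (from the terraform-local Python package) that injects the endpoint configuration for you; a rough sketch:

terraform init
terraform apply -auto-approve

# or, with the wrapper installed via pip install terraform-local:
tflocal init
tflocal apply -auto-approve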

How to test the setup?

After setting up the resources, it’s time to verify that everything is working. We can check the state of the resources using awslocal, LocalStack’s wrapper around the AWS CLI, and confirm that the emulated services behave as expected.

S3 Buckets

To check which S3 buckets have been created, we can list them using the following command:

awslocal s3api list-buckets

The expected response:

{
  "Buckets": [
    {
      "Name": "init-notes",
      "CreationDate": "2024-12-09T18:29:46.000Z"
    },
    {
      "Name": "terraform-notes",
      "CreationDate": "2024-12-09T18:31:07.000Z"
    },
    {
      "Name": "notes",
      "CreationDate": "2024-12-09T18:36:40.000Z"
    }
  ]
}

DynamoDB tables

Next, to check the DynamoDB tables, we use the following command:

awslocal dynamodb list-tables

The expected response:

{
  "TableNames": [
    "init-notes",
    "notes",
    "terraform-notes"
  ]
}

Lambda Functions

And finally, to check the Lambda functions, we can run the command:

awslocal lambda list-functions

The expected response:

{
  "Functions": [
    {
      "FunctionName": "init-notes-processor-lambda",
      "Runtime": "python3.12",
      "Handler": "init_notes_processor_lambda.lambda_handler"
	…
    },
    {
      "FunctionName": "terraform-notes-processor-lambda",
      "Runtime": "python3.12",
      "Handler": "terraform_notes_processor_lambda.lambda_handler"
	…
    },
    {
      "FunctionName": "notes-processor-lambda",
      "Runtime": "python3.12",
      "Handler": "notes_processor_lambda.lambda_handler"
	…
    }
  ]
}

Invoking Lambda function

As stated, our setup includes a Lambda function designed to perform two tasks: uploading an object into an S3 bucket and storing the object’s information in a DynamoDB table. To test this, we will invoke the init-notes-processor-lambda function. The code, prepared specifically for this task, is as follows:

from datetime import datetime
import boto3

region = "eu-west-1"
access_key = "test"
secret_key = "test"
endpoint_url = "http://localstack:4566"

dynamodb = boto3.resource(
    "dynamodb",
    region_name=region,
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    endpoint_url=endpoint_url
)

s3 = boto3.resource(
    "s3",
    region_name=region,
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    endpoint_url=endpoint_url
)

def lambda_handler(event, context):
    file_name = event.get("fileName")
    file_content = event.get("content")

    bucket_name = "init-notes"
    file_key = f"{datetime.now()}/{file_name}"
    s3_object = s3.Object(bucket_name, file_key)
    s3_object.put(Body=file_content)

    table_name = "init-notes"
    notes_table = dynamodb.Table(table_name)
    note = {
        "file_name": file_name,
        "content": file_content
    }
    notes_table.put_item(Item=note)

Each client used in the Lambda function code is configured to use the LocalStack container URL. To test the function, we invoke it with a sample payload containing the file name and content:

awslocal lambda invoke \
  --function-name init-notes-processor-lambda \
  --payload '{"fileName": "example.txt", "content": "This is an example file content"}' \
  output.json
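
Depending on the AWS CLI version underneath awslocal (v2 treats the payload as base64 by default), the raw JSON payload may be rejected; in that case, adding the --cli-binary-format flag usually helps:

awslocal lambda invoke \
  --function-name init-notes-processor-lambda \
  --cli-binary-format raw-in-base64-out \
  --payload '{"fileName": "example.txt", "content": "This is an example file content"}' \
  output.json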

Expected Lambda function response:

{
  "StatusCode": 200,
  "ExecutedVersion": "$LATEST"
}

A status code of 200 indicates that the function was successful. To confirm the function’s actions, we can check the results by listing the objects in the S3 bucket and scanning the DynamoDB table.

Verifying the S3 upload

We can list the objects in the init-notes S3 bucket to check the file was uploaded successfully:

awslocal s3api list-objects --bucket init-notes

The expected objects listing response:

{
  "Contents": [
    {
      "Key": "2024-12-09 19:13:25.644140/example.txt",
      "LastModified": "2024-12-09T19:13:25.000Z",
      "Size": 31
	…
    }
  ] 
}

This confirms that the file example.txt was successfully uploaded to the init-notes bucket.
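
To inspect the uploaded content itself, the higher-level s3 commands can copy the object to stdout, using the key returned by the listing above:

awslocal s3 cp "s3://init-notes/2024-12-09 19:13:25.644140/example.txt" -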

Verifying the DynamoDB entry

Finally, to check that the file metadata was stored correctly in DynamoDB, we scan the init-notes table:

awslocal dynamodb scan --table-name init-notes

Expected scanning response:

{
  "Items": [
    {
      "file_name": {
        "S": "example.txt"
      },
      "content": {
        "S": "This is an example file content"
      }
    }
  ] 
}

This confirms that the example.txt file’s name and content were successfully added to the DynamoDB table.

Interacting with LocalStack from Java

Just as with the Lambda function, we can integrate a Java application with LocalStack. To do so, the AWS SDK must be configured to point to the LocalStack endpoint. Using AWS service clients, such as those for S3 and DynamoDB, we can then interact with these resources programmatically.

First, we need to add the necessary dependencies to our project:

//maven
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>aws-sdk-java</artifactId>
    <version>2.28.7</version>
</dependency>

//gradle
implementation('software.amazon.awssdk:aws-sdk-java:2.28.7')

Once the AWS SDK is added, it provides access to a wide range of service clients. For this example, we will focus on S3Client and DynamoDbClient. These clients can be configured to use the LocalStack endpoint, along with additional settings such as the region and the access/secret key pair. This setup lets us replicate the verification of the Lambda invocation results, similar to what we did with awslocal earlier.

Here’s an example using Java:

Region region = Region.EU_WEST_1;
String accessKeyId = "test";
String secretAccessKey = "test";
String endpointUrl = "http://localhost:4566";
StaticCredentialsProvider credentialsProvider = StaticCredentialsProvider.create(
        AwsBasicCredentials.create(accessKeyId, secretAccessKey));

S3Client s3Client = S3Client.builder()
        .region(region)
        .credentialsProvider(credentialsProvider)
        .endpointOverride(URI.create(endpointUrl))
        .serviceConfiguration(S3Configuration.builder()
                .pathStyleAccessEnabled(true)
                .build())
        .build();

DynamoDbClient dynamoDbClient = DynamoDbClient.builder()
        .region(region)
        .credentialsProvider(credentialsProvider)
        .endpointOverride(URI.create(endpointUrl))
        .build();

System.out.println("DynamoDB init-notes table items:");
dynamoDbClient.scan(ScanRequest.builder()
                .tableName("init-notes")
                .build()).items()
        .forEach(System.out::println);
System.out.println("S3 init-notes bucket contents:");
s3Client.listObjects(ListObjectsRequest.builder()
                .bucket("init-notes")
                .build()).contents()
        .forEach(System.out::println);

Running this code, we should see the following output in the console. This confirms that the data from both S3 and DynamoDB is accessible:

DynamoDB init-notes table items:
{file_name=AttributeValue(S=example.txt), content=AttributeValue(S=This is an example file content)}

S3 init-notes bucket contents:
S3Object(Key=2024-12-09 19:13:25.644140/example.txt, LastModified=2024-12-09T19:13:25Z)

Final thoughts

In summary, LocalStack is a powerful tool for developers looking for a local environment to emulate AWS. It offers a variety of approaches to managing containers and resources, from manual command usage to automated scripts and Infrastructure as Code (IaC) tools like Terraform.

By integrating LocalStack into applications via the AWS SDK, you can easily interact with locally emulated AWS resources. This creates a perfect environment for testing, learning, and experimentation.

Every code snippet featured in this article is part of a project available on GitHub.
