Sending Messages using Amazon SQS

Step-by-step Guide to Sending Messages to Amazon SQS from a VPC Endpoint

I recently explored Amazon SQS (Simple Queue Service) to understand its role in the AWS ecosystem. Along the way, I learned about the two main types of queues - Standard and FIFO - and how to send a message to a queue from an EC2 instance via a VPC endpoint. It was a valuable lesson. This article demonstrates how to set up a Standard queue through the Management Console. I will share how I provisioned the VPC resources with AWS CloudFormation, including an EC2 instance, the queue, and subnets, among others. I will demonstrate the attempt to send a message from the EC2 instance both before and after the endpoint is created. Finally, I will show how to retrieve the sent message from the queue. Let’s see how it all unfolds.

A background

According to AWS, Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS relays messages between a producer and a consumer. This is quite useful where the integrity of the messages, which could be events, matters and it is important that only the intended recipient receives them. In AWS, there are two main queue types: Standard and FIFO (First-In, First-Out).

So how are these two different?

For the Standard queue type, the priority is at-least-once delivery: every message is delivered at least once, although in a highly distributed architecture it is possible for a message to be delivered more than once. Also, the order in which messages are delivered is not guaranteed; ordering is best-effort.

The Standard queue can be used in the following scenarios:

Allocating tasks to multiple worker nodes – For example, handling a high volume of credit card validation requests

Batching messages for future processing – Scheduling multiple entries to be added to a database at a later time.

The Standard queue type is the default for AWS.

The FIFO queue is distinctly different. Here, order is crucial. It has all the capabilities of the Standard queue type, with the added guarantee that duplicates are not introduced. The order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it. Some situations that require a FIFO queue include:

E-commerce order management system where order is critical

Communications and networking – Sending and receiving data and information in the same order and

Online ticketing system – Where tickets are distributed on a first-come, first-served basis.
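
For context, this is roughly how each queue type could be created from the AWS CLI; a minimal sketch with placeholder queue names (a FIFO queue name must end in .fifo and must be created with the FifoQueue attribute):

    aws sqs create-queue --queue-name demo-standard-queue
    aws sqs create-queue --queue-name demo-orders-queue.fifo --attributes FifoQueue=true,ContentBasedDeduplication=true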

So in this project, following the instructions in the AWS documentation, I will dynamically provision, among other resources, a Standard queue. This will be accomplished using AWS CloudFormation, an Infrastructure as Code (IaC) tool.

AWS CloudFormation allows you to provision resources through the Management Console from a template, which can be a YAML file.

The image above shows a section of the CloudFormation template I obtained from the AWS documentation and used for setting up the infrastructure.

The process of executing this is quite straightforward and involves the creation of stacks. A stack is a collection of AWS resources that CloudFormation provisions and manages as a single unit.

The image above shows two stacks I created earlier, including the one for this project.

To complete the process, we just upload the YAML script and wait for the resources to be provisioned. There are other factors to consider when carrying out this kind of task in production; however, for this demonstration, we keep it simple and stick to the basics.
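
If you prefer the CLI to the console, the same upload can be done with create-stack; a minimal sketch, assuming the template has been saved locally as SQS-VPCE-Tutorial-CloudFormation.yaml (the stack name, file name, and region are placeholders):

    aws cloudformation create-stack --stack-name SQS-VPCE-Tutorial --template-body file://SQS-VPCE-Tutorial-CloudFormation.yaml --region eu-west-2
    aws cloudformation wait stack-create-complete --stack-name SQS-VPCE-Tutorial --region eu-west-2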

The stack for this tutorial includes the following resources:

  • A VPC and the associated networking resources, including a subnet, a security group, an internet gateway, and a route table

  • An Amazon EC2 instance launched into the VPC subnet

  • An Amazon SQS queue

The image above shows some of the resources that were set up after the CloudFormation run completed.
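
The same list of provisioned resources can also be pulled from the CLI if the console isn't handy; a quick sketch, reusing the placeholder stack name from above:

    aws cloudformation describe-stack-resources --stack-name SQS-VPCE-Tutorial --region eu-west-2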

Having provisioned these resources, we now move on to the main task: sending a message from the EC2 instance to the queue and confirming that it is indeed received. This, as I came to understand and demonstrate, depends on certain conditions.

First, we need to create an Amazon EC2 key pair and use the icacls command (on a Windows machine; for Linux/macOS, use chmod) to restrict permissions on the downloaded private key file. This grants only the owner of the file read permission while denying access to everyone else.
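
As a rough sketch of what that permission lock-down can look like, assuming the key file is named SQS-VPCE-Tutorial-Key-Pair.pem as in the commands later in this article (adjust the file name and path to your own setup):

    rem Windows (CMD): reset the ACL, grant the current user read-only access, then drop inherited permissions
    icacls SQS-VPCE-Tutorial-Key-Pair.pem /reset
    icacls SQS-VPCE-Tutorial-Key-Pair.pem /grant:r "%username%:R"
    icacls SQS-VPCE-Tutorial-Key-Pair.pem /inheritance:r

    # Linux/macOS: owner read-only
    chmod 400 SQS-VPCE-Tutorial-Key-Pair.pem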

Having done that, we can go ahead and check that the EC2 instance cannot reach the public internet. We can use the following steps:

  1. SSH into the instance through CMD or PowerShell.

ssh -i SQS-VPCE-Tutorial-Key-Pair.pem ec2-user@ec2-203..

where ec2-203.. is the public address of the instance and ec2-user is the default user name for Amazon Linux instances

  2. Ping amazon.com (the exact command is shown just after these steps).

  3. You would confirm that no response is received, as the public internet is unreachable from the instance.
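
The check itself is just a plain ping from the instance's shell; since the instance has no route to the internet, the requests should simply time out. A minimal sketch:

    # run from inside the EC2 instance; expect timeouts / no response
    ping -c 4 amazon.com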

So, in order to be able to send a message to SQS, we need to give our EC2 instance a way to reach the Amazon SQS API. So how do we do this?

One way is to create a VPC endpoint for Amazon SQS.

To connect our VPC to Amazon SQS, we must define an interface VPC endpoint. After we add the endpoint, we can use the Amazon SQS API from the EC2 instance in our VPC. This allows us to send messages to a queue within the AWS network without crossing the public internet.

Steps to create a VPC endpoint from the Management Console

  1. Choose Endpoints, then choose Create Endpoint. On the Create Endpoint page, for Service Name, choose the service name for Amazon SQS.

  2. For VPC, we choose the VPC created earlier via CloudFormation.

  3. We go ahead and choose the appropriate subnet and security group.

  4. Finally, we choose Create endpoint, and choose Close afterwards.

Amazon VPC will start creating the endpoint and show a Pending status. When the process is complete, Amazon VPC will display the status as Available.
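
For completeness, the same interface endpoint could be created from the AWS CLI; a sketch with placeholder VPC, subnet, and security-group IDs (the service name follows the com.amazonaws.<region>.sqs pattern):

    aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Interface --service-name com.amazonaws.eu-west-2.sqs --subnet-ids subnet-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0 --region eu-west-2
    aws ec2 describe-vpc-endpoints --region eu-west-2 --query "VpcEndpoints[].State"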

With this completed, we can go ahead and send a message to our Amazon SQS queue.

  1. Reconnect to our EC2 instance

    ssh -i SQS-VPCE-Tutorial-Key-Pair.pem ec2-user@ec2-203..

  2. We attempt to publish a message to the queue again using the following command, for example:

    aws sqs send-message --region eu-west-2 --endpoint-url https://sqs.eu-west-2.amazonaws.com/ --queue-url https://sqs.eu-west-2.amazonaws.com/123456789012/<queue-name> --message-body "Hello from Amazon SQS."

  3. The sending attempt succeeds and the MD5 digest of the message body and the message ID are displayed, for example:

    { "MD5OfMessageBody": "a1bcd2ef3g45hi678j90klmn12p34qr5", "MessageId": "12345a67-8901-2345-bc67-d890123e45fg" }

This confirms that a message has been successfully sent from our EC2 instance to the SQS queue.
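
Finally, to retrieve the sent message from the queue, a receive-message call can be run from the same instance; a sketch reusing the placeholder queue URL from the send command above:

    aws sqs receive-message --region eu-west-2 --endpoint-url https://sqs.eu-west-2.amazonaws.com/ --queue-url https://sqs.eu-west-2.amazonaws.com/123456789012/<queue-name> --max-number-of-messages 1

Once the message has been processed, it should be deleted with aws sqs delete-message, passing the ReceiptHandle returned by the call above.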