This is the second post in which I cover some AWS services. Besides the more static elements (such as S3), I also wanted to talk about two key elements of any architecture: compute and databases.
These are in fact not so easy to get right. You have to be very clear about your target and understand which technologies you need, in what quantity, and for how long. So I want to talk about some of the considerations I had to make to recently bring online a system with very specific requirements.
The project architecture
In order to best convey the requirements, I need to tell you about the system itself: it is an architecture in which several front-ends make requests to a single back-end, which in turn is connected to a database.

How many requests are made? How often? These are all legitimate questions. The system is rarely used, but it needs to be available whenever required. One requirement, then, is definitely reliability. Since the servers it currently runs on seem to enjoy randomly crashing, it is necessary to move it to a more "reliable" system. Therefore, AWS.
Our first approach
Let's start from the beginning. Naively, we decide to approach EC2 (Elastic Compute Cloud), the virtual server service offered by AWS. The system we need to host doesn't have any particular RAM or CPU requirements, so a mid-tier instance would meet our needs.
On the other hand, we need a database. The system needs to persist various data and maintain a history of previously recorded sessions. We decide not to proceed with a PostgreSQL or MySQL installation inside the EC2 instance, but to rely on the robust RDS systems offered by Amazon. After all, they promise us complete database management on their side. Including backups.

Let's proceed then with the creation of an EC2 instance. First, go to the service's page, where you will find the button to start creating our instance.

The procedure is pretty straightforward, and once all the required steps have been completed we can connect to our instance via ssh to upload our backend.
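If you prefer scripting over clicking through the console, here's a minimal sketch of the same launch using the AWS SDK for JavaScript (v3). The AMI ID, key pair name, and region are placeholder values, not recommendations:

```js
// Sketch: launching a mid-tier EC2 instance with the AWS SDK for JavaScript v3.
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "eu-west-1" });

const { Instances } = await ec2.send(new RunInstancesCommand({
  ImageId: "ami-xxxxxxxxxxxxxxxxx", // placeholder: pick an AMI valid for your region
  InstanceType: "t3.medium",        // a mid-tier class, since we have no special needs
  KeyName: "my-key-pair",           // placeholder: the key pair used for the ssh step
  MinCount: 1,
  MaxCount: 1,
}));

console.log("Launched instance:", Instances[0].InstanceId);
```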
We now create a new PostgreSQL database within RDS.
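For reference, the rough SDK equivalent is below; the identifier and instance class are hypothetical, and the password comes from the environment rather than being hard-coded:

```js
// Sketch: creating a small managed PostgreSQL instance on RDS.
import { RDSClient, CreateDBInstanceCommand } from "@aws-sdk/client-rds";

const rds = new RDSClient({ region: "eu-west-1" });

await rds.send(new CreateDBInstanceCommand({
  DBInstanceIdentifier: "my-app-db",             // hypothetical name
  Engine: "postgres",
  DBInstanceClass: "db.t3.micro",                // small class for a rarely-used app
  AllocatedStorage: 20,                          // GiB
  MasterUsername: "postgres",
  MasterUserPassword: process.env.DB_PASSWORD,   // never hard-code credentials
}));
```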

Okay... now let's talk about money. Because what we just did doesn't make a lot of sense, economically speaking. Not with respect to our requirements, at least.
In fact, we just started two instances that will stay on 24/7 for an app that won't be used very often. Don't get me wrong, it's definitely reliable... but we're not spending our money too well.
Total: 40.89 USD
We're spending way too much...
Let's see what alternatives there are, starting with the database, the most expensive thing in our system. Instead of a normal PostgreSQL instance, we can rely on Aurora Serverless. What is Aurora? It's an AWS proprietary RDBMS that, in some regions (mind you, not all of them yet), can run as a serverless database. What does that entail? You don't use it, you don't pay for it. Simple.
So let's start the setup of Aurora Serverless.
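As a sketch, the same setup through the SDK might look like the following; identifiers are hypothetical, and the AutoPause setting is what gives us the "you don't use it, you don't pay for it" behavior:

```js
// Sketch: an Aurora Serverless (v1) PostgreSQL cluster that pauses when idle.
import { RDSClient, CreateDBClusterCommand } from "@aws-sdk/client-rds";

const rds = new RDSClient({ region: "eu-west-1" });

await rds.send(new CreateDBClusterCommand({
  DBClusterIdentifier: "my-app-aurora",  // hypothetical name
  Engine: "aurora-postgresql",
  EngineMode: "serverless",              // you may also need to pin an EngineVersion
                                         // that supports the serverless mode
  MasterUsername: "postgres",
  MasterUserPassword: process.env.DB_PASSWORD,
  ScalingConfiguration: {
    MinCapacity: 2,                      // capacity units (ACUs)
    MaxCapacity: 4,
    AutoPause: true,                     // scale to zero when idle
    SecondsUntilAutoPause: 300,          // pause after 5 minutes of inactivity
  },
}));
```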

Note that it is not yet available in every region, which is a factor to consider when deciding where to place this service.
But do we really need an EC2 instance for the compute side of our application at this point? Not at all: a single container will do just fine (let's try to get tighter and tighter with the resources we require).
To get started with containers, what better service than ECS, the Elastic Container Service (no, not EKS; I'm not enough of a Kubernetes fan yet... I will surely become one in the future). So let's start the setup for our application with Docker and containers.
To avoid going into too much technical detail, I'll just say that stuff happens involving VPCs and Security Groups, but for simplicity I'll use the default ones. Also, the new interface for creating clusters and task definitions within ECS doesn't seem intuitive to me at all. So at this point we have our backend running in a container on Fargate, a technology claimed to be serverless.
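The heart of that setup is the task definition. A minimal sketch with the SDK, where the image URI and role ARN are placeholders from a hypothetical account:

```js
// Sketch: registering a small Fargate task definition for the backend container.
import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({ region: "eu-west-1" });

await ecs.send(new RegisterTaskDefinitionCommand({
  family: "my-backend",
  requiresCompatibilities: ["FARGATE"],
  networkMode: "awsvpc",   // required by Fargate
  cpu: "256",              // 0.25 vCPU
  memory: "512",           // MiB
  executionRoleArn: "arn:aws:iam::123456789012:role/ecsTaskExecutionRole", // placeholder
  containerDefinitions: [{
    name: "backend",
    image: "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-backend:latest", // placeholder
    portMappings: [{ containerPort: 3000, protocol: "tcp" }],
    essential: true,
  }],
}));
```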

...and it doesn't turn off. Yes, that's right: it doesn't turn off. It doesn't scale to zero. What kind of serverless is that, then? It's true that this way I don't have to manage the server, but I need it to actually run only when it's being used (give or take a minute...). On top of the various connectivity issues you can run into between a serverless RDS and a container that runs all the time, we're still overspending.
The real Serverless
Here comes Lambda. Lambda is the service that comes to mind for many when talking about FaaS (Function as a Service). The question is: how do I turn the entire backend into FaaS? The answer is that a full rewrite may not be necessary.
As a matter of fact, our backend happens to be written in NodeJS using the popular Express library. Using the community-maintained serverless-http package, it is possible to wrap our Express app into a handler that Lambda can invoke. Here's how to modify the code:
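A minimal before-and-after sketch (the /sessions route is a hypothetical example):

```js
// Before: a classic Express app that binds to a port and runs forever.
import express from "express";

const app = express();
app.get("/sessions", (req, res) => res.json({ sessions: [] }));

app.listen(3000, () => console.log("listening on 3000"));
```

```js
// After: the same app wrapped by serverless-http. There is no listener anymore:
// Lambda invokes the exported handler once per request, then the function can
// scale back to zero.
import express from "express";
import serverless from "serverless-http";

const app = express();
app.get("/sessions", (req, res) => res.json({ sessions: [] }));

export const handler = serverless(app);
```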
Now we have to deploy our backend. There are a couple of ways to do this: the first is to upload it as code (up to 50 MB zipped or 250 MB unzipped), while the second is through a Docker image. Since we have already created a container for our ECS attempt, let's make some adaptations to get it ready for Lambda.
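The main adaptations are rebasing the image on AWS's public Lambda base image for Node.js (public.ecr.aws/lambda/nodejs) and pointing its CMD at the exported handler. Once the image is pushed to ECR, creating the function could look like this sketch, where the ARNs and image URI are placeholders:

```js
// Sketch: creating the Lambda function from a container image hosted in ECR.
import { LambdaClient, CreateFunctionCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "eu-west-1" });

await lambda.send(new CreateFunctionCommand({
  FunctionName: "my-backend",
  PackageType: "Image",
  Code: { ImageUri: "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-backend:lambda" },
  Role: "arn:aws:iam::123456789012:role/my-backend-lambda-role", // placeholder
  Timeout: 30,      // seconds
  MemorySize: 512,  // MiB
}));
```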
However, Lambda functions are not immediately "ready to use" the way an EC2 instance (which comes with a convenient IP) is. To reach them over HTTP, we need to set up an HTTP API in API Gateway. Here is how you can initialize one:
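A sketch using the HTTP API "quick create" option, which wires a default stage and a catch-all route straight to the function; the ARNs and account ID are placeholders:

```js
// Sketch: exposing the Lambda through an API Gateway HTTP API.
import { ApiGatewayV2Client, CreateApiCommand } from "@aws-sdk/client-apigatewayv2";
import { LambdaClient, AddPermissionCommand } from "@aws-sdk/client-lambda";

const region = "eu-west-1";
const functionArn = "arn:aws:lambda:eu-west-1:123456789012:function:my-backend";

const apigw = new ApiGatewayV2Client({ region });
const { ApiEndpoint, ApiId } = await apigw.send(new CreateApiCommand({
  Name: "my-backend-api",
  ProtocolType: "HTTP",
  Target: functionArn, // "quick create": default stage plus a catch-all route
}));

// API Gateway also needs permission to invoke the function.
const lambda = new LambdaClient({ region });
await lambda.send(new AddPermissionCommand({
  FunctionName: "my-backend",
  StatementId: "apigateway-invoke",
  Action: "lambda:InvokeFunction",
  Principal: "apigateway.amazonaws.com",
  SourceArn: `arn:aws:execute-api:${region}:123456789012:${ApiId}/*`, // permissive wildcard
}));

console.log("Backend reachable at:", ApiEndpoint);
```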
At this point, we can use the address of our Gateway directly to start the execution of our Lambda.
Conclusions
Some considerations are necessary, however: is this the best way to do things? I'm not convinced. Instead of having a single monolithic server acting as one Lambda function, it would be possible to split it into individual sub-functions according to their responsibilities. This would be useful both to avoid loading more code than an execution needs and to decrease the execution time of the Lambda itself. There are in fact two economic factors to consider: invocations and execution time. Although we get one million Lambda invocations per month for free, that shouldn't push us to execute code unnecessarily, because execution time also comes into play in the costs.
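As a sketch of that split, a single-purpose handler for a hypothetical GET /sessions/{id} route could be as small as this, with no Express layer at all:

```js
// Sketch: one route extracted into its own minimal Lambda handler. The stubbed
// lookup stands in for a real Aurora query, just to keep the example runnable.
export const handler = async (event) => {
  const id = event.pathParameters?.id;         // from an API Gateway route like /sessions/{id}
  const session = { id, status: "recorded" };  // stand-in for the real database lookup
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(session),
  };
};
```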
On the other hand, we must also consider the cost of refactoring: is it worth investing time to refactor code that still performs its functions correctly, at almost no cost? That is a trade-off the entire team must weigh against the other tasks that need to be done, based on priorities.
Personally, getting the code running as a Lambda function is satisfying enough for me.