Serverless Website in AWS and its pricing

This post describes how we hosted our website for almost nothing on AWS. We've covered the architecture in our other posts here and here; this post focuses on the cost aspects of the setup.

I say almost nothing because there is definitely some cost involved in the initial phase, and it will cost you money once the one-year free tier period ends. However, the serverless approach we have taken is still cheaper than running the smallest available server, a t2.nano on a no-upfront RI, which works out to about $53.568/year without considering Route 53 and other costs. The icing on the cake? You don't have to worry about constantly backing up your server and patching it to defend against the latest vulnerability out there.

Below is the breakdown of the costs involved in the first year and in subsequent years:

Year 1: $13

Subsequent Years: $51.72 per year

These estimates are based on the traffic assumptions below.

Year 1

Opex in the first 12 months for Static/Dynamic Site:

Buying a domain: Starts at $12 on Amazon but can go up depending on which top-level domain you want to purchase.

Beyond the domain, there is essentially no opex in the first year, as AWS is quite generous with its free tier.

Caveat: you need to keep resource utilization within the AWS free tier limits.

S3 storage : 5GB of standard storage, 20,000 GET requests, 2,000 PUT requests, and 15GB of data transfer out every month. A website needs only a few MB to a few hundred MB, so we are unlikely to cross this.

CloudFront : 50 GB of data transfer out and 2,000,000 HTTP and HTTPS requests each month for one year. Should be pretty good for a basic website.

ACM : This service is free.

Route 53 : $0.50 for the hosted zone. Queries to alias records that are mapped to Elastic Load Balancers, Amazon CloudFront distributions, AWS Elastic Beanstalk environments, and Amazon S3 website buckets are free. As we map our domain to the CloudFront distribution with an alias record in Route 53, those queries should be free. Even if you add other records and get hits on them, they are charged at just $0.40 per million queries.

DynamoDB : 25 GB of storage, 25 units of read capacity, and 25 units of write capacity – enough to handle up to 200M requests per month. We don't expect to exceed this limit.

API Gateway : 1 Million API Calls per month. Enough for the website needs.

Lambda : 1,000,000 free requests and 3.2 million seconds of compute per month free. More than what a simple website can consume.

Total opex cost in the first year, if your usage stays within the free tier: $0.60 (Route 53 only) + $12 for the domain purchase = $12.60, roughly $13.

Opex after 1 year (in Mumbai region):

S3 storage : Let us consider this a bit on the higher side, at about 500 MB. Storage: $0.0125.

Hits to S3 will be minimal as we are using CloudFront. Let's put it at about 10,000 requests per month, at $0.004 per 10,000 requests: $0.004.

CloudFront : Let us assume the data transfer out to be 10 GB per month: $1.70.

Request pricing for all HTTP methods, at about 50,000 HTTP requests and 50,000 HTTPS requests: $0.045 + $0.06 = $0.105, i.e. about 10 cents.

Domain renewal : This will continue to be the same price that you paid earlier. If you paid $12, you will have to pay $12 again at the end of the year to keep the domain.

Route53 charges : The hosted zone will be 50 cents per month. Queries to the alias record pointing at CloudFront are free, so you should not be charged for these, which should make up the majority of your queries. All other queries are charged at $0.40 per million queries.

API Gateway (Singapore): $4.25 per million API calls received, plus the cost of data transfer out, in gigabytes.

Even if you assume a fairly generous 100,000 API calls plus data transfer out, you will be charged about 60 cents.

DynamoDB and Lambda are part of the non-expiring offers, i.e. these free tier offers do not automatically expire at the end of your 12-month AWS Free Tier term and are available to all AWS customers.

DynamoDB : 25 GB of storage, 25 units of read capacity, and 25 units of write capacity – enough to handle up to 200M requests per month. We don't expect to exceed this.

Lambda : 1,000,000 free requests and 3.2 million seconds of compute per month free. More than what a simple website can consume.

Total cost per month = $0.0165 (S3) + $1.80 (CloudFront) + $0.90 (Route 53 hosted zone and queries) + $0.60 (API Gateway) = about $3.31/month.

Total cost per year = $12 (domain renewal) + ($3.31 × 12) = $51.72 per year.

Effective cost per month = $4.31.
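
If you want to rework these totals with your own traffic assumptions, here is a small Node.js sketch (the same language used later in this series) that recomputes the monthly and yearly figures from the per-service estimates above. The per-service numbers are the ones assumed in this post; adjust them for your own usage.

    // Recompute the monthly and yearly totals from the per-service estimates above
    // (Mumbai region, April 2017). Adjust these numbers for your own traffic.
    const monthly = {
        s3: 0.0165,        // ~500 MB storage + ~10,000 requests
        cloudfront: 1.80,  // ~10 GB transfer out + ~100,000 requests
        route53: 0.90,     // hosted zone + non-alias queries
        apiGateway: 0.60   // ~100,000 API calls + data transfer out
    };
    const domainRenewal = 12;  // paid once per year

    const perMonth = monthly.s3 + monthly.cloudfront + monthly.route53 + monthly.apiGateway;
    const perYear = domainRenewal + perMonth * 12;

    console.log("opex per month  ~ $" + perMonth.toFixed(2));
    console.log("total per year  ~ $" + perYear.toFixed(2));
    console.log("effective/month ~ $" + (perYear / 12).toFixed(2));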

Do note that the domain purchase and renewal is the biggest cost factor in the above equation. You will pay this money whether you use Amazon, GoDaddy, or any other vendor. The cost of hosting a static website in S3 itself, however, is dirt cheap.

Please note that the above calculation was done for the Mumbai region on 11th April 2017. As AWS keeps dropping its prices, I would not be surprised if the above calculations go haywire in a few months. However, this should give you an idea of how much you will have to shell out if you decide to host a static website on S3. Though the above calculation should be good enough for most simple websites hosting basic static content, you will have to do your own calculation for your specific requirements.

S3 Direct upload with Cognito authentication

We recently needed to demonstrate AWS RDS for a customer’s existing Oracle database running in their colo datacenter. Their Oracle DB dump was about 200 GB in size, and had to be moved to an AWS account securely.

Let's first discuss the existing options and why they weren't right for our situation, and then we will explain how we solved it using S3 direct upload with Cognito authentication.

Since we were dealing with large files, we wanted our customer to upload the files directly to Amazon S3. Unfortunately, our customer was relatively new to AWS, and training them to upload using the AWS CLI or the Management Console would delay the project, so we started looking for alternate options.

Problem statement: A customer needed to transfer an Oracle database dump of 200 GB securely to an AWS account.

We considered Cyberduck as our second option. Cyberduck is an open source client for FTP, SFTP, WebDAV, and cloud storage, available for macOS and Windows. It supports uploading to S3 directly using AWS credentials. We could create a new IAM user with limited permissions and share the credentials with the customer, along with the S3 bucket and folder names. But in this solution too, the customer needs to install external software and then follow certain steps to upload the files. It meant they had to take approvals to install software, and that was adding to the delay. This may be slightly easier than the first option, but it still introduced a lot of friction.

While investigating further for a friction-free solution, we discovered that we can upload files directly into S3 from the browser using multi-part upload. Initially we were doubtful whether this would work for large files, as browsers usually have limitations on the size of the file that can be uploaded. But unless we tried it, we would never know, so we decided to give it a shot.

We can directly upload files from the browser to S3 but how to make it secure?

Browsers expose the source code, so obviously we can't put credentials in it. We first thought of S3 pre-signed URLs, but soon realized that the object key/filename has to be predefined while generating the pre-signed URL, which was not a very desirable option for us. To make this process dynamic on our serverless website, we would need to write an AWS Lambda function that generates the pre-signed URL based on the file name the user provides, and call it through API Gateway. While this is a possible solution, we found a better one using Amazon Cognito.
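
For completeness, here is a minimal sketch of what that rejected pre-signed URL approach could look like. It is not the approach we finally used, and the bucket name is a hypothetical example.

    'use strict';
    const AWS = require("aws-sdk");
    const s3 = new AWS.S3();

    // Lambda handler: returns a time-limited pre-signed PUT URL for the file name
    // supplied by the caller. "customer-db-dumps" is a hypothetical bucket name.
    exports.handler = (event, context, callback) => {
        const params = {
            Bucket: "customer-db-dumps",
            Key: event.filename,   // file name provided by the user
            Expires: 900           // URL validity in seconds
        };
        s3.getSignedUrl("putObject", params, (err, url) => {
            callback(err, { uploadUrl: url });
        });
    };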

Cognito has user pools and identity pools. User pools are for maintaining users and identity pools are for generating temporary AWS credentials using several web identities including Cognito user identity.
We created a user pool in Cognito and associated it with an identity pool. The identity pool provides credentials to both authenticated and unauthenticated users based on the associated IAM roles and policies. Now any valid user in our Cognito user pool can get temporary AWS credentials through the associated identity pool and use these temporary credentials to upload files directly to S3.
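
A minimal browser-side sketch of this flow, using the AWS JavaScript SDK, might look like the following. The identity pool ID, user pool ID, bucket name, and element ID are hypothetical placeholders, and idToken is assumed to be the ID token obtained after the user signs in to the user pool.

    // Exchange the Cognito user pool token for temporary AWS credentials
    // via the identity pool, then upload straight to S3 from the browser.
    AWS.config.region = "ap-south-1";
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
        IdentityPoolId: "ap-south-1:IDENTITY-POOL-ID",   // placeholder
        Logins: {
            // Key format: cognito-idp.<region>.amazonaws.com/<user-pool-id>
            "cognito-idp.ap-south-1.amazonaws.com/USER-POOL-ID": idToken
        }
    });

    // s3.upload() uses the SDK's managed multipart upload under the hood,
    // which is what makes large (multi-GB) browser uploads practical.
    const s3 = new AWS.S3();
    const file = document.getElementById("file-input").files[0];
    const upload = s3.upload({ Bucket: "customer-db-dumps", Key: file.name, Body: file });
    upload.on("httpUploadProgress", (p) => console.log(p.loaded + " / " + p.total));
    upload.send((err, data) => console.log(err || "Uploaded to " + data.Location));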

Cognito architecture for secure S3 uploads

We successfully implemented the upload solution using the above architecture and tested it by uploading 200 GB files; it works seamlessly. Our customer was able to upload their DB files in no time.

Login Page

Landing Page After Login

Completed and In-Progress Uploads

References for the code using AWS JavaScript SDK:

The Tighter You Hold on, The Faster Customers Slip Away

It's human nature; either choose to be a part of the customer's journey, or get out of the way. It doesn't matter how good your relationship is, or how attractive the pricing is. If you try to block customers from getting something, they will explore their options. I have seen a lot of occasions where blocking may appear to work in the short run and there may be inertia initially, but technology will change, contracts will expire, and business models will get disrupted.

The current disruption in the relational database space is a prime example. For years, Oracle has been the market leader, but as independent analyst Curt Monash highlights, there are three factors threatening this position. Wider cloud adoption and the growth of areas such as Big Data are relevant in the context of this article and can potentially make serious dents in Oracle's market share.

As cloud adoption grows, providers like AWS and Google have been offering managed relational databases with open-source engines through services such as Amazon RDS and Cloud SQL for some time now. While Cloud SQL offers the MySQL engine, RDS supports MySQL, PostgreSQL and MariaDB as open source options, and Oracle and Microsoft SQL Server in the licensed variety.

Growth of applications which do not need relational databases is helping NoSQL databases gain ground, with MongoDB and Cassandra widely popular in this category. Oracle has its own NoSQL database, but in terms of adoption, it has a fair way to go.

The real threat to Oracle, however, is Amazon's Aurora offering, coupled with the Database Migration Service, which helps customers move their databases to the AWS platform and onto MySQL and PostgreSQL (preview). Aurora was launched in 2014 and has been growing fast. At re:Invent 2016, Andy Jassy stated that more than 14,000 databases had been migrated to Aurora, and it is becoming a mainstream option. Google's recent announcement of the public beta of Cloud Spanner also promises to make this space even more interesting, with claims of a "strongly consistent and horizontally scalable relational database". We're yet to test this, but Google states that Spanner has been running internally for years, which lends it some credibility.

Oracle's response was to make Oracle on cloud more expensive last month. With one announcement, they effectively doubled the licensing fee for running an Oracle database on the cloud overnight. This might be a strategy to make Oracle cloud look more attractive, but if customers were not already scrambling to move off Oracle, this will definitely give them more incentive to do so. Jim Mlodgenski, CTO of database consulting firm OpenSCG, seems to agree and is quoted in this article by Ben Kepes: "Oracle's move here may backfire and have the opposite effect by giving customers a reason – and a sense of urgency – to accelerate their migration plans. Since Oracle's license policy announcement, we've seen an increase in customers interested in moving off of Oracle to other options like PostgreSQL and Aurora."

As parody commentator cloud_opinion tweeted recently, "a successful business model is your biggest vulnerability because it blinds you from innovating", and Oracle seems to have fallen into this trap.

Make a website dynamic using APIs

This is Part 2 of a multi-part blog series on how you can run a fast, dynamic website in a completely serverless manner using managed services provided by AWS.

In the previous post, we discussed how to set up a static website using S3.

Every website has at least one dynamic section, such as a contact-us, email subscription, or feedback form. When hosting the website using serverless technologies such as Amazon S3, APIs are essential for powering these dynamic sections. Amazon API Gateway gives us the ability to create any number of arbitrary APIs using AWS Lambda as the backend. The best thing about these two services is that their Free Tier quota is quite high, so you won't have to pay anything until your website gets very popular.

Let's create a contact-us API for a website. First we need a data store to keep our contact-us form data, and for that we will use Amazon DynamoDB (a NoSQL database). Then we need an AWS Lambda function to process the POST requests of the contact-us form. And finally we need an API endpoint to call the Lambda function from the website, for which we will use Amazon API Gateway.

DynamoDB Table Creation

Log on to the AWS Management Console, select DynamoDB, and follow these steps.

  1. Click Create table.
  2. Enter mywebsite-contact-us as Table name.
  3. Enter email as Partition key.
  4. Leave the rest of the settings as it is and click Create.

Note: Table creation may take a few minutes.
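
If you prefer scripting over the console, the same table can be created with the AWS SDK for Node.js. This is only a sketch; the 5/5 provisioned capacity mirrors the usual console defaults.

    // Create the contact-us table with "email" as the partition key.
    const AWS = require("aws-sdk");
    const dynamodb = new AWS.DynamoDB({ region: "ap-south-1" });  // use your region

    dynamodb.createTable({
        TableName: "mywebsite-contact-us",
        AttributeDefinitions: [{ AttributeName: "email", AttributeType: "S" }],
        KeySchema: [{ AttributeName: "email", KeyType: "HASH" }],
        ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
    }, (err, data) => console.log(err || data.TableDescription.TableStatus));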

Once the table is ready, let's write a Lambda function in Node.js to store the contact-us form data in this table.

Lambda Function Creation

Log on to the AWS Management Console, select Lambda, and follow these steps.

  1. Click Create a Lambda function.
  2. Select Blank function.
  3. Skip Configure Triggers and click Next.
  4. In Configure function, set mywebsite-contact-us as Name and select Node.js 4.3 as Runtime.
  5. Copy the function below into the Lambda function code editor.

    'use strict';
    const AWS = require("aws-sdk");
    const dynamo = new AWS.DynamoDB.DocumentClient();

    exports.handler = (event, context, callback) => {
        // Build the DynamoDB put request from the contact-us form fields.
        const contactInfo = {
            TableName: "mywebsite-contact-us",
            Item: {
                "email": event.email,      // partition key of the table
                "name": event.name,
                "message": event.message
            }
        };
        // Store the item and return the result through the callback.
        dynamo.put(contactInfo, callback);
    };
  6. Select Create a custom role as Role.
  7. Set mywebsite_lambda_role as Role Name and click Allow.
  8. Click Next and then Create function.

Lambda function Permissions

The Lambda function needs permission to store data in the DynamoDB table mywebsite-contact-us. Log on to the AWS Management Console, select IAM, and follow these steps to set the required permissions.

  1. Click on Roles and select mywebsite_lambda_role.
  2. Under Inline Policies select Create Role Policy.
  3. Select Custom Policy and enter mywebsite-contact-us-table as Policy name.
  4. Copy and paste the following policy, replacing <region> and <account-id> with your own values.
       {
         "Version": "2012-10-17",
         "Statement": [{
           "Sid": "mywebsiteContactUsTableWrite",
           "Effect": "Allow",
           "Action": ["dynamodb:PutItem"],
           "Resource": ["arn:aws:dynamodb:<region>:<account-id>:table/mywebsite-contact-us"]
         }]
       }

Now that the Lambda function has the right permissions to write to the mywebsite-contact-us DynamoDB table, let's create an API endpoint for this Lambda function using API Gateway.

API Creation

Log on to the AWS Management Console, select API Gateway, and follow these steps.

  1. Click “Getting started”.
  2. Click “Create API”.
  3. Select the check box “Import from Swagger”.
  4. Paste the following swagger template and then click “Import”.
     {
       "swagger": "2.0",
       "info": {
         "title": "mywebsite"
       },
       "paths": {
         "/contact-us": {
           "post": {
             "produces": ["application/json"],
             "responses": {
               "200": {
                 "description": "200 response",
                 "schema": {
                   "$ref": "#/definitions/Empty"
                 },
                 "headers": {
                   "Access-Control-Allow-Origin": {
                     "type": "string"
                   }
                 }
               }
             }
           },
           "options": {
             "consumes": ["application/json"],
             "produces": ["application/json"],
             "responses": {
               "200": {
                 "description": "200 response",
                 "schema": {
                   "$ref": "#/definitions/Empty"
                 },
                 "headers": {
                   "Access-Control-Allow-Origin": {
                     "type": "string"
                   },
                   "Access-Control-Allow-Methods": {
                     "type": "string"
                   },
                   "Access-Control-Allow-Headers": {
                     "type": "string"
                   }
                 }
               }
             }
           }
         }
       },
       "definitions": {
         "Empty": {
           "type": "object",
           "title": "Empty Schema"
         }
       }
     }

The mywebsite API is now created with the contact-us resource, and we have to integrate it with the mywebsite-contact-us Lambda function to process contact-us form requests from your website.

API Gateway and Lambda integration

  1. Select mywebsite API.
  2. Click on POST method under contact-us resource.
  3. For Integration type select Lambda function.
  4. Select the region of mywebsite-contact-us Lambda function.
  5. Set Lambda function as mywebsite-contact-us and click Save.
  6. Click OK to give permission for API Gateway to call the mywebsite-contact-us Lambda function.
  7. Click on OPTIONS method under contact-us resource.
  8. For Integration type select Mock and click Save.
  9. Click Enable CORS under Actions for Cross-Origin Resource Sharing. This is needed for posting contact-us form data to this API from your website.

The mywebsite API is now ready, and we need to deploy it so that we can make calls to it.

Deploy the API

  1. Select Deploy API under Actions.
  2. Select New Stage and set test as the stage name.
  3. Keep the Invoke URL handy as we will need it for testing.

Now everything is ready, and we need to make sure it is working as expected.

Test the API

  1. We use Postman for API testing but feel free to use any REST client.
  2. Our testing URL will be the Invoke URL with /contact-us appended.
  3. Select method as POST and use the testing URL.
  4. Post the following JSON as the request body.
     {
       "email": "",
       "name": "Dilip",
       "message": "I am just testing :)"
     }
  5. Log on to the AWS Management Console and navigate to DynamoDB. Check the items of the mywebsite-contact-us table; you should find one entry with the above details. A browser-side sketch of calling the same endpoint from your website follows below.
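
Once the test succeeds, the website's contact-us form can call the same endpoint directly from the browser. Here is a minimal sketch; the URL is a placeholder for your own Invoke URL with /contact-us appended, and the form values are just examples.

    // Post the contact-us form data to the deployed API from the browser.
    const apiUrl = "https://YOUR-API-ID.execute-api.ap-south-1.amazonaws.com/test/contact-us";

    fetch(apiUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            email: "visitor@example.com",        // example form values
            name: "Dilip",
            message: "I am just testing :)"
        })
    })
        .then((response) => response.json())
        .then((result) => console.log("stored:", result))
        .catch((error) => console.error(error));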

Architecture for Contact us request flow

We have covered only a few aspects of the website here, and there is a lot more to cover in upcoming posts, so watch out for future posts in this series.

Host a static website

This is Part 1 of a multi-part blog series on how you can run a fast, dynamic website in a completely serverless manner using managed services provided by AWS.

In upcoming posts, we will explain

To use this method, your website should be built entirely with HTML, CSS, and JavaScript (no server-side code).

S3 Web hosting

Amazon S3 (Simple Storage Service), a widely popular storage service, supports static website hosting. You just need to put all your website files in an S3 bucket and then follow the steps below to get your website up and running in a few minutes at nearly no cost.

Bucket creation

The S3 bucket name should be the same as your website name. When you create the bucket, make sure you select the region closest to your customers. Also note that S3 bucket names are globally unique: if someone already has a bucket named MyBucket, you can't create a bucket with the name MyBucket in any region.
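
If you'd rather script the bucket creation than use the console, a minimal sketch with the Node.js SDK could look like this; the bucket name and region are placeholders.

    // Create the website bucket in the region closest to your customers.
    const AWS = require("aws-sdk");
    const s3 = new AWS.S3({ region: "ap-south-1" });

    s3.createBucket({
        Bucket: "YOUR-WEBSITE-BUCKET-NAME",
        CreateBucketConfiguration: { LocationConstraint: "ap-south-1" }
    }, (err, data) => console.log(err || data.Location));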

Bucket Permissions

We are assuming that you are serving your website directly from S3.

Now we need to make all the files in this S3 bucket publicly readable. To do so, you can attach the following bucket policy to the S3 bucket.

  "Statement": [{
    "Sid": "Allow Public Access to All Objects",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    // Replace following with your website name.
    "Resource": "*" 

Enable website hosting

Now you are just a few steps away from making your website live. Log on to the AWS Management Console, go to S3, select your website bucket, open the bucket properties, and enable Static Website Hosting, specifying your index document (for example, index.html).
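
For those who prefer the SDK to the console, the same setting can be applied with a short Node.js sketch; the bucket and error document names below are placeholders.

    // Enable static website hosting on the bucket and set the index/error documents.
    const AWS = require("aws-sdk");
    const s3 = new AWS.S3();

    s3.putBucketWebsite({
        Bucket: "YOUR-WEBSITE-BUCKET-NAME",
        WebsiteConfiguration: {
            IndexDocument: { Suffix: "index.html" },
            ErrorDocument: { Key: "error.html" }
        }
    }, (err, data) => console.log(err || "Static website hosting enabled"));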

We have now set up a static website, and we will share a lot more details in upcoming posts, so stay tuned.
