Introduction

This guide is based on the Cloud Resume Challenge and follows the steps listed inside it. That said, I strongly recommend purchasing the original book, as it covers far more detail. This post only shows the approach I took for my own challenge. Think of it as a helpful guide rather than a complete answer key.

The elements covered in this post:

  1. AWS (S3 bucket, CloudFront, IAM, Lambda Function, DynamoDB, OIDC Identity Provider, DNS, Certificate Manager and more)
  2. CI/CD (GitHub Actions)
  3. Infrastructure as Code (Terraform)
  4. Frontend Language (HTML, CSS) -> now switched to Hugo
  5. Backend Language (Python, JavaScript)
  6. Testing (Cypress)
Please read through the challenge on the official website before looking into my post, because I will dive directly into how I solved each problem.

1. Certification

The challenge recommends the Certified Cloud Practitioner as the baseline. However, I took the Solutions Architect - Associate for a higher-level approach. Either way, the certificate gives you solid professional knowledge of cloud services.

Q&A

  • Q: Is getting a certificate worth it?
  • A: The short answer: yes and no. If you want to become a cloud engineer or a similar role (Site Reliability Engineer, DevOps engineer), the answer is yes! If your goal is just to learn cloud services, then a certificate may not be worth it. Here are two major benefits:
1. Experience from the Certificate: The resume challenge will guide you through a limited set of AWS resources, specifically focusing on how to host a static website. In real work, you will need to handle a much wider range of scenarios.
  Let me offer you some cases:
   - Are you familiar with VPC networks and EC2 instances, which are commonly used by companies?
   - How do you prevent accidental deletion in a bucket? (Versioning / MFA delete)
   - Do you understand the architectural difference between a company that wants to migrate services from on-premise to the cloud while treating the on-premise data center as a backup, versus a company that wants to extend its data storage to the cloud but keep all services hosted on-premises?
   - ...

These scenarios are not covered in the resume challenge, but you will encounter them on the certification exam. So, like I said, define your goal for this challenge: decide whether you want to dive deep into the cloud world or not.


2. Career: It does add value to your resume and helps you stand out from other candidates, especially with this cloud project on it.

  • Q: Any good resources you recommend to prepare for the exam?
  • A: I used the Dojo bundle along with their exam. This was the only resource I used to prepare for the exam. You are absolutely free to explore any other lessons. (Note: I do not receive any compensation from Dojo and have no personal affiliation with them. I recommend it simply because it was the only resource I used; I cannot guarantee the quality of other materials).
*Important* Don't pay full price for the certification exam! Look for coupons or vouchers online!

2. Getting Started with AWS and IAM

Head over to AWS and register an account. Yes, you’ll need to enter your credit card info, but don’t panic: AWS has a Free Tier that lasts for a year.

As for IAM, I recommend just sticking with the root user for now. It gives you full access to all the services; otherwise you’ll get tired of chasing down access denials later on.

Warning: This is a bad practice for security reasons. Don't do this long term! Especially if you're setting this up in a real production environment!
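If you'd rather not stay on the root user, here's a minimal AWS CLI sketch for creating a dedicated admin user instead (the user name is just a placeholder, and AdministratorAccess is deliberately broad, so scope it down once you know which services you actually need):

# Create a dedicated admin user so you can stop using root for day-to-day work
aws iam create-user --user-name cloud-resume-admin

# Attach the AWS-managed AdministratorAccess policy (broad on purpose; tighten it later)
aws iam attach-user-policy --user-name cloud-resume-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Create access keys for CLI use, then store them locally with `aws configure`
aws iam create-access-key --user-name cloud-resume-admin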

3. HTML & CSS

This is the foundation of your website’s frontend, and how you build it depends on your own style and taste. I use Hugo along with its PaperMod theme to build my website. I strongly recommend using an existing tool to build your portfolio instead of hand-coding everything with plain HTML and CSS. Here’s why:

  1. If you’re just starting out and don’t plan on becoming a frontend developer, it’s really not the best use of your time.
  2. Let’s be real, writing perfect CSS for a beautifully designed website is super hard, especially when you’re still learning.
  3. Even if you manage to finish your website and the design meets your expectations, consider whether the time spent was worth the result you achieved.

Some other popular tools: Adobe, Notion, Wix

I’m definitely not trying to discourage anyone from writing their own HTML and CSS. In fact, I absolutely take my hat off to anyone who practices writing good CSS. My point is just a friendly heads-up: I personally spent over 30 hours coding my site from scratch, and honestly, it still didn’t come close to what Hugo gave me in far less time. Switching to Hugo was a game changer.
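If you want to try the same route, here's a rough sketch of how a Hugo + PaperMod site can be bootstrapped (the site name is a placeholder, and the config file may be config.toml on older Hugo versions; this is one common way, not the only one):

# Create a new Hugo site and add the PaperMod theme as a git submodule
hugo new site my-resume-site
cd my-resume-site
git init
git submodule add https://github.com/adityatelange/hugo-PaperMod themes/PaperMod
echo 'theme = "PaperMod"' >> hugo.toml

# Preview locally, then build the static files into public/
hugo server
hugo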

I came across an HTML & CSS Tutorial that I found really interesting. Just to clarify, I didn’t use this tutorial in my own learning journey, but I thought the course designer’s final task of building a YouTube-style webpage was pretty cool.

4. Static Website

We store the HTML & CSS files in an S3 bucket. Here’s how you can do it:

  1. Log in to your AWS S3 console.
  2. Click on “Create bucket” (it’s in an orange box).
  3. Enter a unique bucket name. This name has to be globally unique across all AWS accounts. Then hit “Create bucket”. [You don’t need to change any other settings.] (screenshot: Create bucket)
  4. Go to your newly created bucket, click Upload, and upload your files. Make sure that your index.html file is right in the root directory of the bucket. This means when you click on your bucket, you should see index.html directly in the file section, not inside any folder!

For future convenience, you can use the AWS CLI to upload the files from the terminal. There’s a useful tutorial video from Frank if you need it.

Commands we often use:

# Upload new/changed files and delete files that are no longer present in your local folder
aws s3 sync ./your_folder/ s3://your-bucket --delete --exclude "*.DS_Store" --exclude ".gitignore" --exclude ".git/*"

# Clean up all files in your bucket
aws s3 rm s3://your-bucket --recursive
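If you prefer to do the bucket setup itself from the terminal too, here's a minimal sketch (the bucket name is a placeholder and has to be globally unique; versioning is optional but guards against the accidental-deletion scenario mentioned earlier):

# Create the bucket (us-east-1 shown as an example region)
aws s3 mb s3://your-bucket --region us-east-1

# Optional: enable versioning so accidentally deleted or overwritten files can be recovered
aws s3api put-bucket-versioning --bucket your-bucket --versioning-configuration Status=Enabled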

5. HTTPS

Using HTTPS with CloudFront has several benefits:

  1. Encryption: HTTPS encrypts your content while it’s being transferred between AWS and the user’s browser, keeping it secure.
  2. Traffic Control: CloudFront caches your content at edge locations, so it scales with demand and serves users from servers closer to them. (Distribution got its name for a reason :)
  3. Cost: Data transfer from S3 to CloudFront is free, and CloudFront’s free tier comfortably covers a personal site. (Just a note: if you serve the website directly from S3, AWS charges per read request.)

Let me break down the CloudFront setup for you:

  1. Log in to your AWS CloudFront console.
  2. Click on “Create distribution” (it’s in an orange box).
  3. Update the following sections:
    • Origin domain: Choose the bucket you created.
    • Origin access: Set this to Origin access control settings to ensure that only your distribution can access the S3 content.
      • Click Create New OAC, then Create.
    • Viewer Protocol policy: Select Redirect HTTP to HTTPS.
    • WAF: Do Not Enable
    • Default root object: Set this to index.html (or whatever your root file is actually named, like “project1.html” or “random_name.html”). CloudFront needs to know the name of the root object to serve your content correctly.
  4. After creating your distribution, wait about 3-5 minutes for it to deploy. Once it’s ready, go to your new distribution details. You’ll find a section prompting you to create a policy and paste it into S3.
    • Go to S3 -> Permission in the navbar -> Scroll down to Bucket Policy -> Paste your policy there.

If you need a policy template, here’s one you can use:

{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your_bucket/*",
      "Condition": {
        "StringEquals": { "AWS:SourceArn": "your_cloudfront_arn" }
      }
    }
  ]
}
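If you'd rather apply the policy from the terminal instead of pasting it into the console, here's a sketch assuming you saved the template above as policy.json:

# Attach the bucket policy that lets only your CloudFront distribution read the objects
aws s3api put-bucket-policy --bucket your_bucket --policy file://policy.json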

You can find the ARN on your distribution’s details page. (screenshot: CloudFront ARN)

  1. By now, you should be able to see your site via the CloudFront URL (the Distribution domain name shown in the screenshot above).
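Optionally, you can sanity-check things from the terminal (the domain and distribution ID below are placeholders). The second command is handy later on, because CloudFront keeps serving cached copies after you re-upload files to S3:

# Check that CloudFront serves your site over HTTPS
curl -I https://dxxxxxxxxxxxx.cloudfront.net

# After uploading new files to S3, clear CloudFront's cached copies
aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"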

6. DNS

You can buy a domain from anywhere. Some popular options are Route 53, Cloudflare, or any other DNS provider.

  1. A domain typically costs around $10 a year, and that’s the only real expense for this project.
  2. Refer to your DNS provider for a simple guide on purchasing the domain, it’s usually just a simple process of typing the desired domain name, paying, and you’re good to go.

Now, let’s move on to connecting your domain with CloudFront so your S3 content can be accessed through your domain (just like how my site is at https://www.ziirui-resume-website.com). We’ll also use AWS Certificate Manager to add SSL for security.


Steps for Creating SSL and Verifying Your Domain
  1. Log in to the Certificate Manager Console
  2. Click Request (orange button) and then Next
  3. Enter your domain names (e.g. example.com and www.example.com), then hit Request

You’ve now created an SSL certificate, but we need to verify that the domain is yours. Here’s how to do that:

  1. Scroll down to the Domains section of your new SSL certificate. You’ll see a CNAME name and a CNAME value. These are like a key and value pair. You’ll need to create a record in your DNS provider for each of them. Don’t panic! Just search “add record in [Your_DNS_provider_name]” in your browser for step-by-step videos. Reminder:

    • The record type is CNAME.
    • The record name is CNAME name.
    • Record target is the CNAME value.
    • Record status is No Proxy (DNS only).
  2. Do this step for all of your SSL domains. Once you’ve added those records, check back in the Certificate Manager. If everything’s set up correctly, you’ll see “Success.” (You can also verify the records from the terminal; see the dig sketch below.) (screenshot: CNAME success)

  3. After verification, we have proved domain ownership to Certificate Manager, so now we can connect the DNS with our CloudFront distribution. Steps below:

    1. Copy your CloudFront URL
    2. Go to your DNS provider and create two CNAME records:
      • One with Name = www and Target = CloudFront URL.
      • Another with Name = your root domain (e.g., example.com) and Target = CloudFront URL.

Final look at the DNS provider console: (screenshot: CNAME records)
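To double-check the records from the terminal, you can query them with dig (the names below are placeholders for your actual CNAME name and domain):

# Verify the ACM validation record resolves to its CNAME value
dig +short _your_cname_name.example.com CNAME

# Verify the www record points at your CloudFront URL
dig +short www.example.com CNAME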


Setting up SSL in CloudFront (don't panic, it's simple; you've done most of the work):
  1. Go to your CloudFront distribution
  2. Under General, click Edit Settings.
  3. In Alternate domain name (CNAME), enter your domains (“your_root_domain” and “www.your_root_domain”)
  4. In Custom SSL certificate, select the certificate you created. (screenshot: CloudFront SSL)

Now, test your website using your domain. If all went well, your content should be accessible via your personal DNS!

7. Database

For this part, we’ll be using DynamoDB to keep track of our visitor count.

Here’s how to create a DynamoDB table:

  1. Same as always, open your DynamoDB console
  2. Click the Create Table (it’s orange).
  3. Fill out the form:
    • Table Name: You can choose any name you like.
    • Partition Key: Enter id and select String
    • Table Setting: Choose Customize settings
    • Read/write capacity settings: Set this to On-demand instead of Provisioned to avoid extra charges.
  4. Click Create Table

Explanation: The partition key can be anything; it’s just the way we’ll look up the visitor counter from Python later on. We choose On-demand for the table settings so we only pay for what we use, rather than the default Provisioned setting, which can lead to unnecessary charges.

  1. Once your table is created, click Explore items (found in the orange box at the top right).
  2. Scroll down and hit the Create item button.
  3. For the id value section, you can enter 1 for simplicity.
  4. In the top right corner, click Add new attribute and select Number for the attribute type. This will be used for the visitor count.
  5. Create an attribute named count and set its value to 1.

Here’s what it should look like: (screenshot: DynamoDB table item)

Now you have an item with an id of 1 and a count value of 1. Next, we’ll set up a Lambda Function to retrieve and increment this value.
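If you want a CLI equivalent of the console steps above, here's a rough sketch using the same id and count attribute names (the table name is a placeholder):

# Create an on-demand table with a string partition key named "id"
aws dynamodb create-table \
  --table-name your_table_name \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Seed the visitor-counter item (id "1" with a count of 1)
aws dynamodb put-item \
  --table-name your_table_name \
  --item '{"id": {"S": "1"}, "count": {"N": "1"}}'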

8. Lambda API & Python

Setting up a Lambda Function should be pretty straightforward for you. Just a quick tip: make sure to choose the programming language (the Runtime) you prefer for interacting with the database. For this guide, we’ll use Python. Remember, this Lambda function will trigger every time someone visits your domain.

We’ll be using the Boto3 library to work with DynamoDB from Python. Without further ado, here’s a complete code template for your Lambda function:

Warning: Don’t copy this code if you’re not familiar with Python or the Boto3 library. Doing so without understanding could lead to issues and will only cause you more trouble in the long run.
import json
import boto3

 # Connect to the DynamoDB table
dynamodb = boto3.resource("dynamodb")
table_name = "your_table_name"
table = dynamodb.Table(table_name)

def lambda_handler(event, context):
  id = "your_id"
  response = table.get_item(Key={
      'id':id
  })

  # If the item exists, increment the count by 1
  if "Item" in response:
      views = response["Item"]["your_number_type_key_name"]
      views += 1
      table.put_item(Item = {
          "id" : id,
          "your_number_type_key_name" : views
      })
  # If the item doesn't exist (this is the first time our site is visited), create a count of 1
  else:
      views = 1
      table.put_item(Item = {
          "id" : id,
          "your_number_type_key_name" : views
      })

  # The response our JavaScript will receive on the HTML page
  return_format = {
          'statusCode': 200,
          'body': json.dumps({
              'message': 'success',
              'count': int(views),
              'event': event
          })
      }


  return return_format

After we set up this Python code, you can test your Lambda Function in the AWS console. It should work fine for retrieving DynamoDB data. Now, we need to set up a URL for Lambda so that our local HTML file can trigger this Lambda function.
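You can also run the same test from the terminal with the AWS CLI, if you prefer (the function name is a placeholder):

# Invoke the function with an empty event and print the response
aws lambda invoke --function-name your_function_name --cli-binary-format raw-in-base64-out --payload '{}' response.json
cat response.json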

Steps to set up the URL: Watch this video for a step-by-step guide.

Noticeable changes to make:

  1. Allow Origins: Add your domain values here, such as https://example.com and https://www.example.com
  2. Allow headers: Add the value content-type
  3. Allow methods: Add POST
  4. Max age: Set this to 43200 seconds, which equals 12 hours for caching.
  5. Allow credentials: Enable this option to reduce verification times.
  6. Click Create Function, and the Function URL should appear at the bottom right corner of your screen.
Note: This is a basic setup for convenience, as Function URLs do not address DDoS protection.
Two alternative solutions are:
1) Switch to API Gateway for a more web-friendly and secure access method, allowing your website to make requests while restricting access from unauthorized users.
2) Configure the IAM role in Lambda URLs to only allow CORS (your domain) to use the link.
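Once the Function URL exists, a quick smoke test from the terminal looks something like this (the URL is a placeholder for your own Function URL):

# Send the same kind of POST request the website's JavaScript will send
curl -X POST "https://your_url_id.lambda-url.us-east-1.on.aws/" \
  -H "Content-Type: application/json" \
  -d '{}'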

9. JavaScript

Now that we’ve got the Lambda function reading data from DynamoDB, the next step is getting that data into our HTML file so we can display it on the website. This is where JavaScript comes in handy!

We’ll be using a simple format from AWS to help us fetch and display the DynamoDB values.

(Here is a Javascript tutorial if you need.)

// Make sure this class exists in your HTML (e.g. <p class="your_counter_id_in_html"> times</p>)
const counter = document.querySelector(".your_counter_id_in_html");

async function update_views() {
  try {
    const response = await fetch("your_lambda_function_URL", {
      method: "POST", // Set the method to POST
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({}),
    });

    const data = await response.json();
    counter.innerHTML = `${data.count}`;
  } catch (error) {
    // Fallback value shown if the request to the Lambda function fails
    counter.innerHTML = `43`;
  }
}

update_views();

Congratulations! You’ve finished the website part of this challenge. The rest of the content is where we dive into the cloud world: CI/CD and Infrastructure as Code. So if you aspire to be a cloud engineer, or just like to challenge yourself, keep going!

10. CI/CD

Here’s how we’ll use GitHub Actions. If you’re not familiar with CI/CD (Continuous Integration and Continuous Deployment), let me give you a quick scenario:

Imagine you push your files to GitHub, but you need those files to be updated on AWS. Without CI/CD, you’d have to manually update them on AWS, uploading HTML and CSS files to an S3 bucket, updating your Python code in a Lambda function, and possibly adjusting policies in different places. And if a bug pops up after all that work, guess what? You’d have to go through the whole process again!

CI/CD automates these tasks, so you don’t have to handle all that manual work every time you make a change. In our final result, all you need to do is push your code to GitHub using git push origin main, and GitHub will automatically update the changes on AWS for you.


Steps to create a CI/CD pipeline in GitHub Actions:

  1. In your local repository, create a workflow file called front-end-cicd.yml in the .github/workflows directory. If you don’t have the “.github” and “workflows” folders, create them manually. You should end up with .github/workflows/front-end-cicd.yml in your repository.
  2. Now we’re writing in YAML. I’ll provide you with two templates for your choice:

Option 1: GitHub OIDC

  1. The key difference between OIDC and Jakejarvis’s CI/CD is that OIDC doesn’t require your AWS Access Key. Instead, it uses a trusted third-party identity provider for authentication. For example, you can use your Google, GitHub, Facebook, or LinkedIn account to sign in to LeetCode; all of these third parties are trusted by LeetCode.

  2. When we use GitHub OIDC, GitHub can automatically access your AWS resources based on your GitHub identity, without requiring any AWS keys. This is more secure and convenient.


Steps to set up OIDC in AWS:

  1. Set up an identity provider in AWS for GitHub (provider URL https://token.actions.githubusercontent.com, audience sts.amazonaws.com). A CLI equivalent is sketched after this list.
  2. After creating the OIDC provider, go to the provider and select Assign Role from the top right corner.
  3. Select Create a new role
  4. Modify the following fields, then click Next:
    • Trusted entity type: Web Identity
    • Audience: Select sts.amazonaws.com
    • GitHub organization: Enter your GitHub account name, for example, my name is zirui2333
  5. For the policy, select AdministratorAccess for simplicity, then click Next.
  6. Enter a role name of your choice: github_oidc, then click Create Role.
  7. Go back to the OIDC provider and select Endpoint verification.
  8. Click Manage in Thumbprints and add the hex 1b511abead59c6ce207077c0bf0e0043b1382612 (This might change every year, check online “Github Thumbprints + current year”), then click Save changes.
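For reference, step 1 can also be done with the AWS CLI; here's a sketch assuming GitHub's issuer URL and the thumbprint from step 8 (verify the current thumbprint before using it):

# Create the GitHub OIDC identity provider in IAM
aws iam create-open-id-connect-provider \
  --url "https://token.actions.githubusercontent.com" \
  --client-id-list "sts.amazonaws.com" \
  --thumbprint-list "1b511abead59c6ce207077c0bf0e0043b1382612"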

Steps to set up a CI/CD YAML file in GitHub:

  1. Copy the following template into your YAML file:

    # Sample workflow to access AWS resources when workflow is tied to branch
    # The workflow Creates static website using aws s3
    name: AWS Frontend
    on:
      push:
        paths:
          - "Your_html_folder/**" # Trigger CI/CD on changes in your HTML folder
        branches: ["main"]
    env:
      BUCKET_NAME: ${{ secrets.AWS_S3_BUCKET }}
      AWS_REGION: "us-east-1"
      # permission can be added at job level or workflow level
    permissions:
      id-token: write # This is required for requesting the JWT
      contents: read # This is required for actions/checkout
    jobs:
      #Upload frontend code
    
      S3PackageUpload:
        runs-on: ubuntu-latest
        steps:
          - name: Git clone the repository
            uses: actions/checkout@v4
          - name: configure aws credentials
            uses: aws-actions/configure-aws-credentials@v3
            with:
              role-to-assume: "arn:aws:iam::<your_aws_account_id>:role/github_oidc" # Use the ARN of the IAM role you created for OIDC
              role-session-name: aws_frontend_workflow
              aws-region: ${{ env.AWS_REGION }}
          # Upload a file to AWS s3
          - name: Copy Actual_Resume_Web to s3
            run: aws s3 sync ./Your_html_folder/ s3://${{ env.BUCKET_NAME }}/ --delete --exclude "*.DS_Store" --exclude ".gitignore" --exclude ".git/*"
    
  2. Remember to add Secrets for your S3 bucket name and other credentials. (You can also add them from the terminal with the GitHub CLI; see the sketch after these steps.)

  3. Once you’ve added the secret and modified the template, push the file to GitHub using the following commands:

      git add .github/workflows/front-end-cicd.yml
      git commit -m "Create CI/CD pipeline for frontend"
      git push origin main
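If you have the GitHub CLI installed, the secrets can also be set from the terminal; a sketch with a placeholder value, run from inside your repository:

# Store the bucket name as a repository secret used by the workflow above
gh secret set AWS_S3_BUCKET --body "your-bucket-name"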
    

Option 2: Jakejarvis’s CI/CD

  1. Copy Jakejarvis’s CI/CD template into your YAML file, and replace the top part with:

    on:
      push:
        paths:
          - "Your_html_folder/**" # CI/CD will only run if the push contains changes in your HTML folder, i.e. only when your frontend code changes
        branches: ["main"]
    
  2. Make your S3 bucket publicly readable, and make sure the bucket’s public access settings allow it.

  3. Add Secrets to GitHub:

    • Follow the steps for adding Secrets, including your S3 bucket name and Access Key ID.
    • I previously shared a video from Frank explaining how to create an access key. You can revisit that here if needed.
    • If you’ve created the access key but can’t view it again due to AWS’s security policies, you can retrieve it from your terminal by running:
      cd ~ # Go to your home directory
      cat .aws/credentials # Print the access key ID and secret access key
    
  4. After locating your keys, add them to your GitHub Secrets. Important: Please comment out the SOURCE_DIR: ‘public’ line in the Jakejarvis template to prevent conflicts in your system.

  5. Once you have modified the template and added your Secrets, push the file to GitHub with:

      git add .github/workflows/front-end-cicd.yml
      git commit -m "Create CI/CD pipeline for frontend"
      git push origin main
    

11. Infrastructure as Code

This is another essential skill in the cloud computing world. We will use Terraform.

For those who don’t know Infrastructure as Code: it’s used to automate cloud services, in our case AWS services like the S3 bucket, DynamoDB, the CloudFront distribution, Certificate Manager and more.

Let me give you a scenario. You went through all the steps I listed previously and saw lots of configuration screenshots from the AWS console, right? That’s one way to create services in AWS. But imagine something goes wrong and you need to re-create all the services again (oh no!). You definitely don’t want to manually walk through every single service creation again.

That’s where Infrastructure as Code comes in. In simple words, we write code that describes those services, specifically their configurations. Next time you want to recreate a whole series of services, you just run the code and it creates everything for you. Sounds amazing, right? Let’s get started!

Steps for Terraform configuration:

  1. If you’re using VS Code, go ahead with this link. If you’re using another editor, search “Terraform installation” + your editor’s name for a guide.
  2. Once the installation finishes, go to your repository root and create a folder named infra. Inside it, create two files, main.tf and provider.tf, so their paths are infra/main.tf and infra/provider.tf.
Note: infra is just a folder name to keep the infrastructure code separate from the rest of your code; you can name it anything you want.

"main.tf" and "provider.tf" are naming conventions rather than requirements (Terraform reads every .tf file in the folder), but we'll stick with them.

"provider.tf" is where the Terraform-specific configuration goes: telling it we're writing code for AWS, which provider version to use, the region our AWS services live in, and so on.

"main.tf", on the other hand, is the big place where we write the configuration of the AWS services themselves. Don't panic, I will show you examples of how to do it!

Write provider.tf

  1. Copy the code:
  # Tell Terraform we're targeting AWS, which provider to use and which version
  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = ">= 4.9.0"
      }
    }

    # All the state for our configuration will be stored in this bucket. I will explain the reason for this later.
    backend "s3" {
      bucket = "your-terraform-backup-bucket-name"
      key    = "terraform.tfstate"
      region = "us-east-1"
    }
  }

  # Simply the region
  provider "aws" {
    alias  = "us_east_1"
    region = "us-east-1"
  }
  2. Because we told Terraform to keep its state in a bucket, go to the console to create that bucket, or use the command below if you have the AWS CLI installed.

      # Remember bucket name needs to be globally unique, replace "your-terraform-backup-bucket-name" for another name you like
      aws s3 mb s3://your-terraform-backup-bucket-name --region us-east-1
    
    The reason for this backup bucket is that "terraform.tfstate" is a file Terraform creates to keep track of the AWS services it has already created, much like Git uses the ".git" folder to track how your local code differs from what is on GitHub. By default, Terraform creates this file locally.
    
    However, as I said in the CI/CD portion, we want an automated pipeline: once we push the code to GitHub, GitHub deals with AWS instead of us. So we make Terraform store its state in that S3 bucket, where the GitHub runner can read and update it on every run.
    
  3. After successfully configuring provider.tf, we’ll dive into the boss: main.tf.
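Whatever ends up in main.tf, the day-to-day Terraform workflow from the infra folder looks like this:

cd infra
# Download the AWS provider and set up the S3 backend
terraform init
# Preview what Terraform would create or change
terraform plan
# Apply the changes (Terraform asks for confirmation)
terraform apply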

12. Testing with Cypress

This section is still in progress, so stay tuned!

Feel free to check out the other features in my portfolio!