Introduction
This guide is based on the Cloud Resume Challenge and follows the steps listed in it. That said, I strongly recommend purchasing the original book, as it covers things in much more detail. This post only shows the approach I took for my own challenge. Think of it as a helpful guide rather than a complete answer key.
The elements covered in this post:
- AWS (S3 bucket, CloudFront, IAM, Lambda, DynamoDB, OIDC identity provider, DNS, Certificate Manager, and more)
- CI/CD (GitHub Actions)
- Infrastructure as Code (Terraform)
- Frontend (HTML, CSS) -> now switched to Hugo
- Backend (Python, JavaScript)
- Testing (Cypress)
Please read through the challenge on the official website before looking into my post, because I will dive directly into how I solved each problem.
1. Certification
The challenge recommends the Certified Cloud Practitioner as the baseline. However, I took the Solutions Architect - Associate for a more advanced path. Either way, the certificate gives you solid professional knowledge of cloud services.
Q&A
- Q: Is getting a certificate worth it?
- A: The short answer: yes and no. If you want to become a cloud engineer or a similar role (Site Reliability Engineer, DevOps engineer), the answer is yes! If your goal is just to learn cloud services, then a certificate is less valuable. Here are the two major benefits:
1. Experience from the certificate: The resume challenge guides you through a limited set of AWS resources, focusing specifically on how to host a static website. In practice, you will need to handle a much wider range of scenarios.
Let me offer you some cases:
- Are you familiar with VPC networks and EC2 instances, which are commonly used by companies?
- How do you prevent accidental deletion in an S3 bucket? (Versioning / MFA delete)
- Do you understand the architectural difference between a company that wants to migrate services from on-premise to the cloud while treating the on-premise data center as a backup, versus a company that wants to extend its data storage to the cloud but keep all services hosted on-premises?
- ...
These scenarios are not covered in the resume challenge, but you will encounter them on the certification exam. So, like I said, define your goal for this challenge: decide whether you want to dive deep into the cloud world or not.
2. Career: It does add value to your resume and helps you stand out from other candidates, especially combined with this cloud project.
- Q: Any good resources you recommend to prepare for the exam?
- A: I used the Dojo bundle along with their exam. This was the only resource I used to prepare for the exam. You are absolutely free to explore any other lessons. (Note: I do not receive any compensation from Dojo and have no personal affiliation with them. I recommend it simply because it was the only resource I used; I cannot guarantee the quality of other materials).
*Important* Don't pay full price for the certification exam! Look for coupons or vouchers online!
2. Getting Started with AWS and IAM role
Head over to AWS and register an account. Yes, you’ll need to enter your credit card info, but don’t panic: AWS has a Free Tier that lasts for a year.
As for the IAM role, I recommend just sticking with the root user for now. It gives you full access to all the services; otherwise you’ll get worn down by access-denied errors later on.
Warning: This is a bad practice for security reasons. Don't do this long term! Especially if you're setting this up in a real production environment!
3. HTML & CSS
The foundation of a website’s frontend: how you code it depends on your own style and taste. I use Hugo with its PaperMod theme to build my website. I strongly recommend using an existing tool to build your portfolio instead of hand-coding everything with plain HTML and CSS. Here’s why:
- If you’re just starting out and don’t plan on becoming a frontend developer, it’s really not the best use of your time.
- Let’s be real, writing perfect CSS for a beautifully designed website is super hard, especially when you’re still learning.
- Even if you manage to finish your website and the design meets your expectations, consider whether the time spent was worth the result you achieved.
Some other popular tools: Adobe, Notion, Wix
I’m definitely not trying to discourage anyone from writing their own HTML and CSS. In fact, I absolutely take my hat off to anyone who practices writing good CSS. My point is just a friendly heads-up. I personally spent over 30 hours coding my site from scratch, and honestly, it still didn’t come close to what Hugo gave me in way less time. Switching to Hugo was a game changer.
I came across an HTML & CSS tutorial that I found really interesting. Just to clarify, I didn’t use this tutorial in my own learning journey, but I thought the course designer’s final task of building a YouTube-style webpage was pretty cool.
4. Static Website
We store the HTML & CSS files in an S3 bucket. Here’s how you can do it:
- Log in to your AWS S3 console.
- Click on “Create bucket” (it’s in an orange box).
- Enter a unique bucket name. This name has to be unique across all AWS accounts globally. Then hit “Create bucket”. [You don’t need to change any other settings.]
- Go to your newly created bucket, click Upload, and upload your files. Make sure that your index.html file is right in the root of the bucket. This means when you click on your bucket, you should see index.html directly in the file list, not inside any folder!
For future convenience, use the AWS CLI to upload files from your terminal. There’s a useful tutorial video from Frank if you need it.
Commands we often use:
# This command uploads new and changed files and deletes the files that are no longer present in your local folder.
aws s3 sync ./your_folder/ s3://your-bucket --delete --exclude "*.DS_Store" --exclude ".gitignore" --exclude ".git/*"
# Clean up all files in your bucket
aws s3 rm s3://your-bucket --recursive
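If you'd rather script the upload in Python instead of the CLI, here is a minimal boto3 sketch under the same idea (the bucket and folder names are placeholders). It also sets Content-Type, which matters for static sites so the browser renders index.html instead of downloading it. Note that unlike sync --delete, this sketch does not remove files you've deleted locally.
# Hedged sketch: upload a local site folder to S3 with boto3 (placeholder names).
import mimetypes
from pathlib import Path
import boto3

BUCKET = "your-bucket"          # assumption: replace with your bucket name
SITE_DIR = Path("your_folder")  # assumption: the folder containing index.html etc.

s3 = boto3.client("s3")

for path in SITE_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(SITE_DIR).as_posix()  # keeps index.html at the bucket root
        content_type, _ = mimetypes.guess_type(path.name)
        s3.upload_file(
            str(path),
            BUCKET,
            key,
            ExtraArgs={"ContentType": content_type or "binary/octet-stream"},
        )
        print(f"uploaded {key}")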
5. HTTPS
Using HTTPS with CloudFront has several benefits:
- Encryption: HTTPS encrypts your content while it’s being transferred between AWS and the user’s PC, keeping it secure.
- Traffic Control: CloudFront helps manage traffic more efficiently and scales with demand. (Distribution got its name for a reason :)
- Cost: Data transfer from S3 to CloudFront is free, and CloudFront has a generous free tier. (Just a note: if you serve your website directly from S3, AWS charges for every GET request.)
Let me break down the CloudFront setup for you:
- Log in to your AWS CloudFront console.
- Click on “Create distribution” (it’s in an orange box).
- Update the following sections:
- Origin domain: Choose the bucket you created.
- Origin access: Set this to Origin access control settings to ensure that only your distribution can access the S3 content. Click Create new OAC, then Create.
- Viewer protocol policy: Select Redirect HTTP to HTTPS.
- WAF: Do not enable.
- Default root object: Only change this if your index.html is named something different, like “project1.html” or “random_name.html”. CloudFront needs to know the name of the root object to serve your content correctly.
- After creating your distribution, wait about 3-5 minutes for it to deploy. Once it’s ready, go to your new distribution details. You’ll find a section prompting you to create a policy and paste it into S3.
- Go to S3 -> Permission in the navbar -> Scroll down to Bucket Policy -> Paste your policy there.
If you need a policy template, here’s one you can use:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your_bucket/*",
      "Condition": {
        "StringEquals": { "AWS:SourceArn": "your_cloudfront_arn" }
      }
    }
  ]
}
Find your distribution’s ARN on the CloudFront distribution details page (it looks like arn:aws:cloudfront::123456789012:distribution/XXXXXXXXXXXX).
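If you prefer attaching the policy from code rather than pasting it into the console, here is a hedged boto3 sketch of the same step (the bucket name and distribution ARN are placeholders you'd replace with your own):
# Hedged sketch: attach the bucket policy with boto3 instead of the console.
import json
import boto3

bucket = "your_bucket"  # assumption: your bucket name
distribution_arn = "arn:aws:cloudfront::123456789012:distribution/XXXXXXXXXXXX"  # assumption

policy = {
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))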
- By now, you should be able to see your site at the CloudFront URL (the Distribution domain name shown in the distribution details).
6. DNS
You can buy a domain from anywhere. Some popular options are Route 53, Cloudflare, or any other DNS provider.
- A domain typically costs around $10 a year, and that’s the only real expense for this project.
- Refer to your DNS provider for a simple guide on purchasing the domain; it’s usually just a matter of typing the desired domain name, paying, and you’re good to go.
Now, let’s move on to connecting your domain with CloudFront so your S3 content can be accessed through your domain (just like how my site is at https://www.ziirui-resume-website.com). We’ll also use AWS Certificate Manager to add SSL for security.
Steps for Creating SSL and Verifying Your Domain
- Log in to the Certificate Manager Console
- Click Request (orange button) and then Next
- Enter your domain name (e.g., example.com), then hit Request
You’ve now created an SSL certificate, but we need to verify that the domain is yours. Here’s how to do that:
Scroll down to the Domains section of your new SSL certificate. You’ll see a CNAME name and a CNAME value; these are like a key and value pair. You’ll need to create a record in your DNS provider for each of them. Don’t panic! Just search “add record in [Your_DNS_provider_name]” in your browser for step-by-step videos. Reminder:
- The record type is CNAME.
- The record name is the CNAME name.
- The record target is the CNAME value.
- The record status is No Proxy.
Do this step for all of the domains listed on your SSL certificate. Once you’ve added those records, check back in Certificate Manager. If everything’s set up correctly, you’ll see “Success.”
After verification, we have proven domain ownership to Certificate Manager, so now we can connect our DNS to CloudFront. Steps below:
- Copy your CloudFront URL
- Go to your DNS provider and create two CNAME records:
  - One with Name = www and Target = your CloudFront URL.
  - Another with Name = your root domain (e.g., example.com) and Target = your CloudFront URL.
Final look for DNS provider console:
Setting up SSL in CloudFront (don’t panic, it’s simple; you’ve done most of the work already):
- Go to your CloudFront distribution
- Under General, click Edit Settings.
- In Alternate domain name (CNAME), enter your domains (“your_root_domain” and “www.your_root_domain”)
- Under Custom SSL certificate, select the SSL certificate you created.
Now, test your website using your domain. If all went well, your content should be accessible via your personal DNS!
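As a quick sanity check from your terminal, here is a small Python sketch (replace the domain with your own) that fetches the page over HTTPS and prints the status code plus the certificate issuer, which should show Amazon once Certificate Manager and CloudFront are wired up:
# Hedged sketch: confirm the site answers over HTTPS and inspect the certificate.
import socket
import ssl
import urllib.request

domain = "www.example.com"  # assumption: use your own domain here

# 1) Does the site answer over HTTPS?
with urllib.request.urlopen(f"https://{domain}") as resp:
    print("HTTP status:", resp.status)

# 2) Which certificate is being served?
ctx = ssl.create_default_context()
with socket.create_connection((domain, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=domain) as tls:
        issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
        print("Issuer:", issuer.get("organizationName"))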
7. Database
For this part, we’ll be using DynamoDB to keep track of our visitor count.
Here’s how to create a DynamoDB table:
- Same as always, open your DynamoDB console
- Click the Create Table (it’s orange).
- Fill out the form:
- Table Name: You can choose any name you like.
- Partition key: Enter id and select String.
- Table settings: Choose Customize settings.
- Read/write capacity settings: Set this to On-demand instead of Provisioned to avoid extra charges.
- Click Create Table
Explanation: The partition key can be anything; it’s just the key we’ll use to look up the visitor counter from Python later on. We choose On-demand for the table settings so we only pay for what we use, rather than the default Provisioned setting, which bills for capacity whether you use it or not.
- Once your table is created, click Explore items (found in the orange box at the top right).
- Scroll down and hit the Create item button.
- For the id value, you can enter 1 for simplicity.
- In the top right corner, click Add new attribute and select Number as the attribute type. This will be used for the visitor count.
- Create an attribute named count and set its value to 1.
Here’s what it should look like:
Now you have an item with an id of 1 and a count of 1. Next, we’ll set up a Lambda function to retrieve and increment this value.
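If you'd rather do the same setup from code, here is a hedged boto3 sketch (the table name is a placeholder; the console steps above achieve exactly the same thing):
# Hedged sketch: create the table and seed the first item with boto3.
import boto3

dynamodb = boto3.client("dynamodb")
table_name = "resume-visitor-count"  # assumption: pick your own table name

# Create the table with an "id" partition key and on-demand billing.
dynamodb.create_table(
    TableName=table_name,
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # the "On-demand" setting from the console
)
dynamodb.get_waiter("table_exists").wait(TableName=table_name)

# Seed the first item: id = "1", count = 1.
dynamodb.put_item(
    TableName=table_name,
    Item={"id": {"S": "1"}, "count": {"N": "1"}},
)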
8. Lambda API & Python
Setting up a Lambda function should be pretty straightforward for you. Just a quick tip: make sure to choose the programming language (the Runtime) you prefer for interacting with the database. For this guide, we’ll use Python. Remember, this Lambda function will trigger every time someone visits your domain.
We’ll be using the Boto3 library to work with DynamoDB from Python. Without further ado, here’s a complete code template for your Lambda function:
Warning: Don’t copy this code if you’re not familiar with Python or the Boto3 library. Copying it without understanding will only cause you more trouble in the long run.
import json
import boto3

# Connect to the table
dynamodb = boto3.resource("dynamodb")
table_name = "your_table_name"
table = dynamodb.Table(table_name)

def lambda_handler(event, context):
    id = "your_id"
    response = table.get_item(Key={
        'id': id
    })
    # If the item exists, increment the count by 1
    if "Item" in response:
        views = response["Item"]["your_number_type_key_name"]
        views += 1
        table.put_item(Item={
            "id": id,
            "your_number_type_key_name": views
        })
    # If the item doesn't exist (this is the first time our site is visited), create it with a count of 1
    else:
        views = 1
        table.put_item(Item={
            "id": id,
            "your_number_type_key_name": views
        })
    # The content our JavaScript in the HTML file will receive
    return_format = {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'success',
            'count': int(views),
            'event': event
        })
    }
    return return_format
After we set up this Python code, you can test your Lambda Function in the AWS console. It should work fine for retrieving DynamoDB data. Now, we need to set up a URL for Lambda so that our local HTML file can trigger this Lambda function.
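Before that, one optional refinement worth knowing about, offered as a hedged sketch rather than a required change: the get_item-then-put_item pattern above can lose a count if two visitors hit the function at the same moment. DynamoDB's update_item with an ADD expression does the increment atomically in a single call, and it also creates the counter on the very first visit (the placeholder names are the same as in the template above):
import json
import boto3

table = boto3.resource("dynamodb").Table("your_table_name")  # assumption: your table name

def lambda_handler(event, context):
    # ADD increments atomically and creates the attribute if it doesn't exist yet
    response = table.update_item(
        Key={"id": "your_id"},
        UpdateExpression="ADD #c :one",
        ExpressionAttributeNames={"#c": "your_number_type_key_name"},
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    views = int(response["Attributes"]["your_number_type_key_name"])
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "success", "count": views}),
    }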
Steps to set up the URL: Watch this video for a step-by-step guide.
Noticeable changes to make:
- Allow origins: Add your domain values here, such as https://example.com and https://www.example.com
- Allow headers: Add the value content-type
- Allow methods: Add POST
- Max age: Set this to 43200 seconds, which equals 12 hours of caching.
- Allow credentials: Enable this option to reduce verification times.
- Click Create Function, and the Function URL should appear at the bottom right corner of your screen.
Note: This is a basic setup for convenience, as Function URLs do not address DDoS protection.
Two alternative solutions are:
1) Switch to API Gateway for a more web-friendly and secure access method, allowing your website to make requests while restricting access from unauthorized users.
2) Tighten the Function URL itself (IAM auth and the CORS allow-list for your domain) so that only your site can use the link.
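Before wiring up the frontend, you can sanity-check the Function URL from your own machine. Here is a hedged Python sketch (the URL is a placeholder); note that CORS only restricts browsers, so a direct request like this will still go through:
# Hedged sketch: smoke-test the Lambda Function URL with a POST request.
import json
import urllib.request

url = "https://your-function-url.lambda-url.us-east-1.on.aws/"  # assumption: your Function URL

req = urllib.request.Request(
    url,
    data=json.dumps({}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    print("count:", body["count"])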
9. JavaScript
Now that we’ve got the Lambda function reading data from DynamoDB, the next step is getting that data into our HTML file so we can display it on the website. This is where JavaScript comes in handy!
We’ll be using a simple format from AWS to help us fetch and display the DynamoDB values.
(Here is a Javascript tutorial if you need.)
// Be sure to add this class to an element in your HTML, e.g. <p class="your_counter_id_in_html"> times</p>
const counter = document.querySelector(".your_counter_id_in_html");

async function update_views() {
  try {
    const response = await fetch("your_lambda_function_URL", {
      method: "POST", // Set the method to POST
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({}),
    });
    const data = await response.json();
    counter.innerHTML = `${data.count}`;
  } catch (error) {
    counter.innerHTML = `43`; // Fallback value if the request fails
  }
}

update_views();
Congratulations! You’ve finished the website part of this challenge. The rest of the content is where we dive into the cloud world with CI/CD and Infrastructure as Code. So keep going if you aspire to be a cloud engineer and like to challenge yourself!
10. CI/CD
Here’s how we’ll use GitHub Actions. If you’re not familiar with CI/CD (Continuous Integration and Continuous Deployment), let me give you a quick scenario:
Imagine you push your files to GitHub, but you need those files to be updated on AWS. Without CI/CD, you’d have to manually update them on AWS: uploading HTML and CSS files to an S3 bucket, updating your Python code in a Lambda function, and possibly adjusting policies in different places. And if a bug pops up after all that work, guess what? You’d have to go through the whole process again!
CI/CD automates these tasks, so you don’t have to handle all that manual work every time you make a change. In our final result, all you need to do is push your code to GitHub using git push origin main, and GitHub will automatically update the changes on AWS for you.
Steps to create a CI/CD pipeline in GitHub Actions:
- In your local repository, create a workflow file called front-end-cicd.yml in the .github/workflows directory. If you don’t have the “.github” and “workflows” folders, create them manually. You should end up with .github/workflows/front-end-cicd.yml in your repository.
- Now we’re writing YAML. I’ll provide two templates for you to choose from:
  - GitHub OIDC (recommended: safer and prevents key leakage)
  - Jakejarvis’s CI/CD (not recommended for security reasons, but generally easier to set up)
Option 1: Github OIDC
The key difference between OIDC and Jakejarvis’s CI/CD is that OIDC doesn’t require your AWS access key. Instead, it uses a trusted third-party identity provider for authentication. For example, you can use your Google, GitHub, Facebook, or LinkedIn account to sign in to LeetCode; all of these third parties are trusted by LeetCode.
When we use GitHub OIDC, GitHub can automatically access your AWS resources based on your GitHub identity, without requiring any AWS keys. This is more secure and convenient.
Steps to set up OIDC in AWS:
- Set up an identity provider in AWS for GitHub.
- After creating the OIDC provider, go to the provider and select Assign Role from the top right corner.
- Select Create a new role.
- Modify the following fields, then click Next:
  - Trusted entity type: Web Identity
  - Audience: Select sts.amazonaws.com
  - GitHub organization: Enter your GitHub account name; for example, mine is zirui2333
- For the policy, select AdministratorAccess for simplicity, then click Next.
- Enter a role name of your choice (e.g., github_oidc), then click Create Role.
- Go back to the OIDC provider and select Endpoint verification.
- Click Manage under Thumbprints and add the hex value 1b511abead59c6ce207077c0bf0e0043b1382612 (this might change every year; search “GitHub thumbprint” plus the current year), then click Save changes.
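For reference only, here is a hedged boto3 sketch of the same OIDC setup (account and repository values are placeholders, and the console steps above are all you actually need). It creates the GitHub identity provider, then a role whose trust policy only allows workflows from your repository to assume it:
# Hedged sketch: create the GitHub OIDC provider and an assumable role with boto3.
import json
import boto3

iam = boto3.client("iam")
github_repo = "zirui2333/your-repo"  # assumption: "owner/repo" of your GitHub repository

# 1) The identity provider for GitHub Actions
provider = iam.create_open_id_connect_provider(
    Url="https://token.actions.githubusercontent.com",
    ClientIDList=["sts.amazonaws.com"],
    ThumbprintList=["1b511abead59c6ce207077c0bf0e0043b1382612"],
)

# 2) A role GitHub can assume, limited to workflows from your repository
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": provider["OpenIDConnectProviderArn"]},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {"token.actions.githubusercontent.com:aud": "sts.amazonaws.com"},
                "StringLike": {"token.actions.githubusercontent.com:sub": f"repo:{github_repo}:*"},
            },
        }
    ],
}
iam.create_role(RoleName="github_oidc", AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(
    RoleName="github_oidc",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)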
Steps to set up a CI/CD YAML file in GitHub:
Copy the following template into your YAML file:
# Sample workflow to access AWS resources when the workflow is tied to a branch
# The workflow creates a static website using AWS S3
name: AWS Frontend

on:
  push:
    paths:
      - "Your_html_folder/**" # Trigger CI/CD on changes in your HTML folder
    branches: "main"

env:
  BUCKET_NAME: ${{ secrets.AWS_S3_BUCKET }}
  AWS_REGION: "us-east-1"

# permission can be added at job level or workflow level
permissions:
  id-token: write # This is required for requesting the JWT
  contents: read # This is required for actions/checkout

jobs:
  # Upload frontend code
  S3PackageUpload:
    runs-on: ubuntu-latest
    steps:
      - name: Git clone the repository
        uses: actions/checkout@v4

      - name: configure aws credentials
        uses: aws-actions/configure-aws-credentials@v3
        with:
          role-to-assume: "github-cicd" # Replace with your IAM role; for OIDC this needs the full role ARN, e.g. arn:aws:iam::<account-id>:role/github_oidc
          role-session-name: aws_frontend_workflow
          aws-region: ${{ env.AWS_REGION }}

      # Upload files to AWS S3
      - name: Copy Actual_Resume_Web to s3
        run: aws s3 sync ./Your_html_folder/ s3://${{ env.BUCKET_NAME }}/ --delete --exclude "*.DS_Store" --exclude ".gitignore" --exclude ".git/*"
Remember to add a Secret for your S3 bucket name (AWS_S3_BUCKET in the template above) and any other credentials you reference.
Once you’ve added the secret and modified the template, push the file to GitHub using the following commands:
git add .github/workflows/front-end-cicd.yml
git commit -m "Create CI/CD pipeline for frontend"
git push origin main
Option 2: Jakejarvis’s CI/CD
Copy Jakejarvis’s CI/CD template into your YAML file, and replace the top part with:
on:
  push:
    paths:
      - "Your_html_folder/**" # CI/CD will only run when the push includes files in your HTML folder; in other words, it triggers only when your frontend code changes
    branches: "main"
Make your S3 bucket publicly readable. Ensure your S3 bucket settings are also configured for public access.
Add Secrets to GitHub:
- Follow the steps for adding Secrets, including your S3 bucket name and Access Key ID.
- I previously shared a video from Frank explaining how to create an access key. You can revisit that here if needed.
- If you’ve created the access key but can’t view it again due to AWS’s security policies, you can retrieve it from your terminal by running:
cd ~                   # Go to your home directory
cat .aws/credentials   # Print the access key ID and secret access key
After locating your keys, add them to your GitHub Secrets. Important: Please comment out the SOURCE_DIR: ‘public’ line in the Jakejarvis template to prevent conflicts in your system.
Once you have modified the template and added your Secrets, push the file to GitHub with:
git add .github/workflows/front-end-cicd.yml
git commit -m "Create CI/CD pipeline for frontend"
git push origin main
11. Infrastructure as Code
This is another essential skill in the cloud computing world. We will use Terraform.
For those who don’t know Infrastructure as Code: it’s used to automate cloud services, in our case AWS services like the S3 bucket, DynamoDB, the CloudFront distribution, Certificate Manager, and more. Let me give you a scenario: imagine you went through all the steps I listed previously, clicking through lots of console screens. The console is one way to create services in AWS, but imagine something goes wrong and you need to re-create all the services again (oh no!). You definitely don’t want to manually walk through every single service creation again. This is where Infrastructure as Code comes in: in simple terms, we write code for those services, specifically their configurations, so that the next time you want to recreate a set of services, you can just run the code and it will create everything for you. Sounds amazing, right? Let’s get started!
Steps for Terraform configuration:
- If you’re using VS Code, go ahead with this link. If you’re using another editor, search “Terraform installation” plus your editor’s name for a guide.
- Once the installation finishes, go to your repo root and create a folder named infra. Inside it, create two files, main.tf and provider.tf, so their paths are infra/main.tf and infra/provider.tf.
Note: infra is just a folder name to separate the infrastructure code from everything else; you can name it anything you want.
"main.tf" and "provider.tf" are mandatory,
"provider.tf" is where to write configuration for Terraform specifically, telling it we're writing code for AWS, the version of Terraform we're gonna use and the region our AWS services to be, blablabla.
"main.tf" in the other hand is gonna be the massive place we write AWS services configuration. Don't panic, I will show you examples on how to do it!
Write provider.tf
- copy the code:
# Tell Terraform we're working with AWS: the provider and its version
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }

  # All of the Terraform state will be stored in this bucket. I'll explain why later.
  backend "s3" {
    bucket = "your-terraform-backup-bucket-name"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

# Simply set the region
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}
Because we point Terraform at a bucket to store its state, go to the console to create that bucket, or use the command below if you have the AWS CLI installed.
# Remember the bucket name needs to be globally unique; replace "your-terraform-backup-bucket-name" with a name you like
aws s3 mb s3://your-terraform-backup-bucket-name --region us-east-1
The reason for the backend bucket is that terraform.tfstate is a file Terraform creates to keep track of the AWS services it has already built, much like Git uses the .git folder to track how your local code differs from what's on GitHub. By default, Terraform keeps this file locally. However, as I said in the CI/CD portion, we want an automated pipeline where GitHub deals with AWS for us once we push the code, so we make Terraform store its state in the S3 bucket instead, where the pipeline can always reach it.
After successfully configuring provider.tf, we’ll dive into the boss: main.tf.
12. Testing with Cypress
This section is still in progress, so stay tuned!
Feel free to check out the other features in my portfolio!