Deploying Next.js on AWS with S3 + CloudFront

Quick tutorial to configure a GitHub → S3 → CloudFront deployment pipeline.

Prerequisites

  • repo on GitHub
  • Next.js site that can be statically exported
  • custom domain, without using AWS Route 53 for NS/DNS

Steps

1 Setup AWS

1.1 Request a certificate in ACM

  • public cert → example.com + DNS validation → add the validation CNAME record at your DNS provider

    • request the certificate in us-east-1 (N. Virginia); CloudFront only accepts certificates from that region
    • wait 5+ min for validation to complete
    • check whether your DNS provider added an extra trailing dot to the CNAME value
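Some DNS panels silently append the zone or a second trailing dot when they store the record value. A minimal local sanity check (the record value below is a made-up example; paste the value your DNS panel actually stored):

```shell
# Detect an accidental doubled trailing dot in a stored CNAME value.
# RECORD_VALUE is a made-up example, not a real ACM validation target.
RECORD_VALUE="_abc123.acm-validations.aws.."
case "$RECORD_VALUE" in
  *..) echo "extra trailing dot detected" ;;
  *)   echo "value looks fine" ;;
esac
```

Once the record is live, `dig +short CNAME _abc123.example.com` should print exactly the target ACM gave you.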

1.2 Create S3 Bucket

  • name it anything (bucket names must be globally unique)
  • click on bucket

    • disable Properties > Static website hosting (files will be served through CloudFront, not the S3 website endpoint)
    • enable Permissions > Block public access (only CloudFront will read the bucket)
    • edit Permissions > Bucket policy to:

      {
          "Version": "2008-10-17",
          "Id": "PolicyForCloudFrontPrivateContent",
          "Statement": [
              {
                  "Sid": "AllowCloudFrontServicePrincipal",
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "cloudfront.amazonaws.com"
                  },
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::AWS_BUCKET_NAME/*",
                  "Condition": {
                      "StringEquals": {
                          "AWS:SourceArn": "arn:aws:cloudfront::AWS_ACCOUNT_ID:distribution/AWS_CLOUDFRONT_DEPLOYMENT_ID"
                      }
                  }
              }
          ]
      }
      AWS_BUCKET_NAME → your bucket name
      AWS_ACCOUNT_ID → your 12-digit account ID (click your account name in the top-right corner)
      AWS_CLOUDFRONT_DEPLOYMENT_ID → the alphanumeric distribution ID, shown in the CloudFront console
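To avoid copy-paste slips, you can fill the placeholders and sanity-check the JSON locally before pasting it into the console. A sketch, where BUCKET, ACCOUNT_ID, and DIST_ID are made-up example values:

```shell
# Fill the bucket-policy placeholders and verify the result is well-formed JSON.
# The values assigned below are made-up examples; use your own.
BUCKET=my-site-bucket
ACCOUNT_ID=123456789012
DIST_ID=E2EXAMPLE12345
cat > bucket-policy.json <<'EOF'
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": { "Service": "cloudfront.amazonaws.com" },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::AWS_BUCKET_NAME/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::AWS_ACCOUNT_ID:distribution/AWS_CLOUDFRONT_DEPLOYMENT_ID"
                }
            }
        }
    ]
}
EOF
sed -e "s/AWS_BUCKET_NAME/$BUCKET/" \
    -e "s/AWS_ACCOUNT_ID/$ACCOUNT_ID/" \
    -e "s/AWS_CLOUDFRONT_DEPLOYMENT_ID/$DIST_ID/" \
    bucket-policy.json > bucket-policy.filled.json
python3 -m json.tool bucket-policy.filled.json > /dev/null && echo "policy JSON OK"
```

With the AWS CLI configured, you can also apply it directly instead of pasting into the console: `aws s3api put-bucket-policy --bucket my-site-bucket --policy file://bucket-policy.filled.json`.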

1.3 Create CloudFront Distribution

  • create the distribution, then click on it

    • edit Settings

      • Alternate domain name (CNAME) to example.com
      • Custom SSL certificate to what you made earlier
      • Default root object to your app's entry point (usually index.html)
    • create Origin

      • Origin domain to the S3 bucket endpoint (do NOT use the s3-website domain)
      • Origin access to Origin access control settings

        • click Create new OAC

          • Signing behavior to Sign requests
          • Origin type to S3

2 Test Next.js export

2.1 Add static export to Next.js config

/** @type {import('next').NextConfig} */

const nextConfig = {
  output: "export",
  distDir: "out",
  images: {
    unoptimized: true,
  },
};

export default nextConfig;
images.unoptimized is needed because Next.js <Image> optimization doesn't work on static exports; see https://nextjs.org/docs/app/api-reference/next-config-js/images#aws-cloudfront to enable image optimization on CloudFront

2.2 Upload static files to S3

  • upload the contents of the out folder (the distDir set above) to the root of the S3 bucket

    • use the AWS CLI or the web console
  • go to the CloudFront distribution domain name ([alphanumeric ID].cloudfront.net) and check that the website is accessible

3 Setup GitHub Actions

3.1 Create IAM Policies for GitHub Actions

best practices: grant only the least access required
  • click Create Policy

    • create policy AmazonS3PutOnlyAccess with following JSON

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "VisualEditor0",
                  "Effect": "Allow",
                  "Action": "s3:PutObject",
                  "Resource": "*"
              }
          ]
      }
      this policy allows only s3:PutObject, but on every bucket
    • create policy AmazonS3LimitedBucketAccess-AWS_BUCKET_NAME with following JSON

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "VisualEditor1",
                  "Effect": "Allow",
                  "Action": "s3:*",
                  "Resource": "arn:aws:s3:::AWS_BUCKET_NAME/*"
              }
          ]
      }
      this policy allows every S3 action, but only on objects in the specified bucket
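The same two policies can be created from the CLI instead of the console. In this sketch, put-only.json and bucket-access.json are placeholder file names for the two JSON documents above, and the commands are only printed (remove the echo lines to actually run them):

```shell
# Build and print the aws iam create-policy commands for the two documents above.
# put-only.json and bucket-access.json are placeholder file names.
CMD_PUT_ONLY="aws iam create-policy --policy-name AmazonS3PutOnlyAccess --policy-document file://put-only.json"
CMD_BUCKET="aws iam create-policy --policy-name AmazonS3LimitedBucketAccess-AWS_BUCKET_NAME --policy-document file://bucket-access.json"
echo "$CMD_PUT_ONLY"
echo "$CMD_BUCKET"
```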

3.2 Create IAM User for GitHub Actions

  • click Create User and name it github-actions

    • Permissions > Add permissions > Attach policies directly and choose AmazonS3PutOnlyAccess
    • Permissions > Permissions boundary > Add boundary and choose AmazonS3LimitedBucketAccess-AWS_BUCKET_NAME
    Technically the two could be swapped; the effective permissions are the intersection of the identity policy and the boundary, so use whichever arrangement reads most clearly

3.3 Add AWS access keys to the GitHub repository

  • click on the created user
  • Security credentials > Access keys > Create access key and choose Applications running outside AWS

    • copy the keys and add them under GITHUB_REPOSITORY > Settings > Secrets as repository secrets (not environment secrets)

      • name them AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
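If you prefer the terminal, the GitHub CLI can store the same secrets; this assumes `gh` is installed and authenticated for the repo. The sketch only prints the commands:

```shell
# Print the GitHub CLI commands that store the two repository secrets.
# Run the printed commands yourself; gh secret set prompts for the value,
# so the keys stay out of your shell history.
CMD_ID="gh secret set AWS_ACCESS_KEY_ID"
CMD_SECRET="gh secret set AWS_SECRET_ACCESS_KEY"
echo "$CMD_ID"
echo "$CMD_SECRET"
```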

3.4 Add GitHub Actions workflow

  • add the following workflow to ./.github/workflows/deploy.yml

    # This workflow does a clean install of node dependencies, caches/restores them, builds, and uploads the /out folder to Amazon S3
    name: aprilsecond
    
    on:
      push:
        branches:
          - main
    
    jobs:
      build-and-deploy:
        runs-on: ubuntu-latest
        
        steps:
          - name: Checkout
            uses: actions/checkout@v4
    
          - name: Install pnpm
            uses: pnpm/action-setup@v4
            with:
              version: 9
              run_install: false
            
          - uses: actions/setup-node@v4
            with:
              node-version: 21
              cache: 'pnpm'
              cache-dependency-path: pnpm-lock.yaml
              
          - name: Install dependencies
            run: pnpm install
            
          - name: Build
            run: pnpm build
            
          - name: Configure AWS Credentials
            uses: aws-actions/configure-aws-credentials@v4
            with:
              aws-region: us-east-2
              aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
              aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
              
          - name: Copy files to S3 with the AWS CLI
            run: |
              aws s3 cp --recursive --no-progress ./out s3://AWS_BUCKET_NAME/
    the aws s3 cp command copies files and overwrites existing objects in the bucket; --recursive is required, --no-progress is optional but keeps the GitHub Actions log readable. Note that cp does not delete objects that were removed from ./out
best practices: check the latest version of each action and package and update accordingly before deployment
  • push this file and confirm the workflow runs successfully
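A common refinement of the copy step: `aws s3 sync --delete` also removes objects that no longer exist in ./out, and a CloudFront invalidation makes the new deploy visible before cached copies expire (the first 1,000 invalidation paths per month are currently free, and "/*" counts as one path). Placeholders are the same as before; the sketch only prints the commands, so drop the echoes to run them:

```shell
# Alternative deploy step: sync (also deletes stale objects) plus a cache
# invalidation. AWS_BUCKET_NAME and AWS_CLOUDFRONT_DEPLOYMENT_ID are the same
# placeholders as above; this sketch only prints the commands.
SYNC_CMD='aws s3 sync ./out s3://AWS_BUCKET_NAME/ --delete --no-progress'
INVALIDATE_CMD='aws cloudfront create-invalidation --distribution-id AWS_CLOUDFRONT_DEPLOYMENT_ID --paths "/*"'
echo "$SYNC_CMD"
echo "$INVALIDATE_CMD"
```

Keep "/*" quoted when running the real command so the shell doesn't glob-expand it.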

4 Connecting to domain

  • at your DNS provider, add an ALIAS/ANAME record (or a CNAME for a subdomain) pointing example.com to the CloudFront distribution domain name

Notes

  • check the S3 free tier limits; updating large websites can quickly use up the 20K GET and 2K PUT requests per month