Example automated pipeline to deploy your S3 site from a GitHub repo using AWS OIDC, without needing to upload your secrets to GitHub

2025-01-09

Context

Now that you have the infrastructure set up for your site, it’s time to deploy it to S3 using GitHub Actions. The plan (the workflow trigger is sketched after this list):

  • Create a GitHub Actions script for the deployment and configure access to S3
  • Create an AWS IAM OIDC provider to allow GitHub to authenticate to AWS
  • Allow GitHub to access the S3 bucket by means of an assumed role
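
Before diving into the jobs: the workflow is triggered by a push to the site’s main branch (as described at the end of this post). The top of the workflow file would look something like this (the name is illustrative):

name: Deploy to S3
on:
  push:
    branches:
      - main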

The GitHub script

Is quite straightforward, following the “setup, lint, test, build” sequence, but I found a trick to accelerate it by caching with wild abandon: here I cache the entire working directory, which improved the pipeline run time by 50% over caching ‘~/.npm’ or ‘node_modules’. It also includes the checkout itself, which those narrower caches don’t cover.

  setup:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: '.'
          key: npm-cache-${{ hashFiles('**/package-lock.json') }}
      - name: Install dependencies
        run: npm ci

The other jobs restore the cache rather than reinstalling, for example in the lint phase:

  lint:
    runs-on: ubuntu-latest
    needs: setup
    steps:
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: '.'
          key: npm-cache-${{ hashFiles('**/package-lock.json') }}
      - name: Lint Action
        run: npm run lint
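
The test and build jobs follow the same pattern. The build job also needs to upload the built site as the ‘distro-files’ artifact that the deploy job downloads later. A sketch, assuming the site builds into ./dist and that build waits on lint and test:

  build:
    runs-on: ubuntu-latest
    needs: [lint, test]
    steps:
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: '.'
          key: npm-cache-${{ hashFiles('**/package-lock.json') }}
      - name: Build
        run: npm run build
      - name: Upload site files
        uses: actions/upload-artifact@v4
        with:
          name: distro-files
          path: dist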

The deployment task is special

This is where GitHub acquires temporary AWS credentials (see below) to access S3, which removes the need to upload your AWS secrets to GitHub. It’s also more secure, because you can limit the role’s permissions to exactly those needed: access to that specific S3 bucket and, in our case, invalidating the CloudFront cache.

  - name: Get AWS token
    uses: aws-actions/configure-aws-credentials@v2
    with:
      role-to-assume: arn:aws:iam::<account number>:role/GitHubOIDCRole
      aws-region: ${{ secrets.AWS_REGION }}

It then syncs the files to S3 and invalidates the CloudFront distribution. The full job is below:

  deploy:
    needs: [build, test]
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Get artifact
        uses: actions/download-artifact@v4.1.8
        with:
          name: distro-files
      - name: Get AWS token
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::<account number>:role/GitHubOIDCRole
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Sync to S3
        run: aws s3 sync . s3://${{ secrets.BUCKET_NAME }} --delete
      - name: CloudFront invalidation 1
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_1_ID }} --paths "/*"

Meanwhile, back in AWS

We need to create the OIDC provider and connect it to the role that GitHub Actions is going to assume. Note the following points:

In the role’s trust policy, you must follow the repo:<your user name>/${RepoName}:ref:refs/heads/* syntax for the sub claim if you want to avoid specifying a GitHub Organization.

    Condition:
      StringEquals:
        token.actions.githubusercontent.com:aud: sts.amazonaws.com
      StringLike:
        token.actions.githubusercontent.com:sub: !Sub "repo:<your user name>/${RepoName}:ref:refs/heads/*"
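
For example, for a hypothetical user ‘octocat’ with a repo named ‘my-site’, the sub claim GitHub sends for a push to main would be:

  repo:octocat/my-site:ref:refs/heads/main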

The name given here is the one used in the GitHub Actions script:

  GitHubOIDCRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: GitHubOIDCRole

when getting the token there (see above):

  - name: Get AWS token
    uses: aws-actions/configure-aws-credentials@v2
    with:
      role-to-assume: arn:aws:iam::<account number>:role/GitHubOIDCRole
      aws-region: ${{ secrets.AWS_REGION }}

Here’s the CloudFormation for setting up the OIDC and Role:

AWSTemplateFormatVersion: '2010-09-09'
Description: Allow GitHub to access an S3 bucket (for deployment of files as a step in GitHub Actions, for example)

Parameters:
  RepoName:
    Type: String

Resources:
  GitHubOIDCRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: GitHubOIDCRole
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Federated: !GetAtt GitHubOIDCProvider.Arn  # the provider defined below
            Action: sts:AssumeRoleWithWebIdentity
            Condition:
              StringEquals:
                token.actions.githubusercontent.com:aud: sts.amazonaws.com
              StringLike:
                token.actions.githubusercontent.com:sub: !Sub "repo:<your user name>/${RepoName}:ref:refs/heads/*"
      Policies:
        - PolicyName: CloudFrontPermissions
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - cloudfront:CreateInvalidation
                  - cloudfront:GetDistribution
                  - cloudfront:ListInvalidations
                  - cloudfront:GetInvalidation
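                # Tighter setups could scope this to the distribution's ARN instead of "*"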
                Resource: "*"
        - PolicyName: S3WritePermissions
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:ListBucket
                  - s3:PutObject
                  - s3:PutObjectAcl
                  - s3:GetObject
                  - s3:DeleteObject
                Resource:
                  - "arn:aws:s3:::<your bucket name>"
                  - "arn:aws:s3:::<your bucket name/*"

  GitHubOIDCProvider:
    Type: AWS::IAM::OIDCProvider
    Properties:
      Url: https://token.actions.githubusercontent.com
      ClientIdList:
        - sts.amazonaws.com

Outputs:
  RoleArn:
    Description: ARN of the IAM role
    Value: !GetAtt GitHubOIDCRole.Arn
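
To stand the stack up, something like the following should work, assuming you’ve saved the template as github-oidc.yaml (the stack name is illustrative). CAPABILITY_NAMED_IAM is required because the role declares an explicit RoleName:

aws cloudformation deploy \
  --template-file github-oidc.yaml \
  --stack-name github-oidc \
  --parameter-overrides RepoName=<your repo name> \
  --capabilities CAPABILITY_NAMED_IAM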

What this will do

It will allow you to create a GitHub Actions deployment script, triggered by a push to your website’s main branch, that copies the files over to S3. In fact, this website was deployed using this method.

Conclusion

This is complex, but worth it if you’re collaborating on maintaining the site. If you’re working alone, the following will certainly suffice. Much simpler :)

npm run lint
npm run test
npm run build

aws s3 sync ./_site s3://$AWS_BUCKET_NAME --delete \
  --region $AWS_REGION --profile $AWS_PROFILE_NAME

aws cloudfront create-invalidation --region $AWS_REGION --profile $AWS_PROFILE_NAME \
    --distribution-id <distribution-id> \
    --paths "/*"

Happy hacking!
