
Deploy Phoenix to Google Cloud Run and Cloud SQL


This post explains how to set up automated deployments and migrations for a Phoenix project on Google Cloud's managed services using the Google Cloud CLI (mostly). The Phoenix app will be hosted on Google Cloud Run and the PostgreSQL database will be hosted on Cloud SQL. Deployments will be automatically triggered when changes are pushed to the main branch of your git repository (GitHub specifically in this post).

At a high level we will:

  1. Prepare your application
  2. Create a GCP project
  3. Enable the services we need
  4. Create an Artifact Registry repository to store our compiled app
  5. Update Service Account permissions
  6. Create a Cloud SQL database instance
  7. Create environment variables in Secret Manager
  8. Connect a GitHub repository to Cloud Build
  9. Create a Cloud Build trigger
  10. Create a build configuration file
  11. Trigger a deploy to Cloud Run
  12. (OPTIONAL) psql into Cloud SQL

Prerequisites

Before starting you will need a Google Cloud account with a billing account set up, the Google Cloud CLI (gcloud) installed and authenticated, a working local Elixir/Phoenix environment, and a GitHub repository for your app.

Note: This was written using macOS on an M1 MacBook. Some commands and steps may require variations if you are on a different OS/architecture.

1. Prepare your application

If you don't have an app ready to go but want to follow along, we will use a simple project called insight throughout this guide.

mix phx.new insight
cd insight
# create something for us to test DB interaction with e.g.,
mix phx.gen.live Products Product products name brand
# remember to update lib/insight_web/router.ex

In your existing app (or the newly generated app), generate a Dockerfile and other useful release helpers with the following command:

mix phx.gen.release --docker

Next we update our runtime config to delete the production database configuration shown below (the DATABASE_URL handling), because we will leverage the standard PostgreSQL environment variables (PGHOST, PGDATABASE, PGUSER, PGPASSWORD), which is conveniently what Postgrex.start_link/1 defaults to under the hood if you do not specify database connection details in your code.

config/runtime.exs
if config_env() == :prod do
  database_url =
    System.get_env("DATABASE_URL") ||
      raise """
      environment variable DATABASE_URL is missing.
      For example: ecto://USER:PASS@HOST/DATABASE
      """
 
  maybe_ipv6 = if System.get_env("ECTO_IPV6") in ~w(true 1), do: [:inet6], else: []
 
  config :insight, Insight.Repo,
    # ssl: true,
    url: database_url,
    pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
    socket_options: maybe_ipv6

Cloud Run will automatically generate a semi-randomised URL for your app once deployed, in the form https://[SERVICE NAME]-[RANDOM NUMBERS].a.run.app. To prevent infinite reloading behaviour in LiveView we need to update config/prod.exs to allow-list the Cloud Run origin.

config/prod.exs
config :insight, InsightWeb.Endpoint,
  cache_static_manifest: "priv/static/cache_manifest.json",
  check_origin: ["https://*.run.app"]

2. Create a GCP project

We will make use of environment variables to make as many commands copy-pasteable as possible. To start, create some environment variables for your application name and preferred GCP region.

APP_NAME=insight
SERVICE_NAME=insight-dev
REGION=australia-southeast1
 
echo $APP_NAME $SERVICE_NAME $REGION

Create a project. Accept the auto-generated project ID at the prompt.

gcloud projects create --set-as-default --name="$SERVICE_NAME"
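If you would rather pick your own project ID than accept the generated one, you can pass it explicitly instead. The ID below is purely a hypothetical example; project IDs must be globally unique.

gcloud projects create insight-dev-example-123 --set-as-default --name="$SERVICE_NAME"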

Create environment variables for your project ID and project number for easy reuse. It may take ~30 seconds before the new project appears in the gcloud projects list output.

PROJECT_ID=$(gcloud projects list \
    --format='value(projectId)' \
    --filter="name='$SERVICE_NAME'")
 
PROJECT_NUMBER=$(gcloud projects list \
    --format='value(projectNumber)' \
    --filter="name='$SERVICE_NAME'")
 
echo $PROJECT_ID $PROJECT_NUMBER

Find the billing account you set up (refer to prerequisites).

gcloud billing accounts list

Add your billing account from the prior command to the project.

gcloud billing projects link $PROJECT_ID \
    --billing-account=[billing_account_id]

3. Enable the services we need

New Google Cloud projects have most products/services disabled by default, so we need to enable everything we will use for this deployment: Artifact Registry, Cloud Build, Compute Engine, Cloud SQL, Secret Manager, Cloud Run, and the IAM API.

The following command will enable all the services we need.

gcloud services enable \
  artifactregistry.googleapis.com \
  cloudbuild.googleapis.com \
  compute.googleapis.com \
  sqladmin.googleapis.com \
  secretmanager.googleapis.com \
  run.googleapis.com \
  iam.googleapis.com
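If you want to double-check what is switched on before moving on, you can list the enabled services:

gcloud services list --enabled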

4. Create an Artifact Registry repository to store our compiled app

Create a new repository with an identifier (I generally align this with my Elixir app name), specifying the Docker format and the region.

gcloud artifacts repositories create $APP_NAME \
  --repository-format=docker \
  --location=$REGION \
  --description="$APP_NAME application"

Create an environment variable to capture the repository's registry URL. Artifact Registry Docker repositories follow the pattern LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY, so we can construct it from the variables we already have:

REGISTRY_URL=$REGION-docker.pkg.dev/$PROJECT_ID/$APP_NAME
 
echo $REGISTRY_URL

5. Update service account permissions

In this setup both Cloud Build and Cloud Run run as the project's default Compute Engine service account ($PROJECT_NUMBER-compute@developer.gserviceaccount.com). It needs permission to write build logs, connect to Cloud SQL, and push images to Artifact Registry, i.e. the roles/logging.logWriter, roles/cloudsql.client, and roles/artifactregistry.writer roles.

These can be added with the following commands:

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role="roles/logging.logWriter" \
  --condition None
 
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role="roles/cloudsql.client" \
  --condition None
 
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role="roles/artifactregistry.writer" \
  --condition None
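As a sanity check, one way to list the roles currently granted to that service account is the following (the flatten/filter combination is a standard gcloud pattern, though output can vary slightly between gcloud versions):

gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:$PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --format="value(bindings.role)"

You should see the three roles added above, along with anything the account already had.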

6. Create a Cloud SQL database instance

Set up the environment variables we will use for the next handful of steps. The instance name must be composed of lowercase letters, numbers, and hyphens, and must start with a letter.

INSTANCE=${APP_NAME}-sql
DB_NAME=${APP_NAME}_dev
DB_USER=${APP_NAME}_admin
DB_PASS=pa55w0rd
 
echo $INSTANCE $DB_NAME $DB_USER $DB_PASS
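The password above is obviously just a placeholder. For anything beyond a throwaway experiment, generate something stronger before creating the database user; this sketch assumes openssl is available locally:

DB_PASS=$(openssl rand -base64 24)
echo $DB_PASS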

Create a new PostgreSQL instance, specifying your desired region, database version, and compute tier. Check the Cloud SQL pricing page and pick a tier that suits your budget. Provisioning can take a few minutes.

gcloud sql instances create $INSTANCE \
  --region=$REGION \
  --database-version=POSTGRES_17 \
  --tier=db-perf-optimized-N-2
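You can poll the instance until it reports a RUNNABLE state before moving on:

gcloud sql instances describe $INSTANCE --format='value(state)'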

Now we will create a user for our application to use when interacting with the database.

gcloud sql users create $DB_USER \
  --instance=$INSTANCE \
  --password=$DB_PASS

Next we will create our database.

gcloud sql databases create $DB_NAME \
  --instance $INSTANCE

We also need to retrieve our instance connectionName for later:

CONN_NAME=$(gcloud sql instances describe $INSTANCE \
  --format='value(connectionName)')
 
echo $CONN_NAME

7. Create environment variables in Secret Manager

Now we need to create the secrets on GCP that our Phoenix app will use. We will create these in Secret Manager. Note the -n flag on echo, which prevents a trailing newline from being stored as part of the secret value:

mix phx.gen.secret | gcloud secrets create DEV_SECRET_KEY_BASE --data-file=-
echo -n /cloudsql/$CONN_NAME | gcloud secrets create DB_HOST --data-file=-
echo -n $DB_USER | gcloud secrets create DB_USER --data-file=-
echo -n $DB_PASS | gcloud secrets create DB_PASS --data-file=-
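If you ever want to confirm exactly what was stored (for example, that DB_HOST contains no stray whitespace), you can read a secret back:

gcloud secrets versions access latest --secret=DB_HOST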

Next we need to provide the Service Account with permission to access the secrets.

gcloud secrets add-iam-policy-binding DEV_SECRET_KEY_BASE \
  --member=serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role="roles/secretmanager.secretAccessor"
 
gcloud secrets add-iam-policy-binding DB_USER \
  --member=serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role="roles/secretmanager.secretAccessor"
 
gcloud secrets add-iam-policy-binding DB_PASS \
  --member=serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role="roles/secretmanager.secretAccessor"
 
gcloud secrets add-iam-policy-binding DB_HOST \
  --member=serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role="roles/secretmanager.secretAccessor"

We also need to retrieve the paths for the DB_USER and DB_PASS for use later:

DB_USER_PATH=$(gcloud secrets describe DB_USER --format='value(name)')
DB_PASS_PATH=$(gcloud secrets describe DB_PASS --format='value(name)')
 
echo $DB_USER_PATH $DB_PASS_PATH

8. Connect a GitHub repository to Cloud Build

This step and the next step are easier via Google Cloud Console > Cloud Build > Repositories.

Click "CREATE HOST CONNECTION" and populate the fields.

It will then take you through authentication with GitHub. You will have an option to provide access to all of your GitHub repositories or just a selection. Pick whatever makes sense for your needs.

After you have successfully created a connection, click "LINK A REPOSITORY". Select the connection you just created and your Phoenix app's repository, and accept the generated repository name.

9. Create a Cloud Build trigger

Now we create a trigger via Google Cloud Console > Cloud Build > Triggers.

Click "CREATE TRIGGER" and populate with your desired details:

10. Create a build configuration file

In your Phoenix project's root directory run the following command to create a cloudbuild.yaml file.

cat << EOF > cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  id: Build and Push Docker Image
  script: |
    docker build -t \${_IMAGE_NAME}:latest .
    docker push \${_IMAGE_NAME}:latest
 
- name: 'gcr.io/cloud-builders/docker'
  id: Start Cloud SQL Proxy to Postgres
  args: [
      'run',
      '-d',
      '--name',
      'cloudsql',
      '-p',
      '5432:5432',
      '--network',
      'cloudbuild',
      'gcr.io/cloud-sql-connectors/cloud-sql-proxy',
      '--address',
      '0.0.0.0',
      '\${_INSTANCE_CONNECTION_NAME}'
    ]
 
- name: 'postgres'
  id: Wait for Cloud SQL Proxy to be available
  script: |
    until pg_isready -h cloudsql ; do sleep 1; done
 
- name: \${_IMAGE_NAME}:latest
  id: Run migrations
  env:
  - MIX_ENV=prod
  - SECRET_KEY_BASE=fake-key
  - PGHOST=cloudsql
  - PGDATABASE=\${_DATABASE_NAME}
  secretEnv:
  - PGUSER
  - PGPASSWORD
  script: |
    /app/bin/insight eval "Insight.Release.migrate"
 
- name: 'gcr.io/cloud-builders/gcloud'
  id: Deploy to Cloud Run
  args: [
    'run',
    'deploy',
    '\${_SERVICE_NAME}',
    '--image',
    '\${_IMAGE_NAME}:latest',
    '--region',
    '\${_REGION}',
    '--allow-unauthenticated',
    '--set-secrets=SECRET_KEY_BASE=DEV_SECRET_KEY_BASE:latest',
    '--set-secrets=PGHOST=DB_HOST:latest',
    '--set-secrets=PGUSER=DB_USER:latest',
    '--set-secrets=PGPASSWORD=DB_PASS:latest',
    '--set-env-vars=PGDATABASE=\${_DATABASE_NAME}',
    '--add-cloudsql-instances=\${_INSTANCE_CONNECTION_NAME}'
  ]
 
availableSecrets:
  secretManager:
  - versionName: $DB_USER_PATH/versions/latest
    env: 'PGUSER'
  - versionName: $DB_PASS_PATH/versions/latest
    env: 'PGPASSWORD'
 
images:
  - \${_IMAGE_NAME}:latest
 
options:
  automapSubstitutions: true
  logging: CLOUD_LOGGING_ONLY
 
substitutions:
  _DATABASE_NAME: $DB_NAME
  _IMAGE_NAME: $REGISTRY_URL/app
  _INSTANCE_CONNECTION_NAME: $CONN_NAME
  _REGION: $REGION
  _SERVICE_NAME: insight-dev
EOF

Notes:

  * Because the heredoc delimiter is unquoted, the shell variables we set earlier ($DB_NAME, $REGISTRY_URL, $CONN_NAME, $REGION, $DB_USER_PATH, $DB_PASS_PATH) are expanded when the file is written, while the escaped \${_VARIABLE} placeholders are written literally and resolved by Cloud Build as substitutions at build time. Open the generated cloudbuild.yaml and check that the substitutions and availableSecrets blocks contain real values.
  * The migration step runs the image we just built and reaches Postgres through the Cloud SQL Proxy container started on the cloudbuild network; SECRET_KEY_BASE is set to a throwaway value there because the release only needs it to boot, not to run migrations.
  * The deploy step wires the runtime secrets (SECRET_KEY_BASE, PGHOST, PGUSER, PGPASSWORD) into Cloud Run from Secret Manager and attaches the Cloud SQL instance to the service.

11. Trigger a deploy to Cloud Run

Commit the cloudbuild.yaml file (or any other change), push it to your GitHub repository, and watch it build. You can also manually run the trigger via Google Cloud Console > Cloud Build > Triggers.
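If you would rather test the configuration without pushing a commit, you can also submit a build directly from your local checkout; the substitutions block in cloudbuild.yaml supplies the default values:

gcloud builds submit --config=cloudbuild.yaml .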

You can view previous builds and stream in-progress builds on the Cloud Build History tab.
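The same information is available from the CLI if you prefer:

gcloud builds list --limit=5
gcloud builds log --stream [build_id]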

You should now have a fully deployed application on GCP!

If at any time you need to retrieve details of this service, you can do so with the following command:

gcloud run services list
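To pull out just the URL of the deployed service, something like the following should work (status.url is the field gcloud exposes for managed Cloud Run services):

gcloud run services describe $SERVICE_NAME --region $REGION --format='value(status.url)'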

12. (OPTIONAL) psql into Cloud SQL

If we want to connect to our Cloud SQL database remotely we can use the Cloud SQL Auth Proxy, which establishes a secure, authenticated tunnel to the database using our Google Cloud SDK credentials.

Download and install the Cloud SQL Auth Proxy by following the instructions in the Cloud SQL documentation.

Cloud SQL Proxy utilises your Google Cloud SDK credentials for auth. You can set them with:

gcloud auth application-default login

Start the proxy using our connectionName. The port must not already be in use.

./cloud-sql-proxy --port 54321 $CONN_NAME

If successful you will see output similar to:

Authorizing with Application Default Credentials
Listening on 127.0.0.1:54321

Now we can psql in!

psql "host=127.0.0.1 port=54321 sslmode=disable user=$DB_USER dbname=$DB_NAME"
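To quickly confirm the migrations actually ran against this database, you can also list the tables non-interactively (you will be prompted for $DB_PASS):

psql "host=127.0.0.1 port=54321 sslmode=disable user=$DB_USER dbname=$DB_NAME" -c '\dt'

You should see the products and schema_migrations tables.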