If you’re an enterprise customer interested in installing a GCP runner, please
contact us to get started with the setup process.
Deploy your GCP Runner using our Terraform module. This guide walks through each configuration variable and deployment option so you can tailor the runner to your specific requirements.
Prerequisites
Before starting the deployment, ensure you have completed all requirements from the Overview guide, including:
- GCP Project with billing enabled and sufficient quotas
- VPC and Networking properly configured (including proxy subnet for internal LB)
- SSL Certificate prepared for your chosen load balancer type
- Domain Name with DNS modification capabilities
- Terraform >= 1.3 and the gcloud CLI installed and authenticated (see the quick check below)
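A quick sanity check of the tooling before you begin:

```bash
# Confirm tool versions and authentication before starting
terraform version                 # should report v1.3 or newer
gcloud auth list                  # confirm the expected account is active
gcloud config get-value project   # confirm the target project
```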
Create runner in Ona
Start by creating a new runner in the Ona dashboard to obtain the required authentication credentials.
Access Runner Settings
Navigate to Settings → Runners in your Ona dashboard and click Set up a new runner.
- Provider Selection: Choose Google Cloud Platform from the list of available providers
- Runner Information:
  - Name: Provide a descriptive name for your runner
  - Region: Select the GCP region where you’ll deploy the runner
- Configuration: Click Create to generate the runner configuration
The system will generate a unique Runner ID and Runner Token that you’ll need for the Terraform deployment.
Store the Runner Token securely. You’ll need it for the Terraform configuration and cannot retrieve it again from the dashboard.
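To keep the token out of version control, you can pass it to Terraform as an environment variable instead of writing it into terraform.tfvars (Terraform maps TF_VAR_<name> variables to input variables of the same name):

```bash
# Terraform picks up TF_VAR_runner_token as the runner_token input variable,
# so the token never needs to be stored in terraform.tfvars
export TF_VAR_runner_token="eyJhbGciOiJSUzI1NiIs..."
```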
Download and Initialize
```bash
# Clone the Terraform module repository
git clone <repository-url>
cd gitpod-gcp-terraform

# Copy the example configuration
cp terraform.tfvars.example terraform.tfvars

# Initialize Terraform
terraform init
```
Required configuration variables
These variables must be configured for any GCP Runner deployment:
Core Authentication Variables
Configure the basic runner authentication and identification:
| Variable | Description | Example Value | Required |
|---|---|---|---|
| api_endpoint | Ona management plane API endpoint (from the Ona dashboard) | "https://app.gitpod.io/api" | ✅ Yes |
| runner_id | Unique identifier for your runner (from the Ona dashboard) | "runner-abc123def456" | ✅ Yes |
| runner_token | Authentication token for the runner (from the Ona dashboard) | "eyJhbGciOiJSUzI1NiIs..." | ✅ Yes |
| runner_name | Display name for your runner | "my-company-gcp-runner" | ✅ Yes |
| runner_domain | Domain name for accessing development environments | "dev.yourcompany.com" | ✅ Yes |
```hcl
# Required: Core runner authentication (copy from the Ona dashboard)
api_endpoint  = "https://app.gitpod.io/api"  # Ona management plane API
runner_id     = "runner-abc123def456"        # From the Ona dashboard
runner_token  = "eyJhbGciOiJSUzI1NiIs..."    # From the Ona dashboard
runner_name   = "my-company-gcp-runner"      # Descriptive name
runner_domain = "dev.yourcompany.com"        # Your domain
```
GCP Project and Location
Specify your GCP project and deployment region:
| Variable | Description | Example Value | Required |
|---|---|---|---|
| project_id | Your GCP project ID | "your-gcp-project-123" | ✅ Yes |
| region | GCP region for deployment | "us-central1" | ✅ Yes |
| zones | List of availability zones (2-3 recommended for HA) | ["us-central1-a", "us-central1-b"] | ✅ Yes |
```hcl
# Required: GCP project and location
project_id = "your-gcp-project-123"
region     = "us-central1"
zones      = ["us-central1-a", "us-central1-b", "us-central1-c"]
```
Network Configuration
Configure your existing VPC and subnet infrastructure:
| Variable | Description | Example Value | Required |
|---|---|---|---|
| vpc_name | Name of your existing VPC | "your-company-vpc" | ✅ Yes |
| runner_subnet_name | Subnet where the runner and environments will be deployed | "dev-environments-subnet" | ✅ Yes |
```hcl
# Required: Network configuration
vpc_name           = "your-company-vpc"        # Existing VPC name
runner_subnet_name = "dev-environments-subnet" # Subnet for runner and environments
```
Runner Subnet Requirements:
- This subnet hosts both the runner service and the development environment VMs
- A routable CIDR range can be used when environments need direct corporate network access
- For heavy workloads with high IP usage, use a non-routable CIDR range (e.g., 10.0.0.0/16)
- Recommended CIDR masks:
  - /16 for non-routable subnets (65,534 IPs), recommended for large deployments
  - /24 minimum for routable subnets (254 IPs), suitable for smaller deployments
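If you are unsure of an existing subnet's range, you can inspect it with the gcloud CLI (subnet and region names below match the examples in this guide):

```bash
# Show the CIDR range of the subnet the runner will use
gcloud compute networks subnets describe dev-environments-subnet \
  --region=us-central1 \
  --format="value(ipCidrRange)"
```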
Load balancer configuration
Choose your load balancer type and configure the required variables:
External Load Balancer (Default)
External load balancers provide internet-accessible environments with simplified setup:
| Variable | Description | Example Value | Required |
|---|---|---|---|
| loadbalancer_type | Load balancer type | "external" | ❌ Optional (default) |
| certificate_id | Certificate from Certificate Manager | "projects/.../certificates/cert" | ✅ Yes for external |
```hcl
# External load balancer configuration (default)
loadbalancer_type = "external" # Optional, this is the default
certificate_id    = "projects/your-project/locations/global/certificates/your-cert"
```
Certificate Requirements for External LB:
- Certificate must be stored in Google Certificate Manager
- Must include both the root domain and the wildcard as Subject Alternative Names
- Format: projects/{project}/locations/global/certificates/{name}
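For reference, a self-managed certificate covering both names can be uploaded with the gcloud CLI (file names are placeholders; adjust to your certificate workflow):

```bash
# Upload a certificate covering dev.yourcompany.com and *.dev.yourcompany.com
gcloud certificate-manager certificates create your-cert \
  --location=global \
  --certificate-file=cert.pem \
  --private-key-file=key.pem
```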
Internal Load Balancer (Recommended for Enterprise)
Internal load balancers provide VPC-only access with enhanced security:
| Variable | Description | Example Value | Required |
|---|---|---|---|
| loadbalancer_type | Load balancer type | "internal" | ✅ Yes |
| routable_subnet_name | Routable subnet for internal load balancer IP allocation | "internal-lb-subnet" | ✅ Yes for internal |
| certificate_secret_id | Secret Manager secret with certificate and private key | "projects/.../secrets/cert-secret" | ✅ Yes for internal |
```hcl
# Internal load balancer configuration
loadbalancer_type     = "internal"
routable_subnet_name  = "internal-lb-subnet" # Must be routable from your network
certificate_secret_id = "projects/your-project/secrets/ssl-cert-secret"
```
Certificate Requirements for Internal LB:
- Certificate must be stored in Google Secret Manager
- Must contain both the certificate and the private key in JSON format
- Secret format:

```json
{
  "certificate": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
  "privateKey": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
}
```
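Assuming the JSON above is saved locally as cert-secret.json (a placeholder file name), the secret can be created like this:

```bash
# Create the Secret Manager secret holding the certificate and key
gcloud secrets create ssl-cert-secret --data-file=cert-secret.json
```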
Internal LB Additional Requirements:
- Routable Subnet: routable_subnet_name must be a subnet with routes from your internal/on-premises network (recommended /28, 16 IPs)
- Proxy-Only Subnet: Your VPC must have a separate subnet with purpose REGIONAL_MANAGED_PROXY (recommended /27, 32 IPs). This subnet does not need to be routable from your corporate network.
- Corporate network connectivity to your GCP VPC (VPN, Interconnect, etc.)
- DNS resolution from your corporate network
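If your VPC does not yet have a proxy-only subnet, a sketch of creating one (names and CIDR range are illustrative):

```bash
# Create the proxy-only subnet required by the regional managed proxy
gcloud compute networks subnets create internal-lb-proxy-subnet \
  --network=your-company-vpc \
  --region=us-central1 \
  --range=10.129.0.0/27 \
  --purpose=REGIONAL_MANAGED_PROXY \
  --role=ACTIVE
```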
Optional configuration variables
These variables provide additional customization and enterprise features:
Enterprise Security Features
HTTP Proxy Configuration
For environments behind corporate firewalls:
| Variable | Description | Example Value |
|---|---|---|
| proxy_config.http_proxy | HTTP proxy server URL | "http://proxy.company.com:8080" |
| proxy_config.https_proxy | HTTPS proxy server URL | "https://proxy.company.com:8080" |
| proxy_config.no_proxy | Comma-separated list of hosts that bypass the proxy | ".company.com,localhost,127.0.0.1" |
| proxy_config.all_proxy | All-protocol proxy server URL | "http://proxy.company.com:8080" |
```hcl
# HTTP proxy configuration for corporate environments
proxy_config = {
  http_proxy  = "http://proxy.company.com:8080"
  https_proxy = "https://proxy.company.com:8080"
  no_proxy    = "localhost,127.0.0.1,metadata.google.internal,.company.com"
  all_proxy   = "http://proxy.company.com:8080"
}
```
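A quick way to confirm the proxy passes HTTPS traffic to the management plane (using the proxy URL configured above; expect an HTTP response rather than a timeout):

```bash
# Verify outbound HTTPS connectivity through the corporate proxy
curl --proxy http://proxy.company.com:8080 -I https://app.gitpod.io/api
```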
Customer-Managed Encryption Keys (CMEK)
For compliance with organizational encryption policies:
| Variable | Description | Default Value |
|---|---|---|
| create_cmek | Automatically create KMS keyring and key | false |
| kms_key_name | Existing KMS key (when create_cmek = false) | null |
```hcl
# Option 1: Automatic CMEK setup (recommended)
create_cmek = true
# kms_key_name is ignored when create_cmek = true

# Option 2: Use an existing KMS key
# create_cmek  = false
# kms_key_name = "projects/your-project/locations/us-central1/keyRings/gitpod-keyring/cryptoKeys/gitpod-key"
```
For additional configurations when using pre-existing CMEK keys, refer to the IAM configuration guide in the Terraform module.
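If you go with option 2, the keyring and key can be created ahead of time with the gcloud CLI (names match the commented example above):

```bash
# Create a KMS keyring and key to reference via kms_key_name
gcloud kms keyrings create gitpod-keyring --location=us-central1
gcloud kms keys create gitpod-key \
  --keyring=gitpod-keyring \
  --location=us-central1 \
  --purpose=encryption
```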
Pre-Created Service Accounts
For organizations with strict IAM policies that require pre-created service accounts:
```hcl
# Pre-created service accounts (all optional)
pre_created_service_accounts = {
  runner           = "gitpod-runner@your-project.iam.gserviceaccount.com"
  environment_vm   = "gitpod-env@your-project.iam.gserviceaccount.com"
  build_cache      = "gitpod-cache@your-project.iam.gserviceaccount.com"
  secret_manager   = "gitpod-secrets@your-project.iam.gserviceaccount.com"
  pubsub_processor = "gitpod-pubsub@your-project.iam.gserviceaccount.com"
  proxy_vm         = "gitpod-proxy@your-project.iam.gserviceaccount.com"
}
```
For additional configurations when using pre-created service accounts, refer to the IAM configuration guide in the Terraform module.
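For illustration, one of the accounts above could be pre-created like this (names match the example block; repeat for the others as needed):

```bash
# Pre-create the runner service account referenced in the map above
gcloud iam service-accounts create gitpod-runner \
  --project=your-project \
  --display-name="Ona GCP runner"
```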
Custom Images
For enterprises using internal container registries:
| Variable | Description | Example Value |
|---|---|---|
| custom_images.runner_image | Custom runner container image | "gcr.io/your-project/runner:v1.0" |
| custom_images.proxy_image | Custom proxy container image | "gcr.io/your-project/proxy:v1.0" |
| custom_images.prometheus_image | Custom Prometheus image | "gcr.io/your-project/prometheus:latest" |
| custom_images.docker_config_json | Docker registry credentials (JSON) | jsonencode({...}) |
```hcl
# Custom images configuration
custom_images = {
  runner_image     = "gcr.io/your-project/custom-runner:v1.0"
  proxy_image      = "gcr.io/your-project/custom-proxy:v1.0"
  prometheus_image = "gcr.io/your-project/prometheus:latest"

  # Docker registry authentication (JSON format)
  docker_config_json = jsonencode({
    auths = {
      "gcr.io" = {
        auth = base64encode("_json_key:${file("service-account-key.json")}")
      }
    }
  })

  # Set to true for insecure registries (testing only)
  insecure = false
}
```
When using custom images, you need to set up pipelines to sync images from the stable channel to your internal registry (e.g., Artifactory). Contact Ona support when using this feature for guidance on image synchronization.
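As a rough sketch, syncing one image manually could look like this (the source registry is a placeholder; your actual pipeline will differ):

```bash
# Pull from the stable channel, retag for the internal registry, and push
docker pull <stable-channel-registry>/runner:v1.0
docker tag <stable-channel-registry>/runner:v1.0 gcr.io/your-project/custom-runner:v1.0
docker push gcr.io/your-project/custom-runner:v1.0
```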
Deployment process
Validate Configuration
Before deployment, validate your Terraform configuration:
```bash
# Validate Terraform syntax and configuration
terraform validate

# Plan the deployment and review all changes
terraform plan -out=tfplan

# Review the plan output carefully for:
# - Resources being created in the correct project/region
# - Networking configuration that matches your requirements
# - No unexpected deletions or modifications
```
Deploy Infrastructure
Execute the Terraform deployment:
```bash
# Apply the planned configuration
terraform apply tfplan

# Monitor deployment progress (typically 15-20 minutes);
# the Redis instance creation is usually the longest step
```
Post-Deployment Configuration
After successful deployment, get the load balancer details:
```bash
# Display all Terraform outputs
terraform output

# Key outputs:
# load_balancer_ip = "10.0.1.100" (internal) or "34.102.136.180" (external)
```
Create DNS records pointing to your load balancer:
For External Load Balancer:
```text
yourdomain.com.   A  <external-ip-address>
*.yourdomain.com. A  <external-ip-address>
```
For Internal Load Balancer:
```text
yourdomain.com.   A  <internal-ip-address>
*.yourdomain.com. A  <internal-ip-address>
```
For internal load balancers, ensure your DNS servers can resolve these records and that your corporate network can reach the internal IP address through VPN, Interconnect, or another connectivity method.
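To confirm the records resolve (run from a machine on your corporate network when using an internal load balancer; the hostname under the wildcard is a placeholder):

```bash
# Both the root and wildcard records should return the load balancer IP
dig +short yourdomain.com
dig +short env-test.yourdomain.com   # exercises the wildcard record
```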
Verification
Test Runner Health
Verify the runner is accessible and functioning:
```bash
# Test the health endpoint (use -k for self-signed certificates)
curl -k https://yourdomain.com/_health

# Expected response: {"status":"ok"} with HTTP 200
```
Verify Runner Status in Ona
Monitor runner status in the Ona dashboard:
- Navigate to Settings → Runners
- Verify your runner shows as Connected with green status
- Check Last Seen timestamp is recent (within last few minutes)
- Confirm runner region and configuration are correct
Next steps
With your GCP Runner successfully deployed and verified: