
Creating a Red Hat OpenShift Service on AWS cluster with egress lockdown enhances your cluster's stability and security by allowing your cluster to use the image registry in the local region if the cluster cannot access the internet. Your cluster first tries to pull images from Quay, and when they cannot be reached, it instead pulls the images from the image registry in the local region. All public and private clusters with egress lockdown get their Red Hat container images from a registry that is located in the local region of the cluster instead of gathering these images from various endpoints and registries on the internet. You can create a fully operational cluster that does not require public egress by configuring a virtual private cloud (VPC) and using the --properties zero_egress:true flag when creating your cluster.

Egress lockdown is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites
  • You have an AWS account with sufficient permissions to create VPCs, subnets, and other required infrastructure.

  • You have installed the Terraform v1.4.0+ CLI.

  • You have installed the ROSA v1.2.45+ CLI.

  • You have installed and configured the AWS CLI with the necessary credentials.

  • You have installed the git CLI.

You can use egress lockdown on all supported versions of Red Hat OpenShift Service on AWS that use the hosted control plane architecture; however, Red Hat suggests using the latest available z-stream release for each OpenShift Container Platform version.

While you can install and upgrade your clusters as you would a regular cluster, due to an upstream issue with how the internal image registry functions in disconnected environments, a cluster that uses egress lockdown cannot make full use of all platform components, such as the image registry. You can restore these features by using the latest ROSA version when upgrading or installing your cluster.
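
You can check which ROSA CLI version is currently installed by running the following command:

    $ rosa version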

Creating a Virtual Private Cloud for your egress lockdown ROSA with HCP clusters

You must have a Virtual Private Cloud (VPC) to create ROSA with HCP clusters. You can use one of the following methods to create a VPC:

  • Create a VPC by using a Terraform template

  • Manually create the VPC resources in the AWS console

The Terraform instructions are for testing and demonstration purposes. Your own installation requires modifications to the VPC for your specific needs and constraints. Ensure that you run the following Terraform script in the same region where you intend to install your cluster. The following examples use us-east-2.

Creating a Virtual Private Cloud using Terraform

Terraform is a tool that allows you to create various resources using an established template. The following process uses the default options as required to create a ROSA with HCP cluster. For more information about using Terraform, see the additional resources.

Prerequisites
  • You have installed Terraform version 1.4.0 or newer on your machine.

  • You have installed Git on your machine.

Procedure
  1. Open a shell prompt and clone the Terraform VPC repository by running the following command:

    $ git clone https://github.com/openshift-cs/terraform-vpc-example
  2. Navigate to the created directory by running the following command:

    $ cd terraform-vpc-example/zero-egress
  3. Initialize Terraform by running the following command:

    $ terraform init

    A message confirming the initialization appears when this process completes.
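
    The confirmation looks similar to the following; the exact text can vary by Terraform version:

    Example output
    Terraform has been successfully initialized!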

  4. To build your VPC Terraform plan based on the existing Terraform template, run the plan command. You must include your AWS region, availability zones, CIDR blocks, and private subnets. You can choose to specify a cluster name. A rosa-zero-egress.tfplan file is added to the hypershift-tf directory after the terraform plan completes. For more detailed options, see the Terraform VPC repository’s README file.

    $ terraform plan -out rosa-zero-egress.tfplan -var region=<aws_region> \ (1)
          -var 'availability_zones=["aws_region_1a","aws_region_1b","aws_region_1c"]' \ (2)
          -var vpc_cidr_block=10.0.0.0/16 \ (3)
          -var 'private_subnets=["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]' (4)
    
    1 Enter your AWS region.
    2 Enter the availability zones for the VPC. For example, for a VPC that uses ap-southeast-1, you would use the following as availability zones: ["ap-southeast-1a", "ap-southeast-1b", "ap-southeast-1c"].
    3 Enter the CIDR block for your VPC.
    4 Enter each of the subnets that are created for the VPC.
  5. Apply this plan file to build your VPC by running the following command:

    $ terraform apply rosa-zero-egress.tfplan
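
    Optionally, you can capture the private subnet IDs for later use, such as for the $SUBNET_IDS variable used when creating the cluster. The following is a minimal sketch that assumes the Terraform template defines an output named private_subnets and that jq is installed; check the repository's README file for the actual output names:

    $ export SUBNET_IDS=$(terraform output -json private_subnets | jq -r 'join(",")')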

Creating a Virtual Private Cloud manually

If you choose to manually create your Virtual Private Cloud (VPC) instead of using Terraform, go to the VPC page in the AWS console. Your VPC must meet the requirements shown in the following table.

Table 1. Requirements for your VPC

  • VPC name: You need the specific VPC name and ID when creating your cluster.

  • CIDR range: Your VPC CIDR range should match your machine CIDR.

  • Availability zones: You need one availability zone for a single-zone cluster, and three availability zones for a multi-zone cluster.

  • Public subnet: You must have one public subnet with an internet gateway for public clusters.

  • Private subnet: You must have exactly one private subnet in each availability zone (AZ) for installing machine pools in ROSA with HCP clusters. A NAT gateway can be associated with this subnet to allow outbound internet access for the instances. Private clusters do not need a public subnet.

  • DNS hostname and resolution: You must ensure that the DNS hostname and resolution are enabled.
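
If DNS hostnames and DNS resolution are not already enabled on your VPC, you can enable both attributes with the aws CLI. For example, where <vpc_id> is your VPC's ID:

    $ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-support '{"Value":true}'
    $ aws ec2 modify-vpc-attribute --vpc-id <vpc_id> --enable-dns-hostnames '{"Value":true}'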

Tagging your subnets

Before you can use your VPC to create a ROSA with HCP cluster, you must tag your VPC subnets. Automated service preflight checks verify that these resources are tagged correctly before you can use them. The following table shows how your resources must be tagged:

  • Public subnet: Key kubernetes.io/role/elb, Value 1 or no value

  • Private subnet: Key kubernetes.io/role/internal-elb, Value 1 or no value

You must tag at least one private subnet and, if applicable, one public subnet.

Prerequisites
  • You have created a VPC.

  • You have installed the aws CLI.

Procedure
  1. Tag your resources in your terminal by running the following commands:

    1. For public subnets, run:

      $ aws ec2 create-tags --resources <public-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
    2. For private subnets, run:

      $ aws ec2 create-tags --resources <private-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1
Verification
  • Verify that the tag is correctly applied by running the following command:

    $ aws ec2 describe-tags --filters "Name=resource-id,Values=<subnet_id>"
    Example output
    TAGS    Name                    <subnet-id>        subnet  <prefix>-subnet-public1-us-east-1a
    TAGS    kubernetes.io/role/elb  <subnet-id>        subnet  1

Configuring AWS security groups and PrivateLink connections

After creating your VPC, create your AWS security groups and VPC endpoints.

Procedure
  1. Create the AWS security group by running the following command:

    $ aws ec2 create-security-group \
            --group-name allow-inbound-traffic \
            --description "allow inbound traffic" \
            --vpc-id <vpc_id> \ (1)
            --region <aws_region> (2)
    1 Enter your VPC’s ID.
    2 Enter the AWS region where the VPC was installed.
  2. Grant access to the security group’s ingress by running the following command:

    $ aws ec2 authorize-security-group-ingress \
            --group-id <group_id> \ (1)
            --protocol -1 \
            --port 0-0 \
            --cidr <vpc_cidr> \ (2)
            --region <aws_region> (3)
    1 Enter the ID of the security group that you created with the previous command.
    2 Enter the CIDR of your VPC.
    3 Enter the AWS region where you installed your VPC.
  3. Create your STS VPC endpoint by running the following command:

    $ aws ec2 create-vpc-endpoint \
        --vpc-id <vpc_id> \ (1)
        --service-name com.amazonaws.<aws_region>.sts \ (2)
        --vpc-endpoint-type Interface
    1 Enter your VPC’s ID.
    2 Enter the AWS region where the VPC was installed.
  4. Create your ECR VPC endpoint by running the following command:

    $ aws ec2 create-vpc-endpoint \
        --vpc-id <vpc_id> \
        --service-name com.amazonaws.<aws_region>.ecr.dkr \ (1)
        --vpc-endpoint-type Interface
    1 Enter the AWS region where the VPC is located.
  5. Create your S3 VPC endpoint by running the following command:

    $ aws ec2 create-vpc-endpoint \
        --vpc-id <vpc_id> \
        --service-name com.amazonaws.<aws_region>.s3
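
Verification
  • Confirm that the endpoints were created in your VPC by running the following command:

    $ aws ec2 describe-vpc-endpoints --filters "Name=vpc-id,Values=<vpc_id>" \
        --query 'VpcEndpoints[].[ServiceName,State]' --output text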

Creating the account-wide STS roles and policies

Before using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, to create ROSA with hosted control planes (HCP) clusters, create the required account-wide roles and policies, including the Operator policies.

ROSA with HCP clusters require account and Operator roles with AWS managed policies attached. Customer managed policies are not supported. For more information regarding AWS managed policies for ROSA with HCP clusters, see AWS managed policies for ROSA account roles.

Prerequisites
  • You have completed the AWS prerequisites for ROSA with HCP.

  • You have available AWS service quotas.

  • You have enabled the ROSA service in the AWS Console.

  • You have installed and configured the latest ROSA CLI (rosa) on your installation host.

  • You have logged in to your Red Hat account by using the ROSA CLI.

Procedure
  1. If they do not exist in your AWS account, create the required account-wide STS roles and attach the policies by running the following command:

    $ rosa create account-roles --hosted-cp
  2. Ensure that your worker role has the correct AWS policy attached by running the following command:

    $ aws iam attach-role-policy \
    --role-name ManagedOpenShift-HCP-ROSA-Worker-Role \ (1)
    --policy-arn "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
    1 This role name must include the account role prefix that was created in the previous step. The default prefix is ManagedOpenShift.
  3. Optional: Set your prefix as an environment variable by running the following command:

    $ export ACCOUNT_ROLES_PREFIX=<account_role_prefix>
    • View the value of the variable by running the following command:

      $ echo $ACCOUNT_ROLES_PREFIX
      Example output
      ManagedOpenShift
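
Verification
  • List the account-wide roles in your AWS account by running the following command:

    $ rosa list account-roles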

For more information regarding AWS managed IAM policies for ROSA, see AWS managed IAM policies for ROSA.

Creating an OpenID Connect configuration

When using a Red Hat OpenShift Service on AWS cluster, you can create the OpenID Connect (OIDC) configuration prior to creating your cluster. This configuration is registered to be used with OpenShift Cluster Manager.

Prerequisites
  • You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host.

Procedure
  1. To create your OIDC configuration alongside the AWS resources, run the following command:

    $ rosa create oidc-config --mode=auto --yes

    This command returns the following information.

    Example output
    ? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
    I: Setting up managed OIDC configuration
    I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
    	rosa create operator-roles --prefix <user-defined> --oidc-config-id 13cdr6b
    If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
    I: Creating OIDC provider using 'arn:aws:iam::4540112244:user/userName'
    ? Create the OIDC provider? Yes
    I: Created OIDC provider with ARN 'arn:aws:iam::4540112244:oidc-provider/dvbwgdztaeq9o.cloudfront.net/13cdr6b'

    When creating your cluster, you must supply the OIDC configuration ID. The CLI output provides this value when you use --mode auto; otherwise, you must determine the value from the aws CLI output when you use --mode manual.

  2. Optional: You can save the OIDC configuration ID as a variable to use later. Run the following command to save the variable:

    $ export OIDC_ID=<oidc_config_id> (1)
    1 In the example output above, the OIDC configuration ID is 13cdr6b.
    • View the value of the variable by running the following command:

      $ echo $OIDC_ID
      Example output
      13cdr6b
Verification
  • You can list the possible OIDC configurations available for your clusters that are associated with your user organization. Run the following command:

    $ rosa list oidc-config
    Example output
    ID                                MANAGED  ISSUER URL                                                             SECRET ARN
    2330dbs0n8m3chkkr25gkkcd8pnj3lk2  true     https://dvbwgdztaeq9o.cloudfront.net/2330dbs0n8m3chkkr25gkkcd8pnj3lk2
    233hvnrjoqu14jltk6lhbhf2tj11f8un  false    https://oidc-r7u1.s3.us-east-1.amazonaws.com                           aws:secretsmanager:us-east-1:242819244:secret:rosa-private-key-oidc-r7u1-tM3MDN
    

Creating Operator roles and policies

When using a ROSA with HCP cluster, you must create the Operator IAM roles that are required for Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) deployments. The cluster Operators use the Operator roles to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, cloud provider credentials, and external access to a cluster.

Prerequisites
  • You have completed the AWS prerequisites for ROSA with HCP.

  • You have installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa, on your installation host.

  • You created the account-wide AWS roles.

Procedure
  1. Set your prefix name as an environment variable by running the following command:

    $ export OPERATOR_ROLES_PREFIX=<prefix_name>
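
    The command in the next step also references the ${AWS_ACCOUNT_ID} variable. If you have not already set it, you can derive it from your current AWS credentials by running the following command:

    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)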
  2. To create your Operator roles, run the following command:

    $ rosa create operator-roles --hosted-cp --prefix=$OPERATOR_ROLES_PREFIX --oidc-config-id=$OIDC_ID --installer-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role

    The following breakdown provides options for the Operator role creation.

    $ rosa create operator-roles --hosted-cp \
    	--prefix=$OPERATOR_ROLES_PREFIX \ (1)
    	--oidc-config-id=$OIDC_ID \ (2)
    	--installer-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ACCOUNT_ROLES_PREFIX}-HCP-ROSA-Installer-Role (3)
    1 You must supply a prefix when creating these Operator roles. Failing to do so produces an error. See the Additional resources of this section for information on the Operator prefix.
    2 This value is the OIDC configuration ID that you created for your ROSA with HCP cluster.
    3 This value is the installer role ARN that you created when you created the ROSA account roles.

    You must include the --hosted-cp parameter to create the correct roles for ROSA with HCP clusters. This command returns the following information.

    Example output
    ? Role creation mode: auto
    ? Operator roles prefix: <pre-filled_prefix> (1)
    ? OIDC Configuration ID: 23soa2bgvpek9kmes9s7os0a39i13qm4 | https://dvbwgdztaeq9o.cloudfront.net/23soa2bgvpek9kmes9s7os0a39i13qm4 (2)
    ? Create hosted control plane operator roles: Yes
    W: More than one Installer role found
    ? Installer role ARN: arn:aws:iam::4540112244:role/<prefix>-HCP-ROSA-Installer-Role
    ? Permissions boundary ARN (optional):
    I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
    I: Creating roles using 'arn:aws:iam::4540112244:user/<userName>'
    I: Created role '<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials'
    I: Created role '<prefix>-openshift-cloud-network-config-controller-cloud-credenti' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti'
    I: Created role '<prefix>-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager'
    I: Created role '<prefix>-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager'
    I: Created role '<prefix>-kube-system-control-plane-operator' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator'
    I: Created role '<prefix>-kube-system-kms-provider' with ARN 'arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider'
    I: Created role '<prefix>-openshift-image-registry-installer-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials'
    I: Created role '<prefix>-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials'
    I: To create a cluster with these roles, run the following command:
    	rosa create cluster --sts --oidc-config-id 23soa2bgvpek9kmes9s7os0a39i13qm4 --operator-roles-prefix <prefix> --hosted-cp
    1 This field is prepopulated with the prefix that you set in the initial creation command.
    2 This field requires you to select an OIDC configuration that you created for your ROSA with HCP cluster.

    The Operator roles are now created and ready to use for creating your ROSA with HCP cluster.

Verification
  • You can list the Operator roles associated with your ROSA account. Run the following command:

    $ rosa list operator-roles
    Example output
    I: Fetching operator roles
    ROLE PREFIX  AMOUNT IN BUNDLE
    <prefix>      8
    ? Would you like to detail a specific prefix Yes (1)
    ? Operator Role Prefix: <prefix>
    ROLE NAME                                                         ROLE ARN                                                                                         VERSION  MANAGED
    <prefix>-kube-system-capa-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-capa-controller-manager                       4.13     No
    <prefix>-kube-system-control-plane-operator                        arn:aws:iam::4540112244:role/<prefix>-kube-system-control-plane-operator                        4.13     No
    <prefix>-kube-system-kms-provider                                  arn:aws:iam::4540112244:role/<prefix>-kube-system-kms-provider                                  4.13     No
    <prefix>-kube-system-kube-controller-manager                       arn:aws:iam::4540112244:role/<prefix>-kube-system-kube-controller-manager                       4.13     No
    <prefix>-openshift-cloud-network-config-controller-cloud-credenti  arn:aws:iam::4540112244:role/<prefix>-openshift-cloud-network-config-controller-cloud-credenti  4.13     No
    <prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       arn:aws:iam::4540112244:role/<prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials       4.13     No
    <prefix>-openshift-image-registry-installer-cloud-credentials      arn:aws:iam::4540112244:role/<prefix>-openshift-image-registry-installer-cloud-credentials      4.13     No
    <prefix>-openshift-ingress-operator-cloud-credentials              arn:aws:iam::4540112244:role/<prefix>-openshift-ingress-operator-cloud-credentials              4.13     No
    1 After the command runs, it displays all the prefixes associated with your AWS account and notes how many roles are associated with this prefix. If you need to see all of these roles and their details, enter "Yes" on the detail prompt to have these roles listed out with specifics.

Creating a ROSA with HCP cluster with egress lockdown using the CLI

When using the Red Hat OpenShift Service on AWS (ROSA) command-line interface (CLI), rosa, to create a cluster, you can select the default options to create the cluster quickly.

Prerequisites
  • You have completed the AWS prerequisites for ROSA with HCP.

  • You have available AWS service quotas.

  • You have enabled the ROSA service in the AWS Console.

  • You have installed and configured the latest ROSA CLI (rosa) on your installation host. Run rosa version to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade.

  • You have logged in to your Red Hat account by using the ROSA CLI.

  • You have created an OIDC configuration.

  • You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.
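
    You can check whether this role exists, and create it if it is missing, by running the following commands:

      $ aws iam get-role --role-name AWSServiceRoleForElasticLoadBalancing || \
          aws iam create-service-linked-role --aws-service-name elasticloadbalancing.amazonaws.com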

Procedure
  1. Use one of the following commands to create your ROSA with HCP cluster:

    When creating a ROSA with HCP cluster, the default machine Classless Inter-Domain Routing (CIDR) is 10.0.0.0/16. If this does not correspond to the CIDR range for your VPC subnets, add --machine-cidr <address_block> to the following commands. To learn more about the default CIDR ranges for Red Hat OpenShift Service on AWS, see the CIDR range definitions.

    • If you did not set environment variables, run the following command:

      $ rosa create cluster --cluster-name=<cluster_name> \ (1)
           --mode=auto --hosted-cp [--private] \ (2)
           --operator-roles-prefix <operator-role-prefix> \ (3)
           --oidc-config-id <id-of-oidc-configuration> \
           --subnet-ids=<private-subnet-id> --region <region> \
           --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \
           --pod-cidr 10.128.0.0/14 --host-prefix 23 \
           --billing-account <root-acct-id> \ (4)
           --properties zero_egress:true
      1 Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the --domain-prefix flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
      2 Optional: Include the --private flag to create a cluster with a privately available API and a privately available Ingress.
      3 By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace <cluster_name>-<hash> in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see About custom Operator IAM role prefixes.

      If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected. The custom path is applied to the cluster-specific Operator roles when you create them in a later step.

      4 Provide the AWS account that is responsible for all billing.
    • If you set the environment variables, create a cluster with egress lockdown that has a single, initial machine pool, using a privately available API, and a privately available Ingress by running the following command:

      $ rosa create cluster --private --cluster-name=<cluster_name> \
          --mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
          --oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS \
          --region <region> --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \
          --pod-cidr 10.128.0.0/14 --host-prefix 23 --billing-account <root-acct-id> \
          --properties zero_egress:true
  2. Check the status of your cluster by running the following command:

    $ rosa describe cluster --cluster=<cluster_name>

    The following State field changes are listed in the output as cluster installation progresses:

    • pending (Preparing account)

    • installing (DNS setup in progress)

    • installing

    • ready

      If the installation fails or the State field does not change to ready after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see Troubleshooting installations. For steps to contact Red Hat Support for assistance, see Getting support for Red Hat OpenShift Service on AWS.

  3. Track the cluster creation progress by watching the Red Hat OpenShift Service on AWS installation program logs. To check the logs, run the following command:

    $ rosa logs install --cluster=<cluster_name> --watch (1)
    1 Optional: To watch for new log messages as the installation progresses, use the --watch argument.