Prerequisites

Before you create a multi-Kubernetes-cluster deployment using either the quick start or a deployment procedure, complete the following tasks:

Review Supported Hardware Architectures

See supported hardware architectures.

Clone the MongoDB Enterprise Kubernetes Operator Repository

Clone the MongoDB Enterprise Kubernetes Operator repository:

git clone https://github.com/mongodb/mongodb-enterprise-kubernetes.git
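
The helper scripts referenced later on this page (such as setup_tls.sh under tools/multicluster) live in this repository, so change into the cloned directory:

cd mongodb-enterprise-kubernetes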

Set Environment Variables and GKE Zones

Set the environment variables with cluster names and the available GKE zones where you deploy the clusters, as in this example:

export MDB_GKE_PROJECT={GKE project name}

export MDB_CENTRAL_CLUSTER="mdb-central"
export MDB_CENTRAL_CLUSTER_ZONE="us-west1-a"

export MDB_CLUSTER_1="mdb-1"
export MDB_CLUSTER_1_ZONE="us-west1-b"

export MDB_CLUSTER_2="mdb-2"
export MDB_CLUSTER_2_ZONE="us-east1-b"

export MDB_CLUSTER_3="mdb-3"
export MDB_CLUSTER_3_ZONE="us-central1-a"

export MDB_CENTRAL_CLUSTER_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CENTRAL_CLUSTER_ZONE}_${MDB_CENTRAL_CLUSTER}"

export MDB_CLUSTER_1_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_1_ZONE}_${MDB_CLUSTER_1}"
export MDB_CLUSTER_2_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_2_ZONE}_${MDB_CLUSTER_2}"
export MDB_CLUSTER_3_FULL_NAME="gke_${MDB_GKE_PROJECT}_${MDB_CLUSTER_3_ZONE}_${MDB_CLUSTER_3}"
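
As a sanity check, echo one of the derived names. Each *_FULL_NAME value matches the kubectl context name that gcloud later creates for that cluster:

echo "$MDB_CLUSTER_1_FULL_NAME"
# gke_{GKE project name}_us-west1-b_mdb-1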

Set up GKE Clusters

Set up GKE (Google Kubernetes Engine) clusters:

1. Set up your Google Cloud account.

If you have not done so already, create a Google Cloud project, enable billing on the project, enable the Artifact Registry and GKE APIs, and launch Cloud Shell by following the relevant procedures in the Google Kubernetes Engine Quickstart in the Google Cloud documentation.
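
If you work from a local terminal instead of Cloud Shell, here is a minimal sketch of this setup with the gcloud CLI (assuming the CLI is already installed):

gcloud auth login
gcloud config set project $MDB_GKE_PROJECT
# Enable the required APIs.
gcloud services enable container.googleapis.com artifactregistry.googleapis.com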

2. Create a central cluster and member clusters.

Create one central cluster and one or more member clusters, specifying the GKE zones, the number of nodes, and the instance types, as in these examples:

gcloud container clusters create $MDB_CENTRAL_CLUSTER \
  --zone=$MDB_CENTRAL_CLUSTER_ZONE \
  --num-nodes=5 \
  --machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_1 \
  --zone=$MDB_CLUSTER_1_ZONE \
  --num-nodes=5 \
  --machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_2 \
  --zone=$MDB_CLUSTER_2_ZONE \
  --num-nodes=5 \
  --machine-type "e2-standard-2"
gcloud container clusters create $MDB_CLUSTER_3 \
  --zone=$MDB_CLUSTER_3_ZONE \
  --num-nodes=5 \
  --machine-type "e2-standard-2"
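
Cluster creation can take several minutes. To confirm that all four clusters are provisioned and in the RUNNING state:

gcloud container clusters list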

Obtain User Authentication Credentials for Central and Member Clusters

Obtain user authentication credentials for the central and member Kubernetes clusters and save the credentials. You will later use these credentials for running kubectl commands on these clusters.

Run the following commands:

gcloud container clusters get-credentials $MDB_CENTRAL_CLUSTER \
  --zone=$MDB_CENTRAL_CLUSTER_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_1 \
  --zone=$MDB_CLUSTER_1_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_2 \
  --zone=$MDB_CLUSTER_2_ZONE

gcloud container clusters get-credentials $MDB_CLUSTER_3 \
  --zone=$MDB_CLUSTER_3_ZONE
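
Each command adds a kubectl context that follows the same gke_<project>_<zone>_<name> pattern as the *_FULL_NAME variables you exported earlier. To verify that all four contexts exist:

kubectl config get-contexts -o name | grep "gke_${MDB_GKE_PROJECT}_"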

Install Go and Helm

Install the following tools:

  1. Install Go v1.17 or later.
  2. Install Helm.
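
The Helm commands later on this page install charts from the mongodb Helm repository. If you haven't added it yet, one way to do so (assuming the public MongoDB Helm charts repository):

helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update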

Install the kubectl MongoDB Plugin

Use the kubectl mongodb plugin to set up your multi-Kubernetes-cluster deployment and, when necessary, to recover it (the setup and recover commands are shown at the end of this section).

To install the kubectl mongodb plugin:

1. Download your desired Kubernetes Operator package version.

Download your desired Kubernetes Operator package version from the Release Page of the MongoDB Enterprise Kubernetes Operator Repository.

The package’s name uses this pattern: kubectl-mongodb-multicluster_{{ .Version }}_{{ .Os }}_{{ .Arch }}.tar.gz.

Use one of the following packages:

  • kubectl-mongodb-multicluster_{{ .Version }}_darwin_amd64.tar.gz
  • kubectl-mongodb-multicluster_{{ .Version }}_darwin_arm64.tar.gz
  • kubectl-mongodb-multicluster_{{ .Version }}_linux_amd64.tar.gz
  • kubectl-mongodb-multicluster_{{ .Version }}_linux_arm64.tar.gz

2. Unpack the Kubernetes Operator package.

Unpack the package, as in the following example:

tar -zxvf kubectl-mongodb-multicluster_<version>_darwin_amd64.tar.gz

3. Locate the kubectl mongodb plugin binary and copy it to its desired destination.

Find the kubectl-mongodb binary in the unpacked directory and move it to a directory in the PATH of the Kubernetes Operator user, as shown in the following example:

mv kubectl-mongodb /usr/local/bin/kubectl-mongodb
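
To confirm that kubectl discovers the plugin on your PATH:

kubectl plugin list
# The output should include /usr/local/bin/kubectl-mongodb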

Now you can run the kubectl mongodb plugin using the following commands:

kubectl mongodb multicluster setup
kubectl mongodb multicluster recover

To learn more about the supported flags, see the MongoDB kubectl plugin Reference.

Understand Kubernetes Roles and Role Bindings

To use a multi-Kubernetes-cluster deployment, you must have a specific set of Kubernetes Roles, ClusterRoles, RoleBindings, ClusterRoleBindings, and ServiceAccounts, which you can configure in any of the following ways:

  • Follow the Multi-Kubernetes-Cluster Quick Start, which tells you how to use the MongoDB Plugin to automatically create the required objects and apply them to the appropriate clusters within your multi-Kubernetes-cluster deployment.

  • Use Helm to configure the required Kubernetes Roles and service accounts for each member cluster:

    helm template --show-only \
      templates/database-roles.yaml \
      mongodb/enterprise-operator \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_1_FULL_NAME \
      --namespace mongodb
    
    helm template --show-only \
      templates/database-roles.yaml \
      mongodb/enterprise-operator \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_2_FULL_NAME \
      --namespace mongodb
    
    helm template --show-only \
      templates/database-roles.yaml \
      mongodb/enterprise-operator \
      --set namespace=mongodb | \
    kubectl apply -f - \
      --context=$MDB_CLUSTER_3_FULL_NAME \
      --namespace mongodb
    
  • Manually create Kubernetes object .yaml files and add the required Kubernetes Roles and service accounts to your multi-Kubernetes-cluster deployment with the kubectl apply command. This may be necessary for certain highly automated workflows. MongoDB provides sample configuration files for both namespace-scoped and cluster-scoped resources.

    Each file defines multiple resources. To support your deployment, you must replace the placeholder values in the following fields:

    • subjects.namespace in each RoleBinding or ClusterRoleBinding resource
    • metadata.namespace in each ServiceAccount resource

    After modifying the definitions, apply them by running the following command for each file:

    kubectl apply -f <fileName>
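
For illustration, here is a minimal sketch of this substitution that uses hypothetical resource names and the mongodb namespace; take the real definitions from the sample configuration files and replace only the namespace placeholders:

# Hypothetical names for illustration only.
kubectl apply --context=$MDB_CLUSTER_1_FULL_NAME -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongodb-enterprise-database-pods
  namespace: mongodb                     # metadata.namespace placeholder
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mongodb-enterprise-database-pods-binding
  namespace: mongodb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mongodb-enterprise-database-pods-role
subjects:
  - kind: ServiceAccount
    name: mongodb-enterprise-database-pods
    namespace: mongodb                   # subjects.namespace placeholder
EOF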
    

Set the Deployment’s Scope

By default, the multi-cluster Kubernetes Operator is scoped to the namespace in which you install it. The Kubernetes Operator reconciles the MongoDBMultiCluster resource deployed in the same namespace as the Kubernetes Operator.

When you run the MongoDB kubectl plugin as part of the multi-cluster quick start and don't modify the plugin's settings, the plugin:

  • Creates a default ConfigMap named mongodb-enterprise-operator-member-list that contains all the member clusters of the multi-Kubernetes-cluster deployment. This name is hard-coded and you can’t change it. See Known Issues.
  • Creates service accounts, Roles, and RoleBindings in the central cluster and each member cluster.
  • Applies the correct permissions for service accounts.
  • Uses the preceding settings to create your multi-Kubernetes-cluster deployment.

Once the Kubernetes Operator creates the multi-Kubernetes-cluster deployment, it starts watching MongoDB Kubernetes resources in the mongodb namespace.

To configure the Kubernetes Operator with the correct permissions to deploy in multiple or all namespaces, run the following command and specify the namespaces that you would like the Kubernetes Operator to watch.

kubectl mongodb multicluster setup \
  --central-cluster="${MDB_CENTRAL_CLUSTER_FULL_NAME}" \
  --member-clusters="${MDB_CLUSTER_1_FULL_NAME},${MDB_CLUSTER_2_FULL_NAME},${MDB_CLUSTER_3_FULL_NAME}" \
  --member-cluster-namespace="mongodb2" \
  --central-cluster-namespace="mongodb2" \
  --cluster-scoped="true"

When you install the multi-Kubernetes-cluster deployment to multiple or all namespaces, you can configure the Kubernetes Operator to watch resources in multiple namespaces or in all namespaces, as described in the following sections.

Watch Resources in Multiple Namespaces

If you scope the multi-Kubernetes-cluster deployment to multiple namespaces, you can configure the Kubernetes Operator to watch MongoDB Kubernetes resources in each of those namespaces.

Set the WATCH_NAMESPACE environment variable (spec.template.spec.containers.env) in the mongodb-enterprise.yaml file from the MongoDB Enterprise Kubernetes Operator GitHub repository to the comma-separated list of namespaces that you would like the Kubernetes Operator to watch:

WATCH_NAMESPACE: "$namespace1,$namespace2,$namespace3"

Run the following command and replace the values in the last line with the namespaces that you would like the Kubernetes Operator to watch.

helm upgrade \
  --install \
  mongodb-enterprise-operator-multi-cluster \
  mongodb/enterprise-operator \
  --namespace mongodb \
  --set namespace=mongodb \
  --version <mongodb-kubernetes-operator-version> \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set "multiCluster.clusters={$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME}" \
  --set operator.watchNamespace="$namespace1,$namespace2,$namespace3"

Watch Resources in All Namespaces

If you set the scope for the multi-Kubernetes-cluster deployment to all namespaces instead of the default mongodb namespace, you can configure the Kubernetes Operator to watch MongoDB Kubernetes resources in all namespaces in the multi-Kubernetes-cluster deployment.

Set the WATCH_NAMESPACE environment variable in mongodb-enterprise.yaml to "*". You must include the double quotation marks (") around the asterisk (*) in the YAML file.

WATCH_NAMESPACE: "*"

Run the following command:

helm upgrade \
  --install \
  mongodb-enterprise-operator-multi-cluster \
  mongodb/enterprise-operator \
  --namespace mongodb \
  --set namespace=mongodb \
  --version <mongodb-kubernetes-operator-version> \
  --set operator.name=mongodb-enterprise-operator-multi-cluster \
  --set operator.createOperatorServiceAccount=false \
  --set "multiCluster.clusters={$MDB_CLUSTER_1_FULL_NAME,$MDB_CLUSTER_2_FULL_NAME,$MDB_CLUSTER_3_FULL_NAME}" \
  --set operator.watchNamespace="*"

Plan for External Connectivity: Should You Use a Service Mesh?

A service mesh enables inter-cluster communication between the replica set members deployed in different Kubernetes clusters. Using a service mesh greatly simplifies creating multi-Kubernetes-cluster deployments and is the recommended way of deploying MongoDB across multiple Kubernetes clusters. However, if your IT organization doesn’t use a service mesh, you can deploy a replica set in a multi-Kubernetes-cluster deployment without it.

Depending on your environment, either install a service mesh (see Optional: Install Istio) or enable external connectivity through external domains and DNS zones, as described in the following sections.

How Does the Kubernetes Operator Establish Connectivity?

Regardless of the deployment type, a MongoDB deployment in Kubernetes must establish the following connections:

  • From the Ops Manager Automation Agent in the Pod to its mongod process, to enable lifecycle management and monitoring of the MongoDB deployment.
  • From the Ops Manager Automation Agent in the Pod to the Ops Manager instance, to enable automation.
  • Between all mongod processes, to allow replication.

When the Kubernetes Operator deploys the MongoDB resources, it treats these connectivity requirements in the following ways, depending on the type of deployment:

  • In a single Kubernetes cluster deployment, the Kubernetes Operator configures hostnames in the replica set as FQDNs of a Headless Service. This is a single service whose DNS resolves each Pod's FQDN directly to the IP address of the Pod hosting a MongoDB instance, as follows: <pod-name>.<replica-set-name>-svc.<namespace>.svc.cluster.local.

  • In a multi-Kubernetes-cluster deployment that uses a service mesh, the Kubernetes Operator creates a separate StatefulSet for each MongoDB replica set member in the Kubernetes cluster. A service mesh allows communication between mongod processes across distinct Kubernetes clusters.

    Using a service mesh allows the multi-Kubernetes-cluster deployment to:

    • Achieve global DNS hostname resolution across Kubernetes clusters and establish connectivity between them. For each MongoDB deployment Pod in each Kubernetes cluster, the Kubernetes Operator creates a ClusterIP service through the spec.duplicateServiceObjects: true configuration in the MongoDBMultiCluster resource. Each process has a hostname set to this service's FQDN: <pod-name>-svc.<namespace>.svc.cluster.local. These hostnames resolve from DNS to the service's ClusterIP in each member cluster (see the DNS lookup sketch after this list).
    • Establish communication between Pods in different Kubernetes clusters. As a result, replica set members hosted on different clusters form a single replica set across these clusters.
  • In a multi-Kubernetes-cluster deployment without a service mesh, the Kubernetes Operator uses the MongoDBMultiCluster resource settings described in Enable External Connectivity through External Domains and DNS Zones to expose all mongod processes externally. This enables DNS resolution of hostnames between distinct Kubernetes clusters and establishes connectivity between Pods routed through the networks that connect these clusters.
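
For example, with the service mesh in place you can spot-check cross-cluster DNS resolution from inside one member cluster (a sketch with hypothetical resource names in the mongodb namespace):

kubectl run dns-test --context="${MDB_CLUSTER_1_FULL_NAME}" \
  -n mongodb -i --tty --rm --restart=Never --image=busybox -- \
  nslookup my-replica-set-1-0-svc.mongodb.svc.cluster.local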

Optional: Install Istio

Install Istio in multi-primary mode on different networks by following the Istio documentation. Istio is a service mesh that simplifies DNS resolution and helps establish inter-cluster communication between the member Kubernetes clusters in a multi-Kubernetes-cluster deployment. If you choose to use a service mesh, you must install it. If you can't use a service mesh, skip this section; instead, use external domains and configure DNS to enable external connectivity.

In addition, we offer the install_istio_separate_network example script. This script is based on the Istio documentation and provides an example installation that uses multi-primary mode on different networks. We don't guarantee the script's maintenance with future Istio releases. If you choose to use the script, review the latest Istio documentation for multicluster installation and, if necessary, adjust the script to match the documentation and your deployment. If you use another service mesh solution, create your own script for configuring separate networks to facilitate DNS resolution.

Enable External Connectivity through External Domains and DNS Zones

If you don’t use a service mesh, do the following to enable external connectivity to and between mongod processes and the Ops Manager Automation Agent:

  • Register mongod processes on externally-available hostnames using one of the following approaches:

    • When you create a Kubernetes cluster for your multi-Kubernetes-cluster deployment, use the spec.clusterDomain setting to specify an externally-available custom domain instead of the default domain. With the default cluster domain, mongod processes use *.cluster.local hostnames. However, if you specify an externally-available custom domain for each Kubernetes cluster in a multi-Kubernetes-cluster deployment, mongod processes use hostnames in the following pattern:

      <pod-name>.<replica-set-name>-svc.<namespace>.svc.<externally-available-cluster-domain>
      

      Note

      You can set a custom cluster domain only when creating a Kubernetes cluster for a multi-Kubernetes-cluster deployment.

    • When you create a multi-Kubernetes-cluster deployment, use the spec.clusterSpecList.externalAccess.externalDomain setting to specify an external domain and instruct the Kubernetes Operator to configure hostnames for mongod processes in the following pattern:

      <pod-name>.<externalDomain>
      

      Note

      You can specify external domains only for new deployments. You can’t change external domains after you configure a multi-Kubernetes-cluster deployment.

      After you configure an external domain in this way, the Ops Manager Automation Agents and mongod processes use this domain to connect to each other.

  • Customize external services that the Kubernetes Operator creates for each Pod in the Kubernetes cluster. Use the global configuration in the spec.externalAccess settings and Kubernetes cluster-specific overrides in the spec.clusterSpecList.externalAccess.externalService settings.

  • Configure Pod hostnames in a DNS zone to ensure that each Kubernetes Pod hosting a mongod process can establish an external connection to the other mongod processes in the multi-Kubernetes-cluster deployment. A Pod is considered "exposed externally" when you can connect to its mongod process by using the <pod-name>.<externalDomain> hostname on ports 27017 (the default database port) and 27018 (the database port + 1). You may also need to configure firewall rules to allow TCP traffic on these ports, as in the sketch below.
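
For example, on GKE you might open these ports between the clusters' networks with a firewall rule similar to this sketch (the rule name is hypothetical; adjust the network and source ranges to your topology):

gcloud compute firewall-rules create allow-mongodb-inter-cluster \
  --network=default \
  --allow=tcp:27017-27018 \
  --source-ranges=<peer-cluster-cidrs>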

After you complete these prerequisites, you can create a multi-Kubernetes-cluster deployment without a service mesh.

Check Connectivity Across Clusters

Follow the steps in this procedure to verify that service FQDNs are reachable across Kubernetes clusters.

In this example, you deploy a sample application defined in sample-service.yaml across two Kubernetes clusters.
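
The following steps assume that CTX_CLUSTER_1 and CTX_CLUSTER_2 hold the kubectl context names of two of your clusters, for example:

export CTX_CLUSTER_1=$MDB_CLUSTER_1_FULL_NAME
export CTX_CLUSTER_2=$MDB_CLUSTER_2_FULL_NAME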

1. Create a namespace in each cluster.

Create a namespace in each of the Kubernetes clusters in which to deploy sample-service.yaml.

kubectl create --context="${CTX_CLUSTER_1}" namespace sample
kubectl create --context="${CTX_CLUSTER_2}" namespace sample

Note

In certain service mesh solutions, you might need to annotate or label the namespace.

2. Deploy the sample-service.yaml in both Kubernetes clusters.

kubectl apply --context="${CTX_CLUSTER_1}" \
   -f sample-service.yaml \
   -l service=helloworld1 \
   -n sample

kubectl apply --context="${CTX_CLUSTER_2}" \
   -f sample-service.yaml \
   -l service=helloworld2 \
   -n sample

3. Deploy the sample application on CLUSTER_1.

kubectl apply --context="${CTX_CLUSTER_1}" \
   -f sample-service.yaml \
   -l version=v1 \
   -n sample

4. Ensure CLUSTER_1 is running.

Check that the CLUSTER_1 hosting Pod is in the Running state.

kubectl get pod --context="${CTX_CLUSTER_1}" \
   -n sample \
   -l app=helloworld

5. Deploy the sample application on CLUSTER_2.

kubectl apply --context="${CTX_CLUSTER_2}" \
   -f sample-service.yaml \
   -l version=v2 \
   -n sample

6. Ensure CLUSTER_2 is running.

Check that the CLUSTER_2 hosting Pod is in the Running state.

kubectl get pod --context="${CTX_CLUSTER_2}" \
   -n sample \
   -l app=helloworld

7. Verify CLUSTER_1 can connect to CLUSTER_2.

Deploy a Pod in CLUSTER_1 and check that you can reach the sample application in CLUSTER_2.

kubectl run --context="${CTX_CLUSTER_1}" \
   -n sample \
   curl --image=radial/busyboxplus:curl \
   -i --tty -- \
   curl -sS helloworld2.sample:5000/hello

You should see output similar to this example:

Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8

8. Verify CLUSTER_2 can connect to CLUSTER_1.

Deploy a Pod in CLUSTER_2 and check that you can reach the sample application in CLUSTER_1.

kubectl run --context="${CTX_CLUSTER_2}" \
   -n sample \
   curl --image=radial/busyboxplus:curl \
   -i --tty -- \
   curl -sS helloworld1.sample:5000/hello

You should see output similar to this example:

Hello version: v1, instance: helloworld-v1-758dd55874-6x4t8

Review the Requirements for Deploying Ops Manager

As part of the Quick Start, you deploy an Ops Manager resource on the central cluster. To learn more, see Deploy an Ops Manager Resource on the Central Cluster and Connect to Ops Manager.

Prepare for TLS-Encrypted Connections

If you plan to secure your multi-Kubernetes-cluster deployment using TLS encryption, complete the following tasks to enable internal cluster authentication and generate TLS certificates for member clusters and the MongoDB Agent:

Note

You must possess the CA certificate and the key that you used to sign your TLS certificates.

1. Generate a TLS certificate for Kubernetes services.

If your deployment uses a service mesh, use one of the following options:

  • Generate a wildcard TLS certificate that covers hostnames of the services that the Kubernetes Operator creates for each Pod in the deployment.

    If you generate wildcard certificates, you can continue using the same certificates when you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.

    For example, add a hostname similar to the following format to the SAN (see the openssl sketch after this list):

    *.<namespace>.svc.cluster.local
    
  • Add SANs to the certificate for each Kubernetes service that the Kubernetes Operator generates for each Pod in each member cluster. In your TLS certificate, the SAN for each Kubernetes service must use the following format:

    <metadata.name>-<member_cluster_index>-<n>-svc.<namespace>.svc.cluster.local
    

    where n ranges from 0 to clusterSpecList[member_cluster_index].members - 1.
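
For the wildcard option, here is a minimal openssl sketch, assuming your CA material is in ca.crt and ca.key and the deployment runs in the mongodb namespace:

# Create a key and CSR, then sign a certificate whose SAN covers every
# service in the mongodb namespace.
openssl req -new -nodes -newkey rsa:4096 \
  -subj "/CN=mongodb-tls" \
  -keyout tls.key -out tls.csr
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 \
  -extfile <(printf "subjectAltName=DNS:*.mongodb.svc.cluster.local") \
  -out tls.crt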

2. Generate one TLS certificate for your project's MongoDB Agents.

For the MongoDB Agent TLS certificate:

  • The Common Name in the TLS certificate must not be empty.
  • The combined Organization and Organizational Unit in each TLS certificate must differ from the Organization and Organizational Unit in the TLS certificate for your replica set members.

To speed up creating TLS certificates for member Kubernetes clusters, we offer the setup_tls script. We don’t guarantee the script’s maintenance. If you choose to use the script, test it and adjust it to your needs. The script does the following:

  • Creates the cert-manager namespace in the connected cluster and installs cert-manager using Helm in the cert-manager namespace.
  • Installs a local CA using mkcert.
  • Downloads TLS certificates from downloads.mongodb.com and concatenates them with the CA file into a file named ca-chain.
  • Creates a ConfigMap that includes the ca-chain file.
  • Creates an Issuer resource, which cert-manager uses to generate certificates.
  • Creates a Certificate resource, which cert-manager uses to create a key object for the certificates.

To use the script:

1. Install mkcert.

Install mkcert on the machine you plan to run this script.

2. Set the context to the central cluster.

kubectl config use-context $MDB_CENTRAL_CLUSTER_FULL_NAME
kubectl config set-context --current --namespace=<metadata.namespace>

3. Download and run the setup_tls script.

curl https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/tools/multicluster/setup_tls.sh -o setup_tls.sh
bash setup_tls.sh

The output includes:

  • A secret containing the CA, named ca-key-pair.
  • A secret containing the server certificates on the central cluster, named clustercert-${resource}-cert.
  • A ConfigMap containing the CA certificates, named issuer-ca.

If you don't use a service mesh and rely on external domains instead, generate the TLS certificates as follows:

1. Generate a TLS certificate for SAN hostnames.

Use one of the following options:

  • Generate a wildcard TLS certificate whose SAN contains all the externalDomains that you configured. For example, add hostnames similar to the following format to the SAN:

    *.cluster-0.example.com, *.cluster-1.example.com
    

    If you generate wildcard certificates, you can continue using them when you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.

  • Generate a TLS certificate that includes each MongoDB replica set member's hostname in the SAN. For example, add hostnames similar to the following to the SAN:

    my-replica-set-0-0.cluster-0.example.com,
    my-replica-set-0-1.cluster-0.example.com,
    my-replica-set-1-0.cluster-1.example.com,
    my-replica-set-1-1.cluster-1.example.com
    

    If you generate an individual TLS certificate that contains all the specific hostnames, you must create a new certificate each time you scale up or rebalance nodes in the Kubernetes member clusters, for example for disaster recovery.

2. Generate one TLS certificate for your project's MongoDB Agents.

For the MongoDB Agent TLS certificate:

  • The Common Name in the TLS certificate must not be empty.
  • The combined Organization and Organizational Unit in each TLS certificate must differ from the Organization and Organizational Unit in the TLS certificate for your replica set members.

Important

The Kubernetes Operator uses kubernetes.io/tls secrets to store TLS certificates and private keys for Ops Manager and MongoDB resources. Starting in Kubernetes Operator version 1.17.0, the Kubernetes Operator doesn’t support concatenated PEM files stored as Opaque secrets.
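
For example, kubectl creates a secret of type kubernetes.io/tls (rather than Opaque) when you pass the certificate and private key separately; a sketch with a hypothetical secret name, run against the central cluster:

kubectl create secret tls my-replica-set-cert \
  --cert=tls.crt --key=tls.key \
  --context=$MDB_CENTRAL_CLUSTER_FULL_NAME \
  --namespace=mongodb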