Connect to a MongoDB Database Resource from Outside Kubernetes

The following procedure describes how to connect to a MongoDB resource deployed in Kubernetes from outside of the Kubernetes cluster.

Prerequisite

Compatible MongoDB Versions

For your databases to be accessed outside of Kubernetes, they must run MongoDB 4.2.3 or later.

Procedure

The following procedure walks you through the process of configuring external connectivity for your deployment by using the built-in configuration options in the Kubernetes Operator.

How you connect from outside of the Kubernetes cluster to a MongoDB resource that the Kubernetes Operator deployed depends on the resource type.

To connect to your Kubernetes Operator-deployed MongoDB standalone resource from outside of the Kubernetes cluster:

1

Deploy a standalone resource with the Kubernetes Operator.

If you haven’t deployed a standalone resource, follow the instructions to deploy one.

This procedure uses the following example:

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-standalone>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
            # Must match metadata.name in ConfigMap file
  credentials: <mycredentials>
  type: Standalone
...
2

Create an external service for the MongoDB Pod.

To connect to your standalone resource from an external resource, configure the spec.externalAccess setting:

externalAccess: {}

This setting instructs the Kubernetes Operator to create an external LoadBalancer service for the MongoDB Pod in your standalone resource. The external service provides an entry point for external connections. Adding this setting with no values creates an external service with the following default values:

  • Name: <pod-name>-svc-external. The name of the external service. You can't change this value.
  • Type: LoadBalancer. Creates an external LoadBalancer service.
  • Port: <Port Number>. A port for mongod.
  • publishNotReadyAddresses: true. Specifies that DNS records are created even if the Pod isn't ready. Don't set this to false for any database Pod.
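
The external service that the Kubernetes Operator generates is a standard Kubernetes Service. As a rough sketch only (the exact labels, selectors, and ports depend on your Operator version and any overrides you apply), the default service for a Pod named <my-standalone>-0 could look similar to the following:

---
kind: Service
apiVersion: v1
metadata:
  name: <my-standalone>-0-svc-external
spec:
  type: LoadBalancer
  publishNotReadyAddresses: true
  ports:
    - name: mongodb
      protocol: TCP
      port: 27017
      targetPort: 27017
  selector:
    statefulset.kubernetes.io/pod-name: <my-standalone>-0
...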

Optionally, if you need to add values to the service or override the default values, specify them in the spec.externalAccess.externalService setting.

For example, the following settings override the default values for the external service to configure your standalone resource to create NodePort services that expose the MongoDB Pod:

externalAccess:
  externalService:
    annotations:
      # cloud-specific annotations for the service
    spec:
      type: NodePort # default is LoadBalancer
      port: 27017
      # you can specify other spec overrides if necessary

Tip

To learn more, see Annotations and ServiceSpec in the Kubernetes documentation.

3

Verify the external services.

Run the following command to verify that the Kubernetes Operator created the external service for your standalone deployment.

$ kubectl get services

The command returns a list of services similar to the following output. For the database Pod, the Kubernetes Operator creates an external service named <pod-name>-svc-external. This service is configured according to the values and overrides you provide in the external service specification.

NAME                                  TYPE         CLUSTER-IP   EXTERNAL-IP       PORT(S)           AGE
<my-standalone>-0-svc-external   LoadBalancer   10.102.27.116    <lb-ip-or-fqdn>   27017:27017/TCP    8m30s

Depending on your cluster configuration or cloud provider, the external address of the LoadBalancer service is an externally accessible IP address or FQDN. You can use this IP address or FQDN to route traffic from your external domain.
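
To route traffic from your external domain, you typically create a DNS record that points at the address the cloud provider assigned to the LoadBalancer service. As one way to read that address, the following command assumes the provider reports a hostname (use .ip instead of .hostname if it reports an IP address):

kubectl get service <my-standalone>-0-svc-external \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'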

4

Test the connection to the standalone resource.

To connect to your deployment from outside of the Kubernetes cluster, use the MongoDB Shell (mongosh) and specify the MongoDB Pod address that you’ve exposed through the external domain.

Example

If you have an external FQDN of <my-standalone>.<external-domain>, you can connect to this standalone instance from outside of the Kubernetes cluster by using the following command:

mongosh "mongodb://<my-standalone>.<external-domain>"

Important

This procedure explains the least complicated way to enable external connectivity. In production, you can use other utilities to expose your deployment.

To connect to your Kubernetes Operator-deployed MongoDB replica set resource from outside of the Kubernetes cluster:

1

Deploy a replica set with the Kubernetes Operator.

If you haven’t deployed a replica set, follow the instructions to deploy one.

You must enable TLS for the replica set by providing a value for the spec.security.certsSecretPrefix setting. The replica set must use a custom CA certificate stored with spec.security.tls.ca.
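
As a sketch only, the corresponding security block in the replica set resource could look similar to the following; <prefix> is your secret name prefix and <custom-ca-configmap> is a placeholder for the object that stores your custom CA:

  security:
    certsSecretPrefix: <prefix>
    tls:
      enabled: true
      ca: <custom-ca-configmap>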

2

Create an external service for the MongoDB Pods.

To connect to your replica set from an external resource, configure the spec.externalAccess setting:

externalAccess: {}

This setting instructs the Kubernetes Operator to create an external LoadBalancer service for the MongoDB Pods in your replica set. The external service provides an entry point for external connections. Adding this setting with no values creates an external service with the following default values:

  • Name: <pod-name>-svc-external. The name of the external service. You can't change this value.
  • Type: LoadBalancer. Creates an external LoadBalancer service.
  • Port: <Port Number>. A port for mongod.
  • publishNotReadyAddresses: true. Specifies that DNS records are created even if the Pod isn't ready. Don't set this to false for any database Pod.

Optionally, if you need to add values to the service or override the default values, specify them in the spec.externalAccess.externalService setting.

For example, the following settings override the default values for the external service to configure your replica set to create NodePort services that expose the MongoDB Pods:

externalAccess:
  externalService:
    annotations:
      # cloud-specific annotations for the service
    spec:
      type: NodePort # default is LoadBalancer
      port: 27017
      # you can specify other spec overrides if necessary

Tip

To learn more, see Annotations and ServiceSpec in the Kubernetes documentation.

3

Add Subject Alternate Names to your TLS certificates.

Add each external DNS name to the certificate SAN.
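
If you want to check which names an existing certificate already includes, one option is to inspect its SAN list with openssl (assuming the certificate file is server.pem):

openssl x509 -in server.pem -noout -text | grep -A 1 "Subject Alternative Name"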

4

Verify the external services.

Run the following command to verify that the Kubernetes Operator created the external services for your replica set deployment.

$ kubectl get services

The command returns a list of services similar to the following output. For each database Pod in the replica set, the Kubernetes Operator creates an external service named <pod-name>-svc-external. This service is configured according to the values and overrides you provide in the external service specification.

NAME                                  TYPE         CLUSTER-IP   EXTERNAL-IP       PORT(S)           AGE
<my-replica-set>-0-svc-external   LoadBalancer   10.102.27.116    <lb-ip-or-fqdn>   27017:27017/TCP    8m30s

Depending on your cluster configuration or cloud provider, the external address of the LoadBalancer service is an externally accessible IP address or FQDN. You can use this IP address or FQDN to route traffic from your external domain.

5

Open your replica set resource YAML file.

6

Copy the sample replica set resource.

Change the settings of this YAML file to match your desired replica set configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-replica-set>
spec:
  members: 3
  version: "4.2.2-ent"
  type: ReplicaSet
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
  credentials: <mycredentials>
  persistent: true
  security:
    tls:
      enabled: true
  connectivity:
    replicaSetHorizons:
      - "example-website": "web1.example.com:30907"
      - "example-website": "web2.example.com:32350"
      - "example-website": "web3.example.com:31185"
...
7

Paste the copied example section into your existing replica set resource.

Open your preferred text editor and paste the object specification at the end of your resource file in the spec section.

8

Change the following settings to your preferred values.

spec.connectivity
Type: collection. Necessity: Conditional.

Add this parameter and values if you need your database to be accessed outside of Kubernetes. This setting allows you to provide different DNS settings within the Kubernetes cluster and from outside the cluster. The Kubernetes Operator uses split horizon DNS for replica set members, which allows communication both within the Kubernetes cluster and from outside Kubernetes.

You may add multiple external mappings per host.

Split Horizon Requirements

  • Make sure that each value in this array is unique.
  • Make sure that the number of entries in this array matches the value given in spec.members.
  • Provide a value for the spec.security.certsSecretPrefix setting to enable TLS. This method of using split horizons requires the Server Name Indication (SNI) extension of the TLS protocol.

spec.security.certsSecretPrefix
Type: string. Necessity: Required.

Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates. Example: devDb
9

Confirm the external hostnames and external service values in your replica set resource.

Confirm that the external hostnames in the spec.connectivity.replicaSetHorizons setting are correct.

External hostnames should match the DNS names of Kubernetes worker nodes. These can be any nodes in the Kubernetes cluster. If a Pod runs on a different node, Kubernetes routes the traffic to it internally.
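
To list the worker node names and their addresses so that you can choose hostnames for the horizon mappings, you can run:

kubectl get nodes -o wide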

Set the ports in spec.connectivity.replicaSetHorizons to the external service values.

Example

  security:
    tls:
      enabled: true
  connectivity:
    replicaSetHorizons:
      - "example-website": "web1.example.com:30907"
      - "example-website": "web2.example.com:32350"
      - "example-website": "web3.example.com:31185"
...
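
If you created NodePort services in the earlier step, you can look up the node port that Kubernetes assigned to each external service and use it as the port in spec.connectivity.replicaSetHorizons. For example, for the first member (the service name is illustrative):

kubectl get service <my-replica-set>-0-svc-external \
  -o jsonpath='{.spec.ports[0].nodePort}'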
10

Save your replica set config file.

11

Update and restart your replica set deployment.

In any directory, invoke the following Kubernetes command to update and restart your replica set:

kubectl apply -f <replica-set-conf>.yaml
12

Test the connection to the replica set.

In the development environment, for each host in a replica set, run the following command:

mongosh --host <my-replica-set>/web1.example.com \
  --port 30907 \
  --ssl \
  --sslAllowInvalidCertificates

Note

Don’t use the --sslAllowInvalidCertificates flag in production.

In production, for each host in a replica set, specify the TLS certificate and the CA so that client tools and applications can connect securely:

mongosh --host <my-replica-set>/web1.example.com \
  --port 30907 \
  --tls \
  --tlsCertificateKeyFile server.pem \
  --tlsCAFile ca.pem

If the connection succeeds, you should see:

Enterprise <my-replica-set> [primary]

To connect to your Kubernetes Operator-deployed MongoDB replica set resource from outside of the Kubernetes cluster with OpenShift:

1

Deploy a replica set with the Kubernetes Operator.

If you haven’t deployed a replica set, follow the instructions to deploy one.

You must enable TLS for the replica set by providing a value for the spec.security.certsSecretPrefix setting. The replica set must use a custom CA certificate stored with spec.security.tls.ca.

2

Configure services to ensure connectivity.

  1. Paste the following example services into a text editor:

    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: my-external-0
    spec:
      ports:
        - name: mongodb
          protocol: TCP
          port: 443
          targetPort: 27017
      selector:
        statefulset.kubernetes.io/pod-name: my-external-0
    
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: my-external-1
    spec:
      ports:
        - name: mongodb
          protocol: TCP
          port: 443
          targetPort: 27017
      selector:
        statefulset.kubernetes.io/pod-name: my-external-1
    
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: my-external-2
    spec:
      ports:
        - name: mongodb
          protocol: TCP
          port: 443
          targetPort: 27017
      selector:
        statefulset.kubernetes.io/pod-name: my-external-2
    
    ...
    

    Note

If the spec.selector has entries that target headless services or applications, OpenShift might create a software firewall rule that explicitly drops connectivity. Review the selectors carefully and consider targeting the StatefulSet Pod members directly, as shown in the example. Routes in OpenShift offer port 80 or port 443. This example service uses port 443.

  2. Change the settings to your preferred values.

  3. Save this file with a .yaml file extension.

  4. To create the services, invoke the following kubectl command on the services file you created:

    kubectl apply -f <my-external-services>.yaml
    
3

Configure routes to ensure TLS termination passthrough.

  1. Paste the following example routes into a text editor:

    ---
    apiVersion: v1
    kind: Route
    metadata:
      name: my-external-0
    spec:
      host: my-external-0.{redacted}
      to:
        kind: Service
        name: my-external-0
      tls:
        termination: passthrough
    ---
    apiVersion: v1
    kind: Route
    metadata:
      name: my-external-1
    spec:
      host: my-external-1.{redacted}
      to:
        kind: Service
        name: my-external-1
      tls:
        termination: passthrough
    ---
    apiVersion: v1
    kind: Route
    metadata:
      name: my-external-2
    spec:
      host: my-external-2.{redacted}
      to:
        kind: Service
        name: my-external-2
      tls:
        termination: passthrough
    
    ...
     
    

    Note

You must set TLS termination to passthrough. Passthrough preserves the TLS SNI negotiation with mongod, which mongod needs in order to respond with the correct horizon replica set topology for drivers to use. A quick way to verify passthrough is shown after this list.

  2. Change the settings to your preferred values.

  3. Save this file with a .yaml file extension.

  4. To create the routes, invoke the following kubectl command on the routes file you created:

    kubectl apply -f <my-external-routes>.yaml
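
As one way to confirm that a route passes the TLS handshake through to mongod instead of terminating it at the OpenShift router, you can open a raw TLS connection with SNI and check that the certificate returned belongs to your MongoDB member. Replace {redacted} with the domain that you manage:

openssl s_client -connect my-external-0.{redacted}:443 \
  -servername my-external-0.{redacted} </dev/null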
    
4

Add Subject Alternate Names to your TLS certificates.

Add each external DNS name to the certificate SAN.

5

Open your replica set resource YAML file.

6

Configure your replica set resource YAML file.

Use the following example to edit your replica set resource YAML file:

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-external
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 3
  version: 4.2.2-ent
  opsManager:
    configMapRef:
      name: {redacted}
  credentials: {redacted}
  persistent: false
  security:
    tls:
      # TLS must be enabled to allow external connectivity
      enabled: true
    authentication:
      enabled: true
      modes: ["SCRAM","X509"]
  connectivity:
    # The "localhost" routes are included to enable the creation of localhost
    # TLS SAN in the CSR, per OpenShift route requirements.
    # "ocroute" is the configured route in OpenShift.
    replicaSetHorizons:
      - "ocroute": "my-external-0.{redacted}:443"
        "localhost": "localhost:27017"
      - "ocroute": "my-external-1.{redacted}:443"
        "localhost": "localhost:27018"
      - "ocroute": "my-external-2.{redacted}:443"
        "localhost": "localhost:27019"

...

Note

OpenShift clusters require localhost horizons if you intend to use the Kubernetes Operator to create each CSR. If you manually create your TLS certificates, ensure you include localhost in the SAN list.

7

Change the settings to your preferred values.

spec.connectivity
Type: collection. Necessity: Conditional.

Add this parameter and values if you need your database to be accessed outside of Kubernetes. This setting allows you to provide different DNS settings within the Kubernetes cluster and from outside the cluster. The Kubernetes Operator uses split horizon DNS for replica set members, which allows communication both within the Kubernetes cluster and from outside Kubernetes.

You may add multiple external mappings per host.

Split Horizon Requirements

  • Make sure that each value in this array is unique.
  • Make sure that the number of entries in this array matches the value given in spec.members.
  • Provide a value for the spec.security.certsSecretPrefix setting to enable TLS. This method of using split horizons requires the Server Name Indication (SNI) extension of the TLS protocol.

spec.security.certsSecretPrefix
Type: string. Necessity: Required.

Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates. Example: devDb
8

Save your replica set config file.

9

Create the necessary TLS certificates and Kubernetes secrets.

Configure TLS for your replica set. Create one secret for the MongoDB replica set and one for the certificate authority. The Kubernetes Operator uses these secrets to place the TLS files in the pods for MongoDB to use.
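
A minimal sketch of creating these objects with kubectl follows; the object names, the <prefix>, the namespace, and the file and key names are illustrative assumptions and must match what your spec.security settings and Operator version expect:

# TLS certificate and key for the replica set members (secret name is illustrative)
kubectl create secret tls <prefix>-my-external-cert \
  --cert=server.crt --key=server.key -n mongodb

# Custom certificate authority (secret name and key are illustrative)
kubectl create secret generic custom-ca \
  --from-file=ca-pem=ca.pem -n mongodb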

10

Update and restart your replica set deployment.

In any directory, invoke the following Kubernetes command to update and restart your replica set:

kubectl apply -f <replica-set-conf>.yaml
11

Test the connection to the replica set.

The Kubernetes Operator should deploy the MongoDB replica set, configured with the horizon routes created for ingress. After the Kubernetes Operator completes the deployment, you can connect through the horizon addresses over TLS. If the certificate authority is not present on your workstation, you can view and copy it from a MongoDB Pod by using the following command:

oc exec -it my-external-0 -- cat /mongodb-automation/ca.pem

To test the connections, run the following command:

Note

In the following example, for each member of the replica set, use your replica set names and replace {redacted} with the domain that you manage.

mongosh --host my-external/my-external-0.{redacted} \
  --port 443 \
  --tls \
  --tlsAllowInvalidCertificates

Warning

Don’t use the --tlsAllowInvalidCertificates flag in production.

In production, for each host in a replica set, specify the TLS certificate and the CA so that client tools and applications can connect securely:

mongosh --host my-external/my-external-0.{redacted} \
  --port 443 \
  --tls \
  --tlsCertificateKeyFile server.pem \
  --tlsCAFile ca.pem

If the connection succeeds, you should see:

Enterprise my-external [primary]

To connect to your Kubernetes Operator-deployed MongoDB sharded cluster resource from outside of the Kubernetes cluster:

1

Deploy a sharded cluster with the Kubernetes Operator.

If you haven’t deployed a sharded cluster, follow the instructions to deploy one.

You must enable TLS for the sharded cluster by configuring the following settings:

spec.security.certsSecretPrefix
Type: string. Necessity: Required.

Add the <prefix> of the secret name that contains your MongoDB deployment's TLS certificates. Example: devDb

spec.security.tls.additionalCertificateDomains
Type: collection. Necessity: Optional.

List of every domain that should be added to the TLS certificates for each Pod in this deployment. When you set this parameter, every CSR that the Kubernetes Operator transforms into a TLS certificate includes a SAN in the form <pod name>.<additional cert domain>. Example: example.com
2

Create an external service for the mongos Pods.

To connect to your sharded cluster from an external resource, configure the spec.externalAccess setting:

externalAccess: {}

This setting instructs the Kubernetes Operator to create an external LoadBalancer service for the mongos Pods in your sharded cluster. The external service provides an entry point for external connections. Adding this setting with no values creates an external service with the following default values:

  • Name: <pod-name>-svc-external. The name of the external service. You can't change this value.
  • Type: LoadBalancer. Creates an external LoadBalancer service.
  • Port: <Port Number>. A port for mongod.
  • publishNotReadyAddresses: true. Specifies that DNS records are created even if the Pod isn't ready. Don't set this to false for any database Pod.

Optionally, if you need to add values to the service or override the default values, specify them in the spec.externalAccess.externalService setting.

For example, the following settings override the default values for the external service to configure your sharded cluster to create NodePort services that expose the mongos Pods:

externalAccess:
  externalService:
    annotations:
      # cloud-specific annotations for the service
    spec:
      type: NodePort # default is LoadBalancer
      port: 27017
      # you can specify other spec overrides if necessary

Tip

To learn more, see Annotations and ServiceSpec in the Kubernetes documentation.

3

Add Subject Alternate Names to your TLS certificates.

Add each external DNS name to the certificate SAN.

Each MongoDB host uses the following SANs:

<my-sharded-cluster>-<shard>-<pod-index>.<external-domain>
<my-sharded-cluster>-config-<pod-index>.<external-domain>
<my-sharded-cluster>-mongos-<pod-index>.<external-domain>

The mongos instance uses the following SAN:

<my-sharded-cluster>-mongos-<pod-index>-svc-external.<external-domain>

Configure the spec.security.tls.additionalCertificateDomains setting as shown in the following example. Each TLS certificate that you use must include the corresponding SAN for the shard, config server, or mongos instance. The Kubernetes Operator validates your configuration.

---
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: <my-sharded-cluster>
spec:
  version: "4.2.2-ent"
  opsManager:
    configMapRef:
      name: <configMap.metadata.name>
            # Must match metadata.name in ConfigMap file
  shardCount: 2
  mongodsPerShardCount: 3
  mongosCount: 2
  configServerCount: 3
  credentials: my-secret
  type: ShardedCluster
  externalAccess: {}
  security:
    tls:
      certsSecretPrefix: <prefix>
      additionalCertificateDomains:
         - "<external-domain>"
...
4

Verify the external services.

Run the following command to verify that the Kubernetes Operator created the external services for your sharded cluster deployment.

$ kubectl get services

The command returns a list of services similar to the following output. For each mongos instance in the cluster, the Kubernetes Operator creates an external service named <pod-name>-svc-external. This service is configured according to the values and overrides you provide in the external service specification.

NAME                                              TYPE         CLUSTER-IP     EXTERNAL-IP       PORT(S)           AGE
<my-sharded-cluster>-mongos-0-svc-external    LoadBalancer   10.102.27.116  <lb-ip-or-fqdn>   27017:27017/TCP    8m30s
<my-sharded-cluster>-mongos-1-svc-external    LoadBalancer   10.102.27.116  <lb-ip-or-fqdn>   27017:27017/TCP    8m30s

Depending on your cluster configuration or cloud provider, the external address of the LoadBalancer service is an externally accessible IP address or FQDN. You can use this IP address or FQDN to route traffic from your external domain. This example has two mongos instances, so the Kubernetes Operator creates two external services.

5

Test the connection to the sharded cluster.

To connect to your deployment from outside of the Kubernetes cluster, use the MongoDB Shell (mongosh) and specify the addresses for the mongos instances that you’ve exposed through the external domain.

Example

If you have external FQDNs of <my-sharded-cluster>-mongos-0-svc-external.<external-domain> and <my-sharded-cluster>-mongos-1-svc-external.<external-domain>, you can connect to this sharded cluster instance from outside of the Kubernetes cluster by using the following command:

mongosh "mongodb://<my-sharded-cluster>-mongos-0-svc-external.<external-domain>,<my-sharded-cluster>-mongos-1-svc-external.<external-domain>"