Above Baseline Technology Guidance
Overview
This document outlines guidance for using above baseline technologies within Party Bus environments. It focuses on databases and messaging tools that fall outside the supported services. You’ll find key considerations, risks, and expectations for teams choosing to deploy and manage these technologies independently, including the absence of support for maintenance, backups, and recovery.
Technologies discussed in this document:
- MongoDB
- RabbitMQ
- Redis
Above Baseline Objective
P1 defines an "Above Baseline Database" as a database deployment that is not listed in the Party Bus Service Catalog. The MDO team does NOT provide assistance, maintenance, backup, or restoration of these databases in any way. Product teams may deploy such databases in their own Kubernetes manifests at their own risk.
These criteria identify the actions a team needs to take to gain a P1 exception, allowing the use of an Above Baseline Party Bus database:
- The resources are paid for during contract negotiations with the CST.
- The team understands that they are 100% responsible for any outage or data loss.
- The team does not expect any support from Party Bus DevOps.
- If the Above Baseline resource degrades the Party Bus infrastructure in any way, it will be removed from our architecture.
- If you have not paid for an Above Baseline resource, you are not allowed to add it. If we see that the resource exists without authorization, it will be removed from our architecture.
- If you have paid for an Above Baseline resource, you are authorized to create the resource yourself with the instructions below, but you will receive zero support from Party Bus to add, maintain, back up, or assist with it in any other way.
DISCLAIMER
This document was created as a self-help resource for teams that must use technologies not listed on the Party Bus Service Catalog. Our recommendation is that you DO NOT implement anything that is not listed as supported on the Party Bus Service Catalog.
Above Baseline Summary
This document is for teams intending to use a database other than PostgreSQL and/or MySQL.
Party Bus supports PostgreSQL and MySQL databases. These databases, by default, have high availability, redundancy, and automated backups. Other databases are considered "Above Baseline."
These Above Baseline databases are deployed as a pod in your application namespace. You may add a deployment.yaml file to your app-manifests repository if you are comfortable doing so. This deployment.yaml can reference any image in Iron Bank. A service.yaml file must be added as well, and both should be integrated into the manifests via kustomize. If you need help creating these files, submit a Pipeline Issue ticket with the Help Desk.
Requests to increase pod resources will be evaluated on a case-by-case basis. If Party Bus DevOps identifies that the pod is being used for load testing, we will remove the resource from our architecture.
Kubernetes Manifests
The following manifest files will need to be added to the repository in order to deploy MongoDB. The steps outlined below will walk through how to deploy MongoDB across all ILs and environments (i.e., staging and production).
Clone your manifests repo locally by retrieving the Git location and running git clone <manifests.git location>.
Within your manifests directory, create a folder called 'mongodb' under 'base.'
In the 'mongodb' folder, create a file called "statefulset.yaml" with the following code:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: mongodb
  name: mongodb
spec:
  serviceName: mongodb
  template:
    spec:
      securityContext:
        fsGroup: 1337
      containers:
        - env:
            - name: MONGODB_SYSTEM_LOG_VERBOSITY
              value: '5'
            - name: MONGODB_DISABLE_SYSTEM_LOG
              value: 'no'
            - name: MONGODB_ENABLE_IPV6
              value: 'no'
            - name: MONGODB_ENABLE_DIRECTORY_PER_DB
              value: 'no'
          image: mongodb
          imagePullPolicy: IfNotPresent
          name: mongodb
          ports:
            - containerPort: 27017
              name: mongo
          resources:
            limits:
              cpu: 100m
              memory: 256Mi
            requests:
              cpu: 10m
              memory: 16Mi
          volumeMounts:
            - mountPath: /data/db
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mongodb
```
In the 'mongodb' folder, create a 'service.yaml' file with the following code:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  ports:
    - name: mongo
      port: 27017
      targetPort: 27017
```
In the 'mongodb' folder, create a 'pvc.yaml' file with the following code:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  labels:
    app: mongodb
  name: mongodb
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "8Gi"
```
In the 'mongodb' folder, create a 'kustomization.yaml' file with the following code:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: mongodb
resources:
  - statefulset.yaml
  - service.yaml
  - pvc.yaml
```
Once those files have been created, we need to add a new resource to the 'kustomization.yaml' in the 'base' folder:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  authentication: istio-auth
resources:
  - mongodb # Add this line. There are likely other entries under resources, so add it to the very end.
```
Now that the 'base' manifests have been created, a patch will need to be created under each IL you want MongoDB deployed to. This patch will allow you to override values specified in the 'base/mongodb/statefulset.yaml' file. Create a directory called 'patches' under '[il-env]/base.' In that new directory, create a file called "mongodb-statefulset-patch.yaml" with the following code:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  template:
    spec:
      imagePullSecrets:
        - name: code-il2-pull-creds
      containers:
        - name: mongodb
          envFrom:
            - secretRef:
                name: mongodb-creds
```
Under the '[il-env]/base' directory, update the "kustomization.yaml" file with the following code:
INFO
If this file already exists, you may just need to add the 'patchesStrategicMerge' section with the additional entry.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/
patchesStrategicMerge:
  - patches/mongodb-statefulset-patch.yaml
```
For each environment (staging, production) within each IL, a new image will need to be provided to point to the MongoDB one in Registry1 (Iron Bank). Please follow the steps below for each environment you want MongoDB deployed to. Example: for IL2 staging, update the file located at 'il2/overlays/staging/kustomization.yaml' (for IL4 production, go to the following file: il4/overlays/production/kustomization.yaml):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: mongodb # Add this section
    newName: registry1.dso.mil/ironbank/opensource/mongodb/mongodb
    newTag: "4.4.5"
```
At this point, you have configured MongoDB to be deployed within your namespace. However, since we specified a secret called 'mongodb-creds,' you will need to create a Help Desk ticket to have an MDO team member create the secret for you. See the section labeled 'Authentication' below for more information. In the meantime, you can still commit your changes to the 'master' branch, and Argo CD should pick it up automatically. Just note that you will see an error about 'mongodb-creds: secret not found' until it is created.
For the next step, you will need the kustomize tool. Mac and Linux users can retrieve it via the "Access the kustomize tool" link; Windows users can install it using Chocolatey.
To test that you do not have any syntax errors, you can run the following command locally:
```shell
# Format: kustomize build [path]
# If you configured MongoDB to be deployed in IL2 Staging, run:
kustomize build il2/overlays/staging
# If you configured MongoDB to be deployed in a different environment, update the path accordingly.
```
INFO
If the command above prints the Kubernetes manifest files to the console, then you are good to go. If not, there is likely an error in one of the files you created above.
Once you have verified that your manifests are syntactically correct, commit and push them to your manifests repository using Git.
Connection
To connect to your pod database, use the service name as the hostname and the service port as the port. Example for MongoDB: mongodb://mongodb:27017
You can also utilize the full DNS name (update the namespace to match your team name): mongodb.[namespace].svc.cluster.local:27017
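As an illustration, the full in-cluster URI can be assembled from the pieces described above. This is only a sketch: "my-team" is a placeholder namespace, while "mongodb" and 27017 come from the service.yaml created earlier.

```python
# Sketch: build the in-cluster MongoDB URI from namespace, service name, and port.
# "my-team" is a hypothetical namespace; replace it with your team's namespace.
def mongo_uri(namespace: str, service: str = "mongodb", port: int = 27017) -> str:
    return f"mongodb://{service}.{namespace}.svc.cluster.local:{port}"

print(mongo_uri("my-team"))
# -> mongodb://mongodb.my-team.svc.cluster.local:27017
```

You would then pass this URI to your driver of choice, e.g. pymongo.MongoClient(uri).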
Authentication
We do not require authentication for pod databases. If you decide that you want authentication enabled, the setup depends on your chosen database. For MongoDB, for example, you can inject the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD environment variables to enable auth. Because these values are secrets, they must be encrypted by the MDO team; request this by creating a Help Desk ticket.
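As a sketch, the secret behind 'mongodb-creds' could carry those variables as keys; since the statefulset patch already loads the secret via envFrom, each key becomes an environment variable in the pod. The values below are purely illustrative, and in practice the MDO team creates this secret for you in encrypted form rather than you committing one:

```yaml
# Illustrative only: the MDO team creates the real, encrypted secret.
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-creds
type: Opaque
stringData:
  MONGO_INITDB_ROOT_USERNAME: admin      # hypothetical value
  MONGO_INITDB_ROOT_PASSWORD: change-me  # hypothetical value
```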
Availability and Redundancy
We do not provide any high-availability or redundancy functionalities for these types of databases.
Backups
We do not provide any backup functionality for these types of databases.
If you do want to implement backups, that is the responsibility of the app team. We recommend you use one of the pods in your namespace to periodically get a dump from the database and then save it to S3.
You can request an S3 bucket be added to your app by following our IRSA S3 Guide.
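The periodic-dump approach described above could be sketched as a Kubernetes CronJob. This is an unsupported example under stated assumptions: the image name 'my-backup-image' and bucket 'my-team-backups' are hypothetical, and the image must contain both mongodump and the AWS CLI.

```yaml
# Sketch: nightly mongodump to S3. Image and bucket names are hypothetical.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongodb-backup
spec:
  schedule: "0 2 * * *"  # daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: my-backup-image  # must provide mongodump and the AWS CLI
              command:
                - /bin/sh
                - -c
                - |
                  mongodump --uri="mongodb://mongodb:27017" --archive=/tmp/dump.gz --gzip
                  aws s3 cp /tmp/dump.gz "s3://my-team-backups/mongodb-$(date +%F).gz"
```

The pod running this job would need S3 access, which is what the IRSA S3 Guide above provides.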