Kubernetes
ZTrust can be deployed on any Kubernetes container platform. The prerequisites for deploying ZTrust on Kubernetes are as follows.
Prerequisites
Kubernetes version 1.28.0 or higher
We recommend deploying production-grade ZTrust SSO on a Kubernetes cluster for the following reasons:
ZTrust SSO is not a stateless app. It maintains user sessions and a cache, and it requires a database.
ZTrust SSO uses Kubernetes’ service discovery to discover similar ZTrust pods and create ZTrust clusters automatically for HA.
Kubernetes can automatically restart pods, auto-rollout upgrades, and auto-scale pods with zero downtime.
Sticky sessions are handled more efficiently with Kubernetes and the NGINX ingress.
For a production, multi-tenant, HA-enabled ZTrust deployment, use Kubernetes.
Minimum vCPU & RAM
ZTrust SSO requires a minimum of 800 millicore CPUs and 600 MB of RAM during startup.
With a standard load, ZTrust consumes around 300 millicores of CPU and 1200 MB of RAM per pod.
The usage may vary depending on the number of users, active user sessions, and features used.
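As a reference, the figures above can be translated into container resource requests and limits. The values below are an illustrative starting point, not a ZTrust requirement; tune them for your own user count, active sessions, and features in use.

resources:
  requests:
    cpu: 800m        # covers the ~800 millicore startup spike
    memory: 1280Mi   # headroom above the ~1200 MB steady-state usage per pod
  limits:
    cpu: "2"
    memory: 2Gi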
Database
For a production-grade setup, ZTrust needs a database to persist the ZTrust configuration and user data. The following table lists the supported databases for ZTrust:
Database | Option Value | Tested Version
MariaDB Server | mariadb | 11.4
Microsoft SQL Server | mssql | 2022
MySQL | mysql | 8.4
Oracle Database | oracle | 23.5
PostgreSQL | postgres | 17
Amazon Aurora PostgreSQL | postgres | 16.8
It is recommended to provision 100 GB of storage for the SQL database for up to 100,000 users. Storage requirements grow as the number of users increases.
Ingress - NGINX (recommended)
Use an ingress for the following:
Session stickiness
ZTrust relies on session cookies during login and admin console use.
If requests bounce between different ZTrust pods, users may get logged out or see errors.
NGINX Ingress can enforce sticky sessions via cookie affinity, ensuring all user requests hit the same pod for the session.
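A minimal sketch of the cookie-affinity annotations on the NGINX Ingress resource; the cookie name and lifetime below are illustrative choices, not ZTrust requirements.

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "ZTRUST_AFFINITY"   # illustrative cookie name
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"           # affinity lifetime in seconds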
TLS termination & hostname handling
ZTrust requires a fixed external hostname (ZT_HOSTNAME); otherwise, redirects and OIDC flows fail.
NGINX Ingress handles:
TLS termination
Large tokens in the auth header, handled with NGINX annotations
Hostname routing. For a multi-organisation setup, NGINX makes it easy to serve multiple hosts from the same ZTrust instance. The NGINX hosts option can be leveraged to isolate organisation-specific traffic and route it to a shared ZTrust instance (Pod), as sketched below.
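For example, a single Ingress can route two organisation-specific hostnames to the same shared ZTrust service. The hostnames below are placeholders.

spec:
  rules:
  - host: org-a.auth.example.com       # organisation A
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: ztrust-sso
            port:
              number: 8443
  - host: org-b.auth.example.com       # organisation B
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: ztrust-sso
            port:
              number: 8443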
Reverse proxy hardening
Rate limiting on ingress traffic helps you manage your production ZTrust cluster efficiently.
Request/connection timeouts on incoming requests keep bandwidth usage, and therefore cost, low.
Large header/cookie handling (important for SAML/OIDC-based authentication, which often involves large cookies and large tokens)
Better DDoS protection
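The hardening measures above map to per-Ingress NGINX annotations along these lines; the limits and timeouts are example values to adapt to your traffic profile, not values recommended by ZTrust.

metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "20"              # requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "10"      # concurrent connections per client IP
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"  # seconds
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"     # room for large SAML/OIDC headers and cookies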
Load balancing & failover
If you run multiple ZTrust pod replicas, Ingress balances traffic automatically.
With ALB/NLB directly, you’d need extra setup for cookie stickiness.
NGINX gives finer control: health checks, failover, and pod draining during rolling upgrades.
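Health checks and pod draining come from the Deployment itself rather than from NGINX. The sketch below is hedged: the /health/ready and /health/live paths are assumptions, so replace them with whatever health endpoints your ZTrust image actually exposes.

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # keep all existing pods serving until a new pod is ready
      maxSurge: 1
  template:
    spec:
      containers:
      - name: ztrust-container
        readinessProbe:        # traffic is only routed to pods that pass this check
          httpGet:
            path: /health/ready    # assumed endpoint - adjust for your image
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 60
          periodSeconds: 10
        livenessProbe:         # restarts a pod that stops responding
          httpGet:
            path: /health/live     # assumed endpoint - adjust for your image
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 120
          periodSeconds: 30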
Compatibility with OIDC/SAML flows
OIDC & SAML redirect flows need absolute URLs and consistent session state.
Ingress simplifies this by presenting a single, stable public endpoint even if backend pods move around.
Custom routing & path handling
Many setups run ZTrust under a path (e.g., /auth).
NGINX Ingress handles path rewrites, forwarding, and securing access with annotations (e.g., nginx.ingress.kubernetes.io/rewrite-target).
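If you do serve ZTrust under a path such as /auth, a capture-group rewrite keeps the rest of the URL intact. This is a generic ingress-nginx pattern, not something specific to ZTrust:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: auth.example.com
    http:
      paths:
      - path: /auth(/|$)(.*)        # /auth/realms/... reaches the pod as /realms/...
        pathType: ImplementationSpecific
        backend:
          service:
            name: ztrust-sso
            port:
              number: 8443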
RabbitMQ – Version 3.12.2 (recommended)
ZTrust SSO communicates with the ZTrust Authenticator app through RabbitMQ using the MQTT protocol. Ensure that your RabbitMQ deployment meets the following requirements:
RabbitMQ version is 3.12.2
The TLS certificates are added to RabbitMQ, and MQTTS is configured.
A writable virtual host is configured, with the correct permissions on the topic exchanges for the RabbitMQ user.
For a production-grade RabbitMQ, it is recommended to provision 30 GB of disk space for the virtual host.
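If you choose to run RabbitMQ inside the cluster, one option is the RabbitMQ Cluster Operator, which lets you express the version, disk size, TLS secret, and MQTT plugin declaratively. This is only a sketch under that assumption; the topic-exchange permissions for the ZTrust user and the MQTTS listener still need to be configured for your environment.

apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: ztrust-mq
  namespace: ztrust-sso
spec:
  replicas: 1
  image: rabbitmq:3.12.2-management    # matches the recommended RabbitMQ version
  persistence:
    storage: 30Gi                      # recommended disk space
  tls:
    secretName: ztrust-mq-tls          # TLS certificates used for MQTTS
  rabbitmq:
    additionalPlugins:
    - rabbitmq_mqtt                    # enables the MQTT protocol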
SSL/TLS certificates
Certificates are required at runtime so that ZTrust SSO on Kubernetes serves traffic over TLS in a production-grade manner. Please ensure you have a valid CA-signed certificate for the ZTrust deployment. Alternatively, you can use a Let's Encrypt certificate in your non-production environment.
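If you want Kubernetes to obtain and renew the certificate for you, a common approach is cert-manager with a Let's Encrypt issuer. The sketch below assumes cert-manager is already installed; the issuer name, email, and staging endpoint are placeholders, and the resulting key pair is written into the ztrust-cert-secret Secret used later in this guide.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory   # staging endpoint for non-prod
    email: admin@example.com                                         # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ztrust-cert
  namespace: ztrust-sso
spec:
  secretName: ztrust-cert-secret       # the TLS secret referenced by the deployment and ingress below
  dnsNames:
  - auth.example.com
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer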
Deploy on Kubernetes
Once the prerequisites are satisfied, you are ready to deploy ZTrust SSO on your Kubernetes cluster. This installation uses the Kubernetes Deployment kind together with a Service and an Ingress. The following environment variables must be added to the ZTrust SSO deployment.
Variable | Description | Example / Allowed Values
ZT_DB | Database type (e.g., postgres, mysql, mariadb). | mariadb, mssql, mysql, oracle, postgres
ZT_DB_URL | Full JDBC URL string including host, port, and database name. | jdbc:postgresql://url:port/db-name
ZT_DB_USERNAME | Database username with privileges to access the DB. | <string>
ZT_DB_PASSWORD | Database password. | <string>
ZTRUST_ADMIN | Default admin username for the ZTrust Admin Console. | <string>
ZTRUST_ADMIN_PASSWORD | Default password for the admin user. | <string>
ZT_HOSTNAME | External hostname (base URL) that ZTrust uses to generate redirects, tokens, and OIDC/SAML endpoints. | auth.example.com
ZT_PROXY | Tells ZTrust how to handle proxy headers (such as X-Forwarded-For and X-Forwarded-Proto) when running behind reverse proxies or ingress controllers (NGINX, HAProxy, ALB, etc.). If ZTrust is behind a reverse proxy, use "edge". If the proxy terminates TLS and then re-encrypts traffic back to ZTrust, use "reencrypt". If the proxy does not terminate TLS but passes it through to ZTrust, use "passthrough". | edge, reencrypt, passthrough
PROXY_ADDRESS_FORWARDING | Same as ZT_PROXY, but useful on older Kubernetes and ZTrust versions. | true, false
ZT_HTTPS_CERTIFICATE_FILE | Path to the TLS/SSL certificate file (.crt or .pem) that ZTrust should use for HTTPS. The certificate can be mounted into the ZTrust deployment as a volume (this guide mounts a TLS Secret). | /etc/x509/https/tls.crt
ZT_HTTPS_CERTIFICATE_KEY_FILE | Path to the certificate private key file (.key or .pem) that ZTrust should use for HTTPS. The private key can be mounted into the ZTrust deployment as a volume (this guide mounts a TLS Secret). | /etc/x509/https/tls.key
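The deployment example below sets these variables as plain-text values for brevity. In practice, you may prefer to keep the database and admin credentials in a Kubernetes Secret and reference them with valueFrom.secretKeyRef; a minimal sketch with an illustrative Secret name follows.

apiVersion: v1
kind: Secret
metadata:
  name: ztrust-credentials       # illustrative name
  namespace: ztrust-sso
type: Opaque
stringData:
  ZT_DB_USERNAME: <DB Username>
  ZT_DB_PASSWORD: <DB Password>
  ZTRUST_ADMIN: <Admin User Name>
  ZTRUST_ADMIN_PASSWORD: <Admin Password>

Each variable can then be pulled from the Secret in the deployment, for example:

env:
- name: ZT_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: ztrust-credentials
      key: ZT_DB_PASSWORD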
Create a new namespace, "ztrust-sso", in your Kubernetes cluster using the following kubectl command.
kubectl create ns ztrust-sso
Create a Kubernetes TLS secret in the same namespace using the following command.
kubectl -n ztrust-sso create secret tls <secret-name> \
  --cert=<path-to-cert-file> \
  --key=<path-to-key-file>
Fill in the following "deployment.yaml" with your environment values. You can change the name of the deployment, labels, secrets, and volumes as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ztrust-sso
  namespace: ztrust-sso
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ztrust-sso
  template:
    metadata:
      labels:
        app: ztrust-sso
    spec:
      containers:
      - name: ztrust-container
        image: registry.prodevans.com/ztrust-sso/ztrust-sso:<tag>
        imagePullPolicy: IfNotPresent
        command:
        - /opt/ZTrust/bin/zt.sh
        - start
        env:
        - name: ZTRUST_ADMIN
          value: <Admin User Name>
        - name: ZTRUST_ADMIN_PASSWORD
          value: <Admin Password>
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: ZT_PROXY
          value: edge
        - name: ZT_DB
          value: postgres
        - name: ZT_DB_URL
          value: jdbc:postgresql://<url:port>/<db-name>
        - name: ZT_DB_USERNAME
          value: <DB Username>
        - name: ZT_DB_PASSWORD
          value: <DB Password>
        - name: ZT_HOSTNAME
          value: <auth.example.com>
        - name: ZT_HTTPS_CERTIFICATE_FILE
          value: /etc/x509/https/tls.crt
        - name: ZT_HTTPS_CERTIFICATE_KEY_FILE
          value: /etc/x509/https/tls.key
        ports:
        - containerPort: 8443
          protocol: TCP
        resources: {}
        volumeMounts:
        - mountPath: /etc/x509/https
          name: ztrust-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: <image pull secret name>
      securityContext: {}
      volumes:
      - name: ztrust-cert
        secret:
          defaultMode: 420
          secretName: ztrust-cert-secret
Then apply the "deployment.yaml" using:
kubectl -n ztrust-sso apply -f deployment.yaml
You should now see the pods roll out and move to the Running state in Kubernetes.
Next, create a Service for this deployment. You can use the following "service.yaml" file:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ztrust-sso
  name: ztrust-sso
  namespace: ztrust-sso
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: ztrust-sso
  sessionAffinity: None
  type: ClusterIP
Note the port number and selector, which must match the deployment you created in the previous step.
Then apply the service.yaml file using the following:
kubectl -n ztrust-sso apply -f service.yaml
You should be able to see the new service created in your Kubernetes cluster.
Now the service has to be exposed outside the cluster using an Ingress. To create an ingress in the same namespace, use the following YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffer-size: 32k
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  name: ztrust-sso
  namespace: ztrust-sso
spec:
  ingressClassName: nginx
  rules:
  - host: auth.example.com
    http:
      paths:
      - backend:
          service:
            name: ztrust-sso
            port:
              number: 8443
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - auth.example.com
    secretName: ztrust-cert-secret
Note the annotations above. The proxy-buffer-size annotation controls the header buffer size between the NGINX ingress and ZTrust. This is important for OIDC and SAML logins, since they often send very large cookies and headers. Setting it too low will result in a "400 Bad Request - Request Header Or Cookie Too Large" error.
The backend-protocol annotation tells NGINX to talk to ZTrust over HTTPS. Since we deployed ZTrust with TLS enabled inside the pod, the NGINX ingress must forward traffic to the pod over HTTPS. This is the recommended setup for a production-grade ZTrust deployment.
Apply this ingress.yaml using the following command:
kubectl -n ztrust-sso apply -f ingress.yaml
Now your ZTrust service should be accessible, and you should be able to navigate to the ZTrust Admin Console.