Implementing an HTTPS Web Server for GIS Adapter
The GIS Adapter facilitates the sharing of an electric network model from a Smallworld source system to an ADMS system. This process ensures that the network model is managed and updated centrally, allowing for reuse across various applications. The GIS Adapter product supports the initial transfer of network data into the target system and handles subsequent incremental updates as the source system's network evolves. Defined workflows streamline the entire network model exchange, enabling the identification and resolution of any discrepancies to maintain synchronization between the network models.
The data exchange is supported in two modes.
Export to a shared network location that is accessible to both GIS Adapter and the ADMS. This output mode can be used only when the job server runs in a Windows environment, that is, when the job server is deployed outside the Kubernetes cluster.
Upload to an HTTPS file web server. The server accepts HTTPS REST PUT requests from GIS Adapter to write files, and serves them to the ADMS via HTTPS REST GET requests or SFTP. In this mode, GIS Adapter uploads files directly to the HTTPS REST-based file server endpoint via Bifrost in the GSS framework. This file output mode must be used when the job server is deployed as a pod within the Kubernetes cluster.
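For illustration only, the request pattern looks roughly like the sketch below. The upload is actually performed by the GIS Adapter job engine, and the host name and file name are placeholders matching the setup built later in this article.

# GIS Adapter side: write a CIM extract with an HTTPS REST PUT
curl -k -T circuit_123.xml https://nginx.rhel-k8s-server:30443/upload/circuit_123.xml

# ADMS side: retrieve the same file with an HTTPS REST GET
curl -k -o circuit_123.xml https://nginx.rhel-k8s-server:30443/upload/circuit_123.xml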
In my experience, organizations often opt for a shared network location that's accessible to both GIS and ADMS systems. However, it's common for GIS and ADMS to reside in separate networks, necessitating additional processes to transfer files between them without disrupting ADMS web service responses. This introduces complexity and maintenance challenges, particularly when personnel transition between roles. While this approach may seem straightforward initially, it can lead to various unnecessary issues for stakeholders. The primary reason for companies choosing this option is often a shortage of experienced Kubernetes professionals.
Kubernetes has undoubtedly revolutionized the way we deploy and manage containerized applications, offering scalability, high availability, and portability at scale. However, its complexity and operational overhead pose significant challenges, especially for small teams or organizations with limited resources. By carefully weighing the pros and cons of Kubernetes and investing in proper training and infrastructure, you can harness its power to streamline your application deployment and operations.
Regrettably, GE does not provide a preconfigured HTTPS web server for GIS Adapter, which is what prompted this article. Here, we will use the official nginx image to build a file web server that hosts the CIM files extracted by GIS Adapter job engine sessions. Without delay, let's delve into the implementation.
Steps
It is presumed that you possess the requisite expertise for deploying GSS and GIS Adapter.
Create a GSS certificate with a Subject Alternative Name (SAN) for the nginx deployment.
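Your GSS installation may have its own certificate tooling; if you only need a self-signed certificate for a lab environment, openssl can produce one with the SAN included. A minimal sketch, assuming the host name nginx.rhel-k8s-server used throughout this article and OpenSSL 1.1.1 or newer (for -addext):

# Self-signed certificate and key with the SAN expected by the ingress (10-year validity)
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout nginx.key -out nginx.crt \
  -subj "/CN=nginx.rhel-k8s-server" \
  -addext "subjectAltName=DNS:nginx.rhel-k8s-server"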
💡 In this article, we deploy the file web server at https://nginx.rhel-k8s-server:30443. You will need to adjust the code according to the Subject Alternative Name (SAN) used when creating the certificate.
💡 If you generate a new certificate in an environment where a GSS production instance is already operational, you must recreate the bifrost-ingress-tls secret in the GSS production namespace, or whichever namespace you deployed GSS to. Additionally, if your deployment is on-premises, you will need to recreate the nexus-tls secret in the nexus namespace.
💡 You can deploy nginx without creating a new certificate, but you will then need to modify the Bifrost ingress accordingly and remove the cim-fserver-ingress definition from the YAML file provided below. I refrained from this approach because GE recommends keeping custom deployments separate from the deployments GE provides; otherwise, it would have been the simpler option.
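If you do need to recreate the bifrost-ingress-tls secret, the standard kubectl commands are sufficient. A minimal sketch, assuming the new certificate and key are in nginx.crt and nginx.key and that GSS runs in the eo-gisa namespace used later in this article:

kubectl -n eo-gisa delete secret bifrost-ingress-tls
kubectl -n eo-gisa create secret tls bifrost-ingress-tls --cert=nginx.crt --key=nginx.key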
Update the environment variables for the Desktop Sessions to align with the new web endpoint.

SW_GISA_OUTPUT_MODE=web_endpoint
SW_GISA_OUTPUT_PATH=gisa_file_transfer
SW_GISA_WEB_PATH=https://nginx.rhel-k8s-server:30443/upload/
Revise the server_config_gis_adapter.json file located in the Bifrost config directory within Kubernetes (K8s).
Generate a file named cim-file-webserver.yaml containing the following content.
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: cim-nginx
spec: {}
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: cim-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
        - name: nginx-html-cm
          mountPath: /usr/share/nginx/html
        - name: nginx-upload
          mountPath: /usr/share/nginx/upload
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
      - name: nginx-html-cm
        configMap:
          name: nginx-html-cm
      - name: nginx-upload
        persistentVolumeClaim:
          claimName: nginx-pvc-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc-claim
  namespace: cim-nginx
spec:
  storageClassName: nfs-storage-class
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
data:
  default.conf: |+
    server {
        listen       80;
        listen  [::]:80;
        server_name  localhost;
        underscores_in_headers on;

        location /cim-server {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            rewrite ^/cim-server?$ /index.html break;
        }

        location ~ "/upload/([0-9_a-zA-Z-.]*)$" {
            root /usr/share/nginx;
            client_body_temp_path /tmp;
            dav_methods PUT DELETE MKCOL COPY MOVE;
            create_full_put_path on;
            dav_access group:rw all:r;
            client_body_in_file_only on;
            client_max_body_size 500M;
            autoindex on;
            autoindex_exact_size off;
            autoindex_format html;
            autoindex_localtime on;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: nginx-config
  namespace: cim-nginx
---
apiVersion: v1
data:
  index.html: |+
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="/upload/">Cim Extracts</a>.</p>
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
  50x.html: |+
    <!DOCTYPE html>
    <html>
    <head>
    <title>Error</title>
    <style>
    html { color-scheme: light dark; }
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
    </style>
    </head>
    <body>
    <h1>An error occurred.</h1>
    <p>Sorry, the page you are looking for is currently unavailable.<br/>
    Please try again later.</p>
    <p>If you are the system administrator of this resource then you should check the error log for details.</p>
    <p><em>Faithfully yours, nginx.</em></p>
    </body>
    </html>
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: nginx-html-cm
  namespace: cim-nginx
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: cim-file-webserver-service
  namespace: cim-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: cim-file-webserver-service
  namespace: eo-gisa
spec:
  type: ExternalName
  externalName: cim-file-webserver-service.cim-nginx.svc.cluster.local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Content-Type-Options: nosniff";
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
  generation: 1
  name: cim-fserver-ingress
  namespace: eo-gisa
spec:
  rules:
  - host: nginx.rhel-k8s-server
    http:
      paths:
      - backend:
          service:
            name: cim-file-webserver-service
            port:
              number: 80
        path: /cim-server
        pathType: Prefix
      - backend:
          service:
            name: cim-file-webserver-service
            port:
              number: 80
        path: /upload/
        pathType: Prefix
  tls:
  - hosts:
    - nginx.rhel-k8s-server
    secretName: bifrost-ingress-tls
status:
  loadBalancer: {}
---
💡 Nginx will be deployed on port 80.
💡 In the provided code, the GSS namespace is set to eo-gisa, while the nginx deployment is installed in the cim-nginx namespace. Adjust these namespaces according to your environment's namespace configuration.
💡 Nginx will be accessible at https://nginx.rhel-k8s-server:30443/cim-server
Apply cim-file-webserver.yaml to the GSS cluster.
kubectl apply -f cim-file-webserver.yaml
After applying the manifest file, the nginx pod should be running in the cim-nginx namespace.
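A quick check from the command line:

kubectl get pods -n cim-nginx
# Expect roughly: nginx-<hash>   1/1   Running   0   <age>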
Completed! Accessing https://nginx.rhel-k8s-server:30443/cim-server should now present the nginx welcome screen.
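You can verify the same thing from a shell; the -k flag skips certificate verification, which you will need if the certificate is self-signed:

curl -k https://nginx.rhel-k8s-server:30443/cim-server
# Returns the "Welcome to nginx!" page defined in the nginx-html-cm ConfigMap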
Initiate a desktop session and proceed to export a circuit.
Upon successful export, clicking "Cim Extracts" on the nginx welcome page should display a list of all CIM files uploaded to nginx from the GIS Adapter job session in K8s.
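The same listing is exposed through the REST interface, which is what the ADMS consumes. A minimal sketch, with a placeholder file name:

# Directory listing of uploaded extracts (nginx autoindex)
curl -k https://nginx.rhel-k8s-server:30443/upload/

# Download one extract; replace the placeholder with a file name from the listing
curl -k -o extract.xml https://nginx.rhel-k8s-server:30443/upload/<cim-extract-file>.xml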
In a development environment where POA mock services are set up in SoapUI, you should receive the consumer request and be able to verify the new download URL sent to the ADMS.
This demonstration underscores the simplicity of working with Smallworld and Kubernetes. Kudos to GE for pioneering Kubernetes integration for Smallworld applications.