Not interested in using Kubernetes?

There are two flavours of Control Plane deployments: Remote and Kubernetes. This guide focuses on deploying a Control Plane on a Kubernetes cluster. Go to Remote - Deploy Control Plane to deploy the Control Plane on a Linux host instead. Keep in mind that in that case it will be necessary to prepare the host for the Controller as well.

Kubernetes - Deploy Control Plane Using potctl

Every Edge Compute Network ('ECN') starts with a Control Plane that allows us to manage our ECN's resources.

In this guide, our Control Plane will deploy a single Controller instance.

We use YAML to define ioFog resources

The following procedures will define resources in YAML for potctl to consume. Specification of those YAML resources can be found here.

Deploy a Control Plane on Kubernetes

Prepare your Keycloak Realms for Datasance PoT

We recommend going through the Keycloak Installation Guide before continuing here.
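The auth block in the template below is filled with values from that guide: the realm name, the realm key, and the Controller and ECN Viewer client settings. As a minimal sketch, assuming realmKey expects the realm's public key and that your Keycloak serves realm metadata under /realms (older releases prefix the path with /auth), the key could be fetched like this, using the same placeholder URL and realm name as the template:

curl -s https://example.com/realms/realm-name | jq -r '.public_key'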

Set up your database for a production environment

Datasance PoT supports both MySQL and PostgreSQL as an external database. You must provide an external database configuration if you want to deploy the Controller with more than one replica.

We recommend going through the Database Installation Guide before continuing here.
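For reference, a filled-in database block of the template below might look like the following for an external PostgreSQL instance; the host, user and password values are placeholders:

  database:
    provider: postgres
    user: pot
    host: postgres.example.com
    port: 5432
    password: change-me
    databaseName: pot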

Create a template of controlplane.yaml like so:

echo "---
apiVersion: datasance.com/v3
kind: KubernetesControlPlane
metadata:
name: albatros-1
spec:
iofogUser:
name: Foo
surname: Bar
email: user@domain.com
password: iht234g9afhe
config: ~/.kube/config
replicas:
controller: 2
database:
provider: mysql/postgres
user:
host:
port:
password:
databaseName: pot
auth:
url: https://example.com/
realm: realm-name
realmKey:
ssl: external
controllerClient: pot-controller
controllerSecret:
viewerClient: ecn-viewer
images:
pullSecret: pull-srect
operator: ghcr.io/datasance//operator:3.4.16
controller: ghcr.io/datasance/controller:3.4.9
portManager: ghcr.io/datasance/port-manager:3.1.6
proxy: ghcr.io/datasance/proxy:3.1.1
router: ghcr.io/datasance/router:3.2.5
services:
controller:
type: LoadBalancer/ClusterIP
# annotations:
# service.beta.kubernetes.io/azure-load-balancer-internal: "true"
proxy:
type: LoadBalancer/oadBalancer
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
router:
type: LoadBalancer/ClusterIP
# annotations:
# service.beta.kubernetes.io/azure-load-balancer-internal: "true"
controller:
ecnViewerUrl: https://
https: true
secretName:
ecnViewerPort: 8008
router:
internalSecret:
amqpsSecret:
requireSsl: "yes"
saslMechanisms: EXTERNAL
authenticatePeer: "yes"
proxy:
serverName:
transport: tls
ingresses:
controller:
annotations:
# cert-manager.io/cluster-issuer: letsencrypt
# nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
# nginx.ingress.kubernetes.io/backend-protocol: "https"
ingressClassName: nginx
host:
secretName:
router:
address:
messagePort: 5672
interiorPort: 55672
edgePort: 45672" > /tmp/controlplane.yaml

Make sure to specify the correct value for the config field. Here we implicitly use the default namespace. Note that potctl deploys to the Kubernetes namespace it is configured to use, either through the -n flag or through the default namespace set via potctl configure current-namespace .... This means that by following these examples as-is, we end up installing the Control Plane in the default namespace on the cluster. It is therefore recommended to use a dedicated namespace instead, for example as sketched below.
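To target a dedicated namespace at the deploy step below, either pass it on the command line or set it as the current namespace first; the namespace name ecn-prod is purely illustrative:

potctl deploy -f /tmp/controlplane.yaml -n ecn-prod
# or set it once so that subsequent potctl commands use it
potctl configure current-namespace ecn-prod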

Once we have edited the fields to our liking, we can go ahead and run:

potctl deploy -f /tmp/controlplane.yaml

Naturally, we can also use kubectl to see what is happening on the Kubernetes cluster.

kubectl get all
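If the Control Plane was installed into a dedicated namespace, scope the query to it and watch the pods come up; the namespace name is again illustrative:

kubectl get all -n ecn-prod
# watch the pods until they reach the Running state
kubectl get pods -n ecn-prod -w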

The next section covers how to do the same thing we just did, but on a remote host instead of a Kubernetes cluster, so feel free to skip ahead.

Verify the Deployment

We can use the following commands to verify the Control Plane is up and running:

potctl get controllers
potctl describe controller albatros-1
potctl describe controlplane
Where to go from here?

With our Control Plane up and running, we can now go to the Setup Agents guide to deploy our Agents and finalize the ECN deployment.
