65. Deploying a custom Java server to the cloud!

In this post we explore how Terraform was used to deploy the application to the cloud.

The cloud of choice was Azure: https://azure.microsoft.com/en-gb

This is mainly because I will also be integrating with PlayFab services, although there are other great options out there too.

Deploying our server to the Azure cloud

Using Terraform to deploy our services

What IS Terraform?

https://www.terraform.io

Infrastructure automation to provision and manage resources in any cloud or data center.

I use Terraform so that if I decide to switch cloud providers in the future, it shouldn’t be overly complicated to do so.

The GitHub project contains a README for the Terraform part:

https://github.com/yazoo321/mmo_server_with_micronaut/tree/master/terraform

What are the core components that we’re trying to spawn?

  • MongoDB
  • Kafka
  • Redis
  • application VM

We will also need several other things, such as container registries and load balancers, which will be handled below.

Getting started

First of all, we create the main.tf file, which defines all the connections we’ll need, such as providers and API login info.

provider "azurerm" {
  features {}
  subscription_id = var.azure_subscription_id
  client_id       = var.azure_client_id
  client_secret   = var.azure_client_secret
  tenant_id       = var.azure_tenant_id
}

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.main.kube_config[0].host
  username               = azurerm_kubernetes_cluster.main.kube_config[0].username
  password               = azurerm_kubernetes_cluster.main.kube_config[0].password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate)
}

variable "azure_subscription_id" {
  description = "Subscription for azure"
  type        = string
}
variable "azure_client_id" {
  description = "Client/App ID"
  type        = string
}
variable "azure_client_secret" {
  description = "Client Secret / Password"
  type        = string
}
variable "azure_tenant_id" {
  description = "Tenant ID"
  type        = string
}

Here you can see that it references Azure, so I need to provide the Azure credentials.

I store the actual parameter values in a file called terraform.tfvars.

This file is NOT in the repository as it contains sensitive information.

Folder structure for terraform
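For reference, these are the Terraform files referenced throughout this post (the repository may contain a few more):

terraform/
  main.tf            # providers and credential variables
  terraform.tfvars   # actual credential values (not committed)
  resource_group.tf  # resource group and location
  aks.tf             # AKS cluster, container registry, ACR pull role
  network.tf         # virtual network, subnet, NIC
  app_vm.tf          # application VM, public IP, LoadBalancer service
  mongo.tf           # MongoDB volume, claim and service
  redis.tf           # Redis volume, claim and service
  zookeeper.tf       # Zookeeper volume, claim and service
  kafka.tf           # Kafka volume, claim, config map and service
  k8s_deployment.tf  # Kubernetes deployments for all of the above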

Enter your credentials like this:
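A minimal sketch (placeholder values only; the variable names come from main.tf above):

azure_subscription_id = "00000000-0000-0000-0000-000000000000"
azure_client_id       = "00000000-0000-0000-0000-000000000000"
azure_client_secret   = "<your-client-secret>"
azure_tenant_id       = "00000000-0000-0000-0000-000000000000"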

You may also store them in environment variables, which would be even better. For me these are test environments, so it’s not an issue.

Getting the credentials

As you can see, we will need four pieces of information. First, you need to register with Azure and log in: https://portal.azure.com/#home

I’d suggest you create a new subscription, like I have, to use for the project:

Create a new subscription for your project

To create a new subscription, navigate to Subscriptions and click ‘Add’, as shown below.

Click add on new subscription

Once you’ve created it, you can find the Subscription ID there (see the screenshot above).

This is your first bit of information.

Next, you will want to create a new service principal (IAM role) for Terraform to create the resources with.

To do that, you will need to run a command with the Azure CLI (az):

az ad sp create-for-rbac --name "<service-principal-name>" --role="Owner" --scopes="/subscriptions/<subscription-id>"

Change <service-principal-name> to something of your choice; mine was open-mmo-principal or similar.

Replace <subscription-id> with the ID from your previous step. This will create the service principal and return the other credentials you need (sample output below), including:

  • client_id
  • client_secret (will be referred to as password)
  • tenant id
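For reference, the command prints the new service principal as a small JSON object (field names as documented for the Azure CLI; values elided here, and the arrows are just annotations showing which tfvars variable each field maps to):

{
  "appId": "...",                              -> azure_client_id
  "displayName": "<service-principal-name>",
  "password": "...",                           -> azure_client_secret
  "tenant": "..."                              -> azure_tenant_id
}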

If you don’t have az installed on your machine, follow instructions here: https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-windows?tabs=azure-cli

Populate all of these values in terraform.tfvars.

Note that the IAM role is created with: --role="Owner"

The Contributor role is almost enough, but it runs into issues when adding permissions between ACR and AKS, so I upgraded it to Owner.

Creating the AKS (Azure Kubernetes Service) cluster

Now we’ve gone over main.tf and the credentials in terraform.tfvars. Next, let’s look at our AKS configuration. This is found in aks.tf.

resource "azurerm_kubernetes_cluster" "main" {
  name                = "myAKSCluster"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  dns_prefix          = "myakscluster"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_B2s"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    environment = "production"
  }
}

resource "azurerm_container_registry" "acr" {
  name                = "openmmoregistry"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  sku                 = "Basic"
}

resource "azurerm_role_assignment" "acrpull" {
  scope                = azurerm_container_registry.acr.id
  role_definition_name = "ACRPull"
  principal_id         = azurerm_kubernetes_cluster.main.kubelet_identity[0].object_id
}

output "client_certificate" {
  value     = azurerm_kubernetes_cluster.main.kube_config[0].client_certificate
  sensitive = true
}

output "kube_config" {
  value = azurerm_kubernetes_cluster.main.kube_config_raw
  sensitive = true
}

This creates the Kubernetes cluster for us: resource "azurerm_kubernetes_cluster" "main". It will be named myAKSCluster.

The location (region) is defined in resource_group.tf, which also creates the resource group where we will spawn our resources.

resource "azurerm_resource_group" "main" {
  name     = "myGameResourceGroup"
  location = "UK South"
}

Change the location to a region close to you.

My AKS has a vm size of: vm_size = "Standard_B2s"

Change this to suit your requirements. There is a constraint here requiring a relatively large VM size, so I chose Standard_B2s for mine, but there may be more suitable options.
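If you want to check which VM sizes are available in your region, the Azure CLI can list them (uksouth is the CLI name for the UK South region used in resource_group.tf above):

az vm list-sizes --location uksouth --output table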

If you require IPv6, as I perhaps will, you will also need to add a network_profile block and a Standard load balancer instead of the Basic one mine is configured with; a sketch is shown below.

I keep it as Basic to reduce costs.
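As a rough, untested sketch (argument names from the azurerm provider docs; double-check against your provider version), a dual-stack setup inside the azurerm_kubernetes_cluster resource would look something like:

network_profile {
  network_plugin    = "azure"
  load_balancer_sku = "standard"
  ip_versions       = ["IPv4", "IPv6"]
}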

Azure container registry

As part of aks.tf, you will notice there is a resource "azurerm_container_registry" "acr" block.

This is the container registry that will hold the Docker images for our main application.

To give our cluster access to it, you will need to grant it the necessary pull role:

resource "azurerm_role_assignment" "acrpull" {

Note that the IAM role Terraform executes the commands with will require Owner privileges for this role assignment. I was stuck here for a while, as I originally created it with the Contributor role.

output "kube_config" { will provide us with k8 credentials to use locally.

Adding MongoDB config

This can be found in mongo.tf.

resource "kubernetes_persistent_volume" "mongo" {
  metadata {
    name = "mongo-pv"
  }
  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      host_path {
        path = "/mnt/data/mongo"
      }
    }
    storage_class_name = "manual"
  }
}

resource "kubernetes_persistent_volume_claim" "mongo" {
  metadata {
    name = "mongo-pvc"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "1Gi"
      }
    }
    storage_class_name = "manual"
  }
}

resource "kubernetes_service" "mongo" {
  metadata {
    name = "mongo-service"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    selector = {
      app = "mongo"
    }
    port {
      port          = 27017
      target_port   = 27017
    }
    type = "ClusterIP"
  }
}

The first two blocks:

  • resource "kubernetes_persistent_volume" "mongo"
  • resource "kubernetes_persistent_volume_claim" "mongo"

configure the storage options for MongoDB. The next block, resource "kubernetes_service" "mongo", exposes it as a service for us to use.

The deployment of mongo occurs in k8s_deployment.tf. The specific part is:

resource "kubernetes_deployment" "mongo" {
  metadata {
    name      = "mongo"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "mongo"
      }
    }
    template {
      metadata {
        labels = {
          app = "mongo"
        }
      }
      spec {
        container {
          name  = "mongo"
          image = "mongo:latest"
          port {
            container_port = 27017
          }
          env {
            name  = "MONGO_INITDB_ROOT_USERNAME"
            value = "mongo_mmo_server"
          }
          env {
            name  = "MONGO_INITDB_ROOT_PASSWORD"
            value = "mongo_password"
          }
        }
      }
    }
  }
}

Here you can define the env vars used by the image, which include things like the MongoDB username and password. Rather than hard-coding them, you can also reference them from your local environment variables or from terraform.tfvars.
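For example, a sketch wiring the same env blocks to Terraform variables could look like this (the variable names mongo_root_username / mongo_root_password are my own suggestion, not from the repository):

variable "mongo_root_username" {
  type    = string
  default = "mongo_mmo_server"
}

variable "mongo_root_password" {
  type      = string
  sensitive = true
}

# ...then, inside the mongo container block in k8s_deployment.tf:
env {
  name  = "MONGO_INITDB_ROOT_USERNAME"
  value = var.mongo_root_username
}
env {
  name  = "MONGO_INITDB_ROOT_PASSWORD"
  value = var.mongo_root_password
}

You would then set mongo_root_password in terraform.tfvars alongside the Azure credentials.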

Adding Redis config

Redis config can be found in redis.tf.

resource "kubernetes_persistent_volume" "redis" {
  metadata {
    name = "redis-pv"
  }
  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      host_path {
        path = "/mnt/data/redis"
      }
    }
    storage_class_name = "manual"
  }
}

resource "kubernetes_persistent_volume_claim" "redis" {
  metadata {
    name = "redis-pvc"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "1Gi"
      }
    }
    storage_class_name = "manual"
  }
}

resource "kubernetes_service" "redis" {
  metadata {
    name      = "redis"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    selector = {
      app = "redis"
    }
    port {
      port        = 6379
      target_port = 6379
    }
    type = "ClusterIP"
  }
}

Similar to mongo, this file manages the storage for redis.

It also exposes the service for us to use in our app.

Like mongo, the deployment of it occurs in k8s_deployment.tf.

resource "kubernetes_deployment" "redis" {
  metadata {
    name      = "redis"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "redis"
      }
    }
    template {
      metadata {
        labels = {
          app = "redis"
        }
      }
      spec {
        container {
          name  = "redis"
          image = "redis:latest"
          port {
            container_port = 6379
          }
          command = ["redis-server", "--save", "20", "1", "--loglevel", "warning"]
          volume_mount {
            name       = "redis-data"
            mount_path = "/data"
          }
        }
        volume {
          name = "redis-data"
          empty_dir {}
        }
      }
    }
  }
}

Adding Zookeeper config

Zookeeper is required for Kafka. The relevant config can be found in zookeeper.tf.

resource "kubernetes_persistent_volume" "zookeeper" {
  metadata {
    name = "zookeeper-pv"
  }
  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      host_path {
        path = "/mnt/data/zookeeper"
      }
    }
    storage_class_name = "manual"
  }
}

resource "kubernetes_persistent_volume_claim" "zookeeper" {
  metadata {
    name = "zookeeper-pvc"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "1Gi"
      }
    }
    storage_class_name = "manual"
  }
}

resource "kubernetes_service" "zookeeper" {
  metadata {
    name      = "zookeeper"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    selector = {
      app = "zookeeper"
    }
    port {
      port        = 2181
      target_port = 2181
    }
    type = "ClusterIP"
  }
}

You can see it’s following a pattern: we define the storage options and the service here, and the deployment again occurs in k8s_deployment.tf.

resource "kubernetes_deployment" "zookeeper" {
  metadata {
    name      = "zookeeper"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "zookeeper"
      }
    }
    template {
      metadata {
        labels = {
          app = "zookeeper"
        }
      }
      spec {
        container {
          name  = "zookeeper"
          image = "confluentinc/cp-zookeeper:latest"
          port {
            container_port = 2181
          }
          env {
            name  = "ZOOKEEPER_CLIENT_PORT"
            value = "2181"
          }
          env {
            name  = "ZOOKEEPER_TICK_TIME"
            value = "2000"
          }
          volume_mount {
            name       = "zookeeper-data"
            mount_path = "/var/lib/zookeeper"
          }
        }
        volume {
          name = "zookeeper-data"
          empty_dir {}
        }
      }
    }
  }
}

Adding Kafka config

Kafka config can be found in kafka.tf. I was planning to have SASL_PLAINTEXT enabled; however, I had some issues with it, so to unblock my deployments I will keep it as PLAINTEXT for now and add SASL a bit later.

 resource "kubernetes_persistent_volume" "kafka" {
   metadata {
     name = "kafka-pv"
   }
   spec {
     capacity = {
       storage = "1Gi"
     }
     access_modes = ["ReadWriteOnce"]
     persistent_volume_source {
       host_path {
         path = "/mnt/data/kafka"
       }
     }
     storage_class_name = "manual"
   }
 }

 resource "kubernetes_persistent_volume_claim" "kafka" {
   metadata {
     name = "kafka-pvc"
     namespace = kubernetes_namespace.main.metadata[0].name
   }
   spec {
     access_modes = ["ReadWriteOnce"]
     resources {
       requests = {
         storage = "1Gi"
       }
     }
     storage_class_name = "manual"
   }
 }
 resource "kubernetes_config_map" "kafka_config" {
   metadata {
     name      = "kafka-config"
     namespace = kubernetes_namespace.main.metadata[0].name
   }

   data = {
    server_properties = <<-EOT
      # Define listeners
      listeners=PLAINTEXT://0.0.0.0:9092
      advertised.listeners=PLAINTEXT://kafka-broker:9092

      # Enable PLAINTEXT security protocol (no SASL)
      security.protocol=PLAINTEXT

      # Logging configuration
      log.dirs=/var/lib/kafka/data
    EOT
   }
 }


resource "kubernetes_service" "kafka" {
  metadata {
    name      = "kafka-broker"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    selector = {
      app = "kafka"
    }
    port {
      name = "plaintext"
      port        = 9092
      target_port = 9092
    }
    port {
      name = "sasl-plaintext"
      port        = 9093
      target_port = 9093
    }
    type = "ClusterIP"
  }
}

resource "kubernetes_config_map" "kafka_config" should include additional configurations that may be required to configure SASL.

The deployment of it occurs in k8s_deployment.tf.

resource "kubernetes_deployment" "kafka" {
  metadata {
    name      = "kafka"
    namespace = kubernetes_namespace.main.metadata[0].name
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "kafka"
      }
    }
    template {
      metadata {
        labels = {
          app = "kafka"
        }
      }
      spec {
        container {
          name  = "kafka-broker"
          image = "confluentinc/cp-kafka:latest"

          env {
            name  = "KAFKA_BROKER_ID"
            value = "1"
          }
          env {
            name  = "KAFKA_ZOOKEEPER_CONNECT"
            value = "zookeeper:2181"
          }
          env {
            name  = "KAFKA_ADVERTISED_LISTENERS"
            value = "PLAINTEXT://kafka-broker:9092"
          }
          env {
            name  = "KAFKA_LISTENER_SECURITY_PROTOCOL_MAP"
            value = "PLAINTEXT:PLAINTEXT"
          }
          env {
            name  = "KAFKA_SECURITY_INTER_BROKER_PROTOCOL"
            value = "PLAINTEXT"
          }
          env {
            name  = "KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL"
            value = "PLAIN"
          }
          env {
            name  = "KAFKA_AUTO_CREATE_TOPICS_ENABLE"
            value = "true"
          }
          env {
            name  = "KAFKA_SASL_ENABLED_MECHANISMS"
            value = "PLAIN"
          }
          env {
            name  = "KAFKA_DEFAULT_REPLICATION_FACTOR"
            value = "1"
          }
          env {
            name  = "KAFKA_LOG_RETENTION_HOURS"
            value = "1"
          }
          env {
            name  = "KAFKA_NUM_PARTITIONS"
            value = "2"
          }
          env {
            name  = "KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR"
            value = "1"
          }

          volume_mount {
            name       = "kafka-config"
            mount_path = "/etc/kafka/configs/config.properties"
            sub_path   = "server_properties"
          }
        }

        volume {
          name = "kafka-config"
          config_map {
            name = kubernetes_config_map.kafka_config.metadata[0].name
          }
        }
      }
    }
  }
}

Again, I removed the SASL config from the above as there were some issues with it, but I will look to add it back; it’s worth cross-referencing with the repository code to see if it has been updated.

Also note that Kafka has to be configured in line with Zookeeper (see KAFKA_ZOOKEEPER_CONNECT above) so that the two can reach each other.

Application VM config

My Java (Micronaut) application will also require deployment. This config can be found in app_vm.tf.

resource "azurerm_linux_virtual_machine" "micronaut_vm" {
  name                = "micronaut-vm"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  size                = "Standard_B2s"

  admin_username      = "azureuser"
  admin_password      = "Password1234!"

  disable_password_authentication = false

  network_interface_ids = [
    azurerm_network_interface.nic.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}

resource "azurerm_public_ip" "micronaut_pip" {
  name                = "micronaut-pip"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  allocation_method   = "Static"
  sku                 = "Basic"
  lifecycle           {
    create_before_destroy = true
  }
  ip_version = "IPv4"  # You can duplicate this block for IPv6 or use "IPv4" and "IPv6" if available
}

resource "kubernetes_service" "micronaut_service" {
  metadata {
    name      = "micronaut-service"
    namespace = kubernetes_namespace.main.metadata[0].name
  }

  spec {

    selector = {
      app = kubernetes_deployment.micronaut_vm.metadata[0].labels["app"]
    }

    type                    = "LoadBalancer"
    external_traffic_policy = "Local"
    ip_families             = ["IPv4"]
    ip_family_policy        = "SingleStack"

    port {
      name        = "main"
      port        = 80      # External port to expose the service
      target_port = 8081    # Internal port of the Micronaut app
    }

    port {
      name        = "udp-9876"
      port        = 9876
      target_port = 9876
      protocol    = "UDP"
    }

    # Add the range for receiving updates over UDP ports 5000-5010
    port {
      name        = "udp-5000"
      port        = 5000
      target_port = 5000
      protocol    = "UDP"
    }
  }
}

I removed the IPv6 config (it is commented out in the code); it works, but it incurs additional costs due to the extra setup. If you’re interested in that, check the repository.

Some interesting points to make here:

  • sku = "Basic" -> this will refer to the load balancer used. Standard will be required for things like ipv6
  • ip_family_policy = "SingleStack" -> RequireDualStack will be required for ipv6
  • port { entries are required to specify the ports that we will be using to communicate with our client

The deployment will be handled again in k8s_deployment.tf:

resource "kubernetes_deployment" "micronaut_vm" {
  metadata {
    name = "mmo-server"
    namespace = kubernetes_namespace.main.metadata[0].name

    labels = {
      app = "mmo-server"
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "mmo-server"
      }
    }
    template {
      metadata {
        labels = {
          app = "mmo-server"
        }
      }
      spec {
        container {
          name  = "mmo-server"
          image = "openmmoregistry.azurecr.io/myapp/mmo-server:latest"
          image_pull_policy = "Always"
          port {
            container_port = 8081
          }
        }
      }
    }
  }
}

Points of interest:

  • openmmoregistry.azurecr.io/myapp/mmo-server:latest -> this should point to your container registry, in my case openmmoregistry, as defined in resource "azurerm_container_registry" "acr" in aks.tf
  • container_port = 8081 -> this is defined in my application.yml file, as my server port is 8081
  • image_pull_policy = "Always" -> this ensures it always pulls the latest image, which is useful when I delete the pod (or restart the deployment, see the command below) and expect it to pick up the new image on restart
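For example, after pushing a new image, you can trigger a redeploy by deleting the pod or by restarting the deployment (standard kubectl; the deployment name and namespace are the ones used above):

kubectl rollout restart deployment mmo-server -n main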

Networking

Networking is defined in network.tf (the public IP itself was defined in app_vm.tf above).

resource "azurerm_virtual_network" "vnet" {
  name                = "micronaut-vnet"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "subnet" {
  name                 = "micronaut-subnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_network_interface" "nic" {
  name                = "nic-micronaut"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.micronaut_pip.id
    primary                       = true
  }
}

Applying terraform and deploying the app

We’ve now covered ALL the terraform configs, so we’re ready to start deploying the resources.

Before we apply all of this, we should set the replicas of all deployments to 0, because we haven’t actually built our application image yet and made it available to deploy.

So what we want to do is:

  • Go to k8s_deployment.tf and find every replicas = 1
  • Replace them with replicas = 0

This will not spawn the pods, which we want because we haven’t prepared the image to use yet.

After setting replicas to 0, we’re ready to execute:

terraform apply
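If this is the first run in the terraform directory, you’ll need to initialise the providers first; the usual sequence is:

terraform init
terraform plan
terraform apply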

Kubectl configuration and setup

After terraform apply has finished, you will want to fetch the credentials required to run kubectl commands. You can do this with:

az aks get-credentials --resource-group <resource_group> --name <aks-cluster-name>

In my case this equates to:

az aks get-credentials --resource-group myGameResourceGroup --name myAKSCluster

Useful commands for further debugging in the steps below:

  • kubectl get pods --all-namespaces -> lists all active pods
  • kubectl logs -f <pod_name> -n main -> get logs and follow them (my namespace is main; change it if yours is different)
  • kubectl get svc -n main -> list all services in the main namespace (change it if yours is different); this shows the external IPs that you need to point your clients at
  • kubectl describe configmap kafka-config -n main -> inspect the Kafka config map in the main namespace

Preparing your application configs for build

I want my application to work under both the local setup and the deployed setup.

You can find the configurations that your Java application uses in application.yml.

My application.yml can be found here: https://github.com/yazoo321/mmo_server_with_micronaut/blob/master/src/main/resources/application.yml

First point of interest:

micronaut:
  application:
    name: mmo_server
  server:
    port: 8081

You can see that the application runs on port 8081; this is why the service in app_vm.tf uses target_port = 8081.

The next part is linking Kafka:

kafka:
  streams:
    default:
      processing.guarantee: "exactly_once"
      auto.offset.reset: "earliest"
  bootstrap:
    servers: kafka-broker:9092
  security:
    protocol: PLAINTEXT
#  sasl:
#    mechanism: PLAIN
#    jaas:
#      config: org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="password123";
  consumers:
    mmo-server:
      bootstrap:
        servers: kafka-broker:9092

Note that I’ve temporarily disabled SASL, and that instead of an IP I reference kafka-broker.

This matches:

 resource "kubernetes_service" "kafka" {
   metadata {
     name      = "kafka-broker"
     namespace = kubernetes_namespace.main.metadata[0].name
   }

from kafka.tf and

      spec {
        container {
          name  = "kafka-broker"
          image = "confluentinc/cp-kafka:latest"

from resource "kubernetes_deployment" "kafka"

The service is the key one, as that is the name Kubernetes DNS resolves to the broker’s IP inside the cluster.

Next is MongoDB:

mongodb:
  #  Set username/password as env vars
  uri: mongodb://mongo_mmo_server:mongo_password@mongo-service:27017/mmo_server?authSource=admin

Again, this matches the mongo service in mongo.tf

resource "kubernetes_service" "mongo" {
  metadata {
    name = "mongo-service"
    namespace = kubernetes_namespace.main.metadata[0].name
  }

And finally, Redis:

redis:
  uri: redis://redis

which matches the service in redis.tf:

resource "kubernetes_service" "redis" {
  metadata {
    name      = "redis"
    namespace = kubernetes_namespace.main.metadata[0].name
  }

Adjusting local Docker to work with the server

I want my build to work locally too. I previously referred to the local IPs (localhost/127.0.0.1) for services like mongo, redis and kafka.

I have a docker-compose.yml file for local setup of these services.

  mongo_db:
    container_name: mongo-service
...
  redis:
    container_name: redis
...
  kafka1:
    container_name: kafka-broker

I thought Windows would be able to resolve the IPs from the Docker container names; perhaps that works on Mac and other systems, but it didn’t work for me on Windows (or I did something wrong).

To resolve the names manually, I edited the hosts file; you can use this workaround on other systems too.

On Windows, you need to edit the file at C:\Windows\System32\drivers\etc\hosts (on Linux and macOS it is /etc/hosts)

and add these entries:

127.0.0.1 redis
127.0.0.1 mongo-service
127.0.0.1 kafka-broker

The names need to match your configuration in application.yml.

This simply resolves those services to localhost.

Building and deploying the app to Azure

Now we’re finally at the stage to build the app.

I assume you know how to build your jar file, but just in case: I create mine using the Gradle assemble task.

In my case, the command is ./gradlew assemble.

This will create the jar file in build/libs by default.

finding your jar file in build/libs

Next, I want to copy/move it to another directory, where I will package it into a docker image.
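For example (the directory name here is arbitrary; the jar name matches the Dockerfile below):

mkdir -p ../mmo-server-image
cp build/libs/mmo_server-0.8.2-all.jar ../mmo-server-image/
cd ../mmo-server-image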

Create a Dockerfile and move the jar to the same directory

The content in this Dockerfile is:

FROM openjdk:17-jdk-alpine
COPY ./mmo_server-0.8.2-all.jar /app.jar
ENTRYPOINT ["java", "-Djava.util.concurrent.ForkJoinPool.common.parallelism=12", "-jar", "/app.jar"]

This is the image that will be pulled and run by our deployment.

It uses a Java 17 base image and copies in the jar.

The entrypoint defines what it will execute. I added:

"-Djava.util.concurrent.ForkJoinPool.common.parallelism=12" because I have multiple schedulers, and the default parallelism would cause issues on the VM sizes I chose.

Now we can build and push this Docker image to the container registry we set up in Azure using Terraform.

Remember we created:

resource "azurerm_container_registry" "acr" {
  name                = "openmmoregistry"
  resource_group_name = azurerm_resource_group.main.name
  location            = azurerm_resource_group.main.location
  sku                 = "Basic"
}

The name above that we will use is: openmmoregistry.

The commands we will need to execute are:

docker build -t myapp/mmo-server .
docker tag myapp/mmo-server openmmoregistry.azurecr.io/myapp/mmo-server
docker push openmmoregistry.azurecr.io/myapp/mmo-server

You will need Docker to be authenticated with ACR to allow this; to do so, you may need to execute:

- docker logout
- az login
- az acr login --name openmmoregistry
- docker tag myapp/mmo-server openmmoregistry.azurecr.io/myapp/mmo-server
- docker push openmmoregistry.azurecr.io/myapp/mmo-server

Complete the deploy

Now we can go back to k8s_deployment.tf and, if you set replicas = 0 on all resources earlier, update them back to replicas = 1.

After doing so, we just need to apply changes using terraform apply.

That’s it! The services should now be deployed and available for use.

Refer back to the Kubectl configuration and setup section for debugging commands to find pods and services. The services will provide the external IPs that you will need to integrate with your application.