How to install AKS with internal ingress controller and API Management by using Terraform – Part I


Prerequisites:

  • Terraform installed
  • Azure CLI installed
  • An active Azure subscription
  • A service principal with Contributor- or Owner-level privileges
  • An AKS service principal
  • kubectl installed

How to create a service principal using the Azure CLI

Log in to Azure using the Azure CLI with your service principal.

Note: By default, az ad sp create-for-rbac assigns the Contributor role to the service principal at the subscription scope. To reduce your risk of a compromised service principal, assign a more specific role and narrow the scope to a resource or resource group.

For now, I am running the command without specifying a role, so the service principal will get the default Contributor role assignment.

az ad sp create-for-rbac --name ssg-sp
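If you do want to follow the note above and narrow the scope, a sketch of the scoped variant looks like this (the resource group name and the subscription ID placeholder are assumptions for illustration):

```shell
# Hypothetical example: scope the service principal to a single
# resource group instead of the whole subscription.
# "ssg-rg" and <subscription-id> are placeholders.
az ad sp create-for-rbac --name ssg-sp \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/ssg-rg"
```

This keeps the credentials usable only inside that resource group, so a leaked secret cannot touch the rest of the subscription.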

Now log in to Azure with your service principal:

[root@devopscheetah ~]# az login --service-principal -u xxxxxxxxxxxxxxxxxxxxx -p xxxxxxxxxxxxxxxxxxxxx --tenant xxxxxxxxxxxxxx
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "id": "xxxxxxxxxxxxxxxxxxxxxxxxxx",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Free Trial",
    "state": "Enabled",
    "tenantId": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "user": {
      "name": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "type": "servicePrincipal"
    }
  }
]

Now, we will deploy our Kubernetes cluster on Azure with the help of Terraform.

We mainly use four files to run our Terraform configuration: one for the provider, one for the resources, one for the variables, and one for the outputs.
First, in the provider file we define our provider, such as AWS, Azure, or GCP.

[root@devopscheetah terraform]# cat provider.tf
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

provider "azurerm" {
  features {}
}

[root@devopscheetah terraform]# cat main.tf
resource "azurerm_resource_group" "demo" {
  name     = "${var.prefix}-rg"
  location = var.location
}

resource "azurerm_virtual_network" "demo" {
  name                = "${var.prefix}-network"
  location            = azurerm_resource_group.demo.location
  resource_group_name = azurerm_resource_group.demo.name
  address_space       = ["10.1.0.0/16"] # example range; adjust to your network
}

resource "azurerm_subnet" "demo" {
  name                 = "${var.prefix}-subnet"
  virtual_network_name = azurerm_virtual_network.demo.name
  resource_group_name  = azurerm_resource_group.demo.name
  address_prefixes     = ["10.1.0.0/24"] # example range; adjust to your network
}

resource "azurerm_kubernetes_cluster" "demo" {
  name                = "${var.prefix}-aks"
  location            = azurerm_resource_group.demo.location
  resource_group_name = azurerm_resource_group.demo.name
  dns_prefix          = "${var.prefix}-aks"

  default_node_pool {
    name                = "default"
    node_count          = 2
    vm_size             = "Standard_D2_v2"
    type                = "VirtualMachineScaleSets"
    availability_zones  = ["1", "2"]
    enable_auto_scaling = true
    min_count           = 2
    max_count           = 4
    vnet_subnet_id      = azurerm_subnet.demo.id
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }

  network_profile {
    network_plugin    = "azure"
    load_balancer_sku = "standard"
    network_policy    = "calico"
  }

  tags = {
    Environment = "Development"
  }
}

You can define your own variables in variables.tf:

[root@devopscheetah terraform]# cat variables.tf
variable "prefix" {
  default     = "k8stest"
  description = "A prefix used for all resources in this example"
}

variable "location" {
  default     = "West Europe"
  description = "The Azure region in which all resources in this example should be provisioned"
}

variable "client_id" {
  description = "The client (app) ID of the AKS service principal"
}

variable "client_secret" {
  description = "The client secret of the AKS service principal"
}

We also specify the outputs section in outputs.tf:

[root@devopscheetah terraform]# cat outputs.tf
output "client_certificate" {
  value = azurerm_kubernetes_cluster.demo.kube_config.0.client_certificate
}

output "kube_config" {
  value = azurerm_kubernetes_cluster.demo.kube_config_raw
}
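Once the deployment succeeds, the kube_config output can be used to reach the cluster with kubectl. A sketch (the file name azurek8s is just an example):

```shell
# Save the raw kubeconfig from the Terraform output and point
# kubectl at it.
terraform output kube_config > azurek8s
export KUBECONFIG=./azurek8s
kubectl get nodes
```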

Before running Terraform, we need to generate the service principal for our AKS application by running the following command:

[root@devopscheetah terraform]# az ad sp create-for-rbac --name k8s-sp
Changing "k8s-sp" to a valid URI of "http://k8s-sp", which is the required format used for service principal names
Found an existing application instance of "xxxxxxxxxxxxxxxxxxxxxx". We will patch it
Creating a role assignment under the scope of "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx"
{
  "appId": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "displayName": "k8s-sp",
  "name": "http://k8s-sp",
  "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "tenant": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

Now run "terraform init".

Then run "terraform plan", passing the client ID and client secret of the AKS service principal as variable values.

Finally, run "terraform apply" to deploy the changes.
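The three steps above can be sketched as follows. The appId and password come from the create-for-rbac output; the plan file name tfplan is an example:

```shell
terraform init

# Pass the AKS service principal credentials as variable values
# (use the appId and password returned by create-for-rbac).
terraform plan -var "client_id=<appId>" -var "client_secret=<password>" -out tfplan

# Apply exactly the plan that was reviewed.
terraform apply tfplan
```

Writing the plan to a file and applying that file guarantees that what you apply is what you reviewed.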

Bravo! If the apply completes without errors, our AKS cluster is now up and ready in a custom subnet. In the next part, we will access our Kubernetes cluster and deploy an API Management instance.


