AKS makes it easy to deploy and manage containerized applications without container orchestration expertise. Azure handles the ongoing operations, including provisioning, upgrading, and scaling of resources/nodes. Worker nodes are deployed as Azure Virtual Machines, while the master nodes are completely managed by Azure. In short, AKS reduces the complexity and operational overhead of managing a Kubernetes cluster by offloading much of that responsibility to Azure, which also handles health monitoring and maintenance. In addition to AKS, Azure has a full ecosystem of container-based services such as Azure Container Registry, Azure Service Fabric, and Azure Batch.
Overview
Managed Kubernetes simplifies the deployment, management, and operation of Kubernetes, allowing developers to take advantage of Kubernetes without worrying about the underlying plumbing needed to get it up and running, and freeing up developer time to focus on the applications. The major cloud providers each offer such a service: Google has Google Kubernetes Engine (GKE), Amazon has Elastic Container Service for Kubernetes (EKS), and Microsoft has Azure Kubernetes Service (AKS).
The focus of this blog is on Azure Kubernetes Service (AKS).
AKS Reference Architecture (Kubenet Networking)
Throughout the blog article we will reference the following architecture. It shows a three-node Kubernetes cluster with basic kubenet networking in a flat, routed topology. The master nodes are completely managed by Azure.
Kubernetes Service Architecture
To simplify the network configuration for application workloads, Kubernetes uses Services to logically group a set of pods together and expose your application for network connectivity, internal or external. There are three commonly used types of services, or ServiceTypes:
- ClusterIP
- NodePort
- LoadBalancer
We will focus on the LoadBalancer service type, which leverages an external Azure Load Balancer with a public IP.
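As an illustration, the three ServiceTypes differ only in the `spec.type` field of the Service manifest. A minimal sketch (the name, selector, and port below are placeholders, not taken from this cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service        # placeholder name
spec:
  type: LoadBalancer           # or ClusterIP (the default), or NodePort
  selector:
    app: example               # placeholder pod label
  ports:
  - port: 80
```

With `type: ClusterIP` the service is reachable only inside the cluster; `NodePort` additionally opens a high port on every node; `LoadBalancer` builds on NodePort and provisions a cloud load balancer in front of it.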
Source: Microsoft Documentation
Install Azure CLI and login to Azure
Azure Kubernetes Service management can be done from a development VM as well as from Azure Cloud Shell. In this setup, I'm using an Ubuntu VM on which I've installed the Azure CLI locally. To install the Azure CLI, follow this link.
A few basic commands to log in to Azure using the Azure CLI:
az login
az account set --subscription "Microsoft Azure XXXX"
az account show --output table
Create AKS Cluster and Connect to It
Creating the AKS cluster in Azure takes a single command. First, create a resource group in Azure to manage the AKS cluster resources.
********************* On the Local VM **********************

azure@aks-setup-vm:~$ az group create --name nn-aks-rg --location eastus
{
  "id": "/subscriptions/XXXXXXX-f308-496c-a43c-faaeXXXXXX/resourceGroups/nn-aks-rg",
  "location": "eastus",
  "managedBy": null,
  "name": "nn-aks-rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": null
}

Create the AKS cluster:

az aks create \
  --resource-group nn-aks-rg \
  --name nn-aks-cluster \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --node-vm-size Standard_DS1_v2 \
  --dns-name-prefix nnakscluster

Install kubectl (if not installed already!)

Connect to the AKS cluster from the development VM:

azure@aks-setup-vm:~$ az aks get-credentials --resource-group nn-aks-rg --name nn-aks-cluster
Merged "nn-aks-cluster" as current context in /home/nehali/.kube/config
azure@aks-setup-vm:~$ kubectl config get-clusters
NAME
nn-aks-cluster
azure@aks-setup-vm:~$ kubectl config current-context
nn-aks-cluster
azure@aks-setup-vm:~$ kubectl get nodes
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-19416140-0   Ready    agent   14m   v1.9.11
aks-nodepool1-19416140-1   Ready    agent   14m   v1.9.11
aks-nodepool1-19416140-2   Ready    agent   14m   v1.9.11

Note down the node IPs from the output below:

azure@aks-setup-vm:~$ kubectl get nodes -o wide
NAME                       STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-19416140-0   Ready    agent   23h   v1.9.11   10.240.0.4    <none>        Ubuntu 16.04.5 LTS   4.15.0-1035-azure   docker://3.0.1
aks-nodepool1-19416140-1   Ready    agent   23h   v1.9.11   10.240.0.5    <none>        Ubuntu 16.04.5 LTS   4.15.0-1035-azure   docker://3.0.1
aks-nodepool1-19416140-2   Ready    agent   23h   v1.9.11   10.240.0.6    <none>        Ubuntu 16.04.5 LTS   4.15.0-1035-azure   docker://3.0.1

azure@aks-setup-vm:~/.ssh$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.11", GitCommit:"1bfeeb6f212135a22dc787b73e1980e5bccef13d", GitTreeState:"clean", BuildDate:"2018-09-28T21:35:22Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Validations in Azure
Once the Azure Kubernetes Service cluster is created, log in to the Azure portal and verify the resource groups, the service principal, the three node IPs, and the route table used for inter-pod routing.
Resource Groups
Service Principal
Kubernetes Nodes
Route Table
Load Balancer
Run a Sample Containerized Application
Deployment Manifest file
Create a Kubernetes manifest file for the deployment. A deployment in Kubernetes represents one or more identical pods, managed by the Kubernetes Deployment controller. It also defines the number of replicas (pods) to create. In our case we create a file called nn-deployment.yaml, which uses the nginxdemos/hello container image (an NGINX-based demo app) and 3 replicas. We will use a separate manifest file for the service.
azure@aks-setup-vm:~$ more nn-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nn-nginx-deployment
  labels:
    app: nn-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nn-nginx
  template:
    metadata:
      labels:
        app: nn-nginx
    spec:
      containers:
      - name: nnc-nginx
        image: nginxdemos/hello
        ports:
        - containerPort: 80
azure@aks-setup-vm:~$ kubectl create -f nn-deployment.yaml
deployment.apps/nn-nginx-deployment created
azure@aks-setup-vm:~$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nn-nginx-deployment 3/3 3 3 43m
azure@aks-setup-vm:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
aks-ssh-6fbc77d848-ghdzh 1/1 Running 6 22h
nn-nginx-deployment-77fcff4b8-f6pxc 1/1 Running 0 20h
nn-nginx-deployment-77fcff4b8-klvsj 1/1 Running 0 20h
nn-nginx-deployment-77fcff4b8-n98q9 1/1 Running 0 20h
Get the POD IPs using the -o wide switch:
azure@aks-setup-vm:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
aks-ssh-6fbc77d848-ghdzh 1/1 Running 6 22h 10.244.0.7 aks-nodepool1-19416140-2 <none> <none>
nn-nginx-deployment-77fcff4b8-f6pxc 1/1 Running 0 20h 10.244.2.9 aks-nodepool1-19416140-1 <none> <none>
nn-nginx-deployment-77fcff4b8-klvsj 1/1 Running 0 20h 10.244.0.9 aks-nodepool1-19416140-2 <none> <none>
nn-nginx-deployment-77fcff4b8-n98q9 1/1 Running 0 20h 10.244.1.9 aks-nodepool1-19416140-0 <none> <none>
azure@aks-setup-vm:~$ kubectl get pods -o yaml | grep -i PODIP
podIP: 10.244.0.7
podIP: 10.244.2.9
podIP: 10.244.0.9
podIP: 10.244.1.9
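The grep above works; equivalently, the IPs can be pulled from the tabular output with awk, or on a live cluster with `kubectl get pods -o jsonpath='{.items[*].status.podIP}'`. A self-contained sketch using a saved copy of the listing above (the /tmp path is illustrative):

```shell
# The sample data below is copied from the `kubectl get pods -o wide` output
# above; in practice you would pipe the live command straight into awk.
cat > /tmp/pods.txt <<'EOF'
NAME READY STATUS RESTARTS AGE IP NODE
nn-nginx-deployment-77fcff4b8-f6pxc 1/1 Running 0 20h 10.244.2.9 aks-nodepool1-19416140-1
nn-nginx-deployment-77fcff4b8-klvsj 1/1 Running 0 20h 10.244.0.9 aks-nodepool1-19416140-2
nn-nginx-deployment-77fcff4b8-n98q9 1/1 Running 0 20h 10.244.1.9 aks-nodepool1-19416140-0
EOF
# IP is the 6th whitespace-separated column; skip the header row.
pod_ips=$(awk 'NR > 1 {print $6}' /tmp/pods.txt)
echo "$pod_ips"
```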
Service Manifest file
Kubernetes uses Services to logically group a set of pods together and provide network connectivity to them. As explained in the architecture section, there are three types of services. In this example, we will use the LoadBalancer service type. The following manifest file creates an external public IP address and connects the matching pods to the load balancer's backend pool.
azure@aks-setup-vm:~$ more nn-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nn-nginx-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nn-nginx
azure@aks-setup-vm:~$ kubectl create -f nn-service.yaml
service/nn-nginx-service created
azure@aks-setup-vm:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 3h55m
nn-nginx-service LoadBalancer 10.0.121.81 <pending> 80:32210/TCP 24s
azure@aks-setup-vm:~$ kubectl get service --watch
Note the private and public IPs for the service and the corresponding pod endpoints:
azure@aks-setup-vm:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 6h37m
nn-nginx-service LoadBalancer 10.0.140.61 40.71.30.139 80:31278/TCP 148m
azure@aks-setup-vm:~$ kubectl get endpoints nn-nginx-service
NAME ENDPOINTS AGE
nn-nginx-service 10.244.0.9:80,10.244.1.9:80,10.244.2.9:80 149m
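In the service listing above, the PORT(S) value `80:31278/TCP` encodes both the service port and the NodePort that kube-proxy opens on every node; the Azure load balancer's public IP (40.71.30.139) forwards to that NodePort, and kube-proxy then balances across the three pod endpoints. A small sketch unpacking that column with plain shell parameter expansion:

```shell
# PORT(S) column format: <servicePort>:<nodePort>/<protocol>
ports="80:31278/TCP"            # value taken from the service output above
service_port="${ports%%:*}"     # strip everything from the first ':' onward
node_port="${ports#*:}"         # strip the service port...
node_port="${node_port%%/*}"    # ...then strip the '/TCP' protocol suffix
echo "service port: $service_port, node port: $node_port"
```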
SSH into the AKS Nodes
Throughout the lifecycle of your Azure Kubernetes Service cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. The AKS nodes are Linux VMs, so you can access them using SSH. For security purposes, the AKS nodes are not exposed to the internet and master nodes are fully managed by Azure.
This article shows you how to create an SSH connection to an AKS node using its private IP address. Detailed documentation here.
Get the node resource group:
azure@aks-setup-vm:~$ az aks show --resource-group nn-aks-rg --name nn-aks-cluster --query nodeResourceGroup -o tsv
MC_nn-aks-rg_nn-aks-cluster_eastus
Get the list of VMs
azure@aks-setup-vm:~$ az vm list --resource-group MC_nn-aks-rg_nn-aks-cluster_eastus -o table
Name ResourceGroup Location Zones
------------------------ ---------------------------------- ---------- -------
aks-nodepool1-19416140-0 MC_nn-aks-rg_nn-aks-cluster_eastus eastus
aks-nodepool1-19416140-1 MC_nn-aks-rg_nn-aks-cluster_eastus eastus
aks-nodepool1-19416140-2 MC_nn-aks-rg_nn-aks-cluster_eastus eastus
Add the public key to the nodes
az vm user update \
--resource-group MC_nn-aks-rg_nn-aks-cluster_eastus \
--name aks-nodepool1-19416140-0 \
--username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
az vm user update \
--resource-group MC_nn-aks-rg_nn-aks-cluster_eastus \
--name aks-nodepool1-19416140-1 \
--username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
az vm user update \
--resource-group MC_nn-aks-rg_nn-aks-cluster_eastus \
--name aks-nodepool1-19416140-2 \
--username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
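The three `az vm user update` invocations above differ only in the node name, so they can be generated in a loop. A sketch that only prints the commands (drop the echo/variable capture to actually run them; assumes the az CLI is logged in):

```shell
# Build the per-node `az vm user update` commands; the node names and the
# resource group are the ones listed earlier in this article.
RG=MC_nn-aks-rg_nn-aks-cluster_eastus
cmds=$(for node in aks-nodepool1-19416140-0 aks-nodepool1-19416140-1 aks-nodepool1-19416140-2; do
  echo "az vm user update --resource-group $RG --name $node --username azureuser --ssh-key-value ~/.ssh/id_rsa.pub"
done)
echo "$cmds"
```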
Get the list of node IPs:
azure@aks-setup-vm:~$ az vm list-ip-addresses --resource-group MC_nn-aks-rg_nn-aks-cluster_eastus -o table
VirtualMachine PrivateIPAddresses
------------------------ --------------------
aks-nodepool1-19416140-0 10.240.0.4
aks-nodepool1-19416140-1 10.240.0.5
aks-nodepool1-19416140-2 10.240.0.6
Run an Ubuntu container image and attach a terminal session to it. We will use this container to SSH to any of the AKS cluster nodes.
kubectl run -it --rm aks-ssh --image=ubuntu
apt-get update && apt-get install openssh-client -y
In a separate window
azure@aks-setup-vm:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
aks-ssh-6fbc77d848-h52wc 1/1 Running 0 43s
nn-nginx-deployment-7489bc85cf-95jxn 1/1 Running 0 15m
nn-nginx-deployment-7489bc85cf-xwllg 1/1 Running 0 15m
nn-nginx-deployment-7489bc85cf-zp68z 1/1 Running 0 15m
Copy the ssh private key to the newly deployed pod.
azure@aks-setup-vm:~$ kubectl cp ~/.ssh/id_rsa aks-ssh-6fbc77d848-h52wc:/id_rsa
Back in the container terminal
root@aks-ssh-6fbc77d848-h52wc:/# ls
bin boot dev etc home id_rsa lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@aks-ssh-6fbc77d848-h52wc:/# chmod 0600 id_rsa
root@aks-ssh-6fbc77d848-h52wc:/# mv id_rsa ~/.ssh/
root@aks-ssh-6fbc77d848-h52wc:~# cd .ssh/
root@aks-ssh-6fbc77d848-h52wc:~/.ssh# ls
id_rsa known_hosts
From here on you can ssh to any of the AKS nodes.
root@aks-ssh-6fbc77d848-h52wc:~/.ssh# ssh azureuser@10.240.0.4
Inspect Kubenet Networking
In Azure Kubernetes Service, you can deploy a cluster that uses one of the following two network models:
- Basic networking – The network resources are created and configured as the AKS cluster is deployed. This uses the kubenet plugin.
- Advanced networking – The AKS cluster is connected to existing virtual network resources and configurations. This uses the Azure CNI plugin.
In Part 1 of this blog, we will focus on basic networking (kubenet networking) and take a behind-the-scenes look at the traffic flow.
Basic Networking
The basic networking option is the default configuration for AKS cluster creation. The Azure platform manages the network configuration of the cluster and pods.
Nodes in an AKS cluster configured for basic networking use the kubenet Kubernetes plugin.
Basic networking provides the following features:
- Expose a Kubernetes service externally or internally through the Azure Load Balancer.
- Pods can access resources on the public Internet.
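A consequence of kubenet visible in the outputs in this article: the pod address space (10.244.0.0/16 here) is split into one /24 per node, and the Azure route table sends each /24 to the owning node's private IP. The per-node /24 can be read straight off the pod listing; a small sketch using the node/pod-IP pairs seen earlier:

```shell
# Node / pod-IP pairs taken from `kubectl get pods -o wide` earlier.
pods="aks-nodepool1-19416140-1 10.244.2.9
aks-nodepool1-19416140-2 10.244.0.9
aks-nodepool1-19416140-0 10.244.1.9"
# Truncate each pod IP to its /24 to recover the node's pod subnet.
mapping=$(echo "$pods" | while read -r node ip; do
  subnet=$(echo "$ip" | awk -F. '{printf "%s.%s.%s.0/24", $1, $2, $3}')
  echo "$node owns $subnet"
done)
echo "$mapping"
```

Note that the /24 index does not follow the node name: node-0 owns 10.244.1.0/24 in this cluster, which matches the cbr0 route shown in its routing table below.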
**********************
Closer look at node-0
***********************
Routing table. Notice the 10.240.0.0/16 node-subnet route on eth0 and the 10.244.1.0/24 route (this node's pod CIDR) via cbr0.
azureuser@aks-nodepool1-19416140-0:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.240.0.1 0.0.0.0 UG 0 0 0 eth0
10.240.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cbr0
168.63.129.16 10.240.0.1 255.255.255.255 UGH 0 0 0 eth0
169.254.169.254 10.240.0.1 255.255.255.255 UGH 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
Review the interfaces: eth0, the cbr0 bridge, and the veth interfaces.
azureuser@aks-nodepool1-19416140-0:~$ ip add sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0d:3a:4f:c5:c9 brd ff:ff:ff:ff:ff:ff
inet 10.240.0.4/16 brd 10.240.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20d:3aff:fe4f:c5c9/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:18:05:ef:bc brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
link/ether 82:ed:fb:54:f6:ec brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 scope global cbr0
valid_lft forever preferred_lft forever
inet6 fe80::80ed:fbff:fe54:f6ec/64 scope link
valid_lft forever preferred_lft forever
5: veth13cb8d0a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether 62:2b:d7:5d:38:8d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::602b:d7ff:fe5d:388d/64 scope link
valid_lft forever preferred_lft forever
6: vetha4260672@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether f2:f2:f2:30:06:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::f0f2:f2ff:fe30:6d8/64 scope link
valid_lft forever preferred_lft forever
7: veth5288d7cc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether b6:28:0b:0b:59:77 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::b428:bff:fe0b:5977/64 scope link
valid_lft forever preferred_lft forever
8: veth0e32bdb4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether 7e:2a:72:88:47:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::7c2a:72ff:fe88:47d2/64 scope link
valid_lft forever preferred_lft forever
9: veth68476d48@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether 3a:14:bf:a4:4b:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
inet6 fe80::3814:bfff:fea4:4be2/64 scope link
valid_lft forever preferred_lft forever
azureuser@aks-nodepool1-19416140-0:~$
Install bridge-utils to take a closer look at the cbr0 container bridge:
root@aks-nodepool1-19416140-0:~# apt-get install bridge-utils
root@aks-nodepool1-19416140-0:~# brctl show
bridge name bridge id STP enabled interfaces
cbr0 8000.82edfb54f6ec no veth0e32bdb4
veth13cb8d0a
veth5288d7cc
vetha4260672
docker0 8000.02421805efbc no
****************
Verify Routing
***************
Attach to one of the PODs:
azure@aks-setup-vm:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
aks-ssh-6fbc77d848-ghdzh 1/1 Running 7 3d1h 10.244.0.7 aks-nodepool1-19416140-2 <none> <none>
nn-nginx-deployment-77fcff4b8-f6pxc 1/1 Running 0 2d22h 10.244.2.9 aks-nodepool1-19416140-1 <none> <none>
nn-nginx-deployment-77fcff4b8-klvsj 1/1 Running 0 2d22h 10.244.0.9 aks-nodepool1-19416140-2 <none> <none>
nn-nginx-deployment-77fcff4b8-n98q9 1/1 Running 0 2d22h 10.244.1.9 aks-nodepool1-19416140-0 <none> <none>
Get the IP address of the POD
azure@aks-setup-vm:~$ kubectl exec -it nn-nginx-deployment-77fcff4b8-f6pxc sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 8A:7D:EE:A6:EF:4C
inet addr:10.244.2.9 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:213102 errors:0 dropped:0 overruns:0 frame:0
TX packets:113757 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:12327875 (11.7 MiB) TX bytes:9181686 (8.7 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # hostname
nn-nginx-deployment-77fcff4b8-f6pxc
/ # ping 10.244.0.9
PING 10.244.0.9 (10.244.0.9): 56 data bytes
64 bytes from 10.244.0.9: seq=0 ttl=62 time=1.056 ms
64 bytes from 10.244.0.9: seq=1 ttl=62 time=0.954 ms
^C
--- 10.244.0.9 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.954/1.005/1.056 ms
/ # traceroute 10.244.0.9
traceroute to 10.244.0.9 (10.244.0.9), 30 hops max, 46 byte packets
1 10.244.2.1 (10.244.2.1) 0.007 ms 0.007 ms 0.004 ms
2 10.240.0.6 (10.240.0.6) 0.892 ms 0.744 ms 1.004 ms
3 10.244.0.9 (10.244.0.9) 1.008 ms 0.673 ms 0.708 ms
The traceroute confirms the kubenet data path: traffic leaves the source pod through its node's cbr0 bridge gateway (10.244.2.1), is routed by the Azure route table to the destination node (10.240.0.6), and finally reaches the destination pod (10.244.0.9).