Orchestrating Applications with Azure Kubernetes Service (AKS)
This tutorial will guide you through the process of deploying, managing, and scaling your applications using Azure Kubernetes Service (AKS), a managed Kubernetes service provided by Microsoft Azure.
Prerequisites
- An active Azure subscription.
- The Azure CLI installed and configured.
- kubectl installed and configured to connect to your AKS cluster.
- Basic understanding of Docker and containerization concepts.
Step 1: Create an AKS Cluster
If you don't already have an AKS cluster, you can create one using the Azure CLI. Replace myResourceGroup and myAKSCluster with your desired names.
az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
These commands create a resource group and then an AKS cluster with a single node. The monitoring add-on enables Azure Monitor container insights, which collects metrics and logs from the cluster.
Step 2: Connect kubectl to your AKS Cluster
To manage your cluster with kubectl, you need to get the cluster's credentials.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
Verify the connection by listing the nodes in your cluster:
kubectl get nodes
Step 3: Deploy a Sample Application
We'll deploy a simple NGINX web server as a demonstration. Create a file named nginx-deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21  # pin a version rather than "latest" so the upgrade in Step 6 is a real version change
        ports:
        - containerPort: 80
Apply this deployment to your cluster:
kubectl apply -f nginx-deployment.yaml
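To confirm the deployment succeeded before moving on, you can watch the rollout and list the pods it created. A quick sketch (the label selector `app: nginx` matches the manifest above):

```shell
# Block until all replicas are rolled out and available
kubectl rollout status deployment/nginx-deployment

# List only the pods belonging to this deployment
kubectl get pods -l app=nginx
```

You should see three pods in the Running state, matching the `replicas: 3` in the manifest.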
Step 4: Expose the Application
To make your application accessible from outside the cluster, you need to create a Kubernetes Service. Create a file named nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Apply the service configuration:
kubectl apply -f nginx-service.yaml
It might take a few minutes for the LoadBalancer IP address to be provisioned. You can check its status with:
kubectl get service nginx-service
Once the EXTERNAL-IP is assigned, you can access your NGINX instance by browsing to that IP address in your web browser.
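You can also fetch the external IP and test the endpoint from the command line rather than a browser. A minimal sketch using kubectl's JSONPath output:

```shell
# Extract the LoadBalancer's public IP once it has been assigned
EXTERNAL_IP=$(kubectl get service nginx-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Request the NGINX welcome page; the response should contain "Welcome to nginx!"
curl -s "http://$EXTERNAL_IP"
```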
Step 5: Scaling Applications
Kubernetes makes it easy to scale your applications. To increase the number of NGINX pods to 5, run:
kubectl scale deployment nginx-deployment --replicas=5
You can verify the scaling by checking the deployment status:
kubectl get deployment nginx-deployment
kubectl get pods
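The `kubectl scale` command above is imperative; the declarative alternative, which keeps your manifest as the source of truth, is to edit the replica count in nginx-deployment.yaml and re-apply it. Only the relevant fragment changes:

```yaml
spec:
  replicas: 5   # changed from 3; re-apply with: kubectl apply -f nginx-deployment.yaml
```

The declarative approach is generally preferable in version-controlled setups, because the manifest always reflects the desired state of the cluster.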
Step 6: Updating Applications
To update your application (e.g., to a new image version), you can use kubectl set image. For example, to update to a specific NGINX version:
kubectl set image deployment/nginx-deployment nginx=nginx:1.21.6
Kubernetes will perform a rolling update, gradually replacing old pods with new ones so the application remains available throughout the update.
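You can monitor the rolling update and, if the new version misbehaves, revert it. Deployments keep a revision history, so a rollback is a single command:

```shell
# Watch the rolling update until it completes
kubectl rollout status deployment/nginx-deployment

# Inspect the deployment's revision history
kubectl rollout history deployment/nginx-deployment

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/nginx-deployment
```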
Step 7: Deleting Resources
When you're finished, you can delete the application resources:
kubectl delete service nginx-service
kubectl delete deployment nginx-deployment
To delete the entire AKS cluster and its resource group:
az group delete --name myResourceGroup --yes --no-wait
Next Steps
This tutorial covered the basics of orchestrating applications with AKS. You can explore more advanced topics such as:
- Deploying stateful applications using Persistent Volumes.
- Implementing rolling updates and rollbacks.
- Configuring Horizontal Pod Autoscalers (HPA).
- Setting up Ingress controllers for advanced routing.
- Integrating with CI/CD pipelines.
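As a starting point for the HPA topic above, here is a sketch of an autoscaler manifest targeting the deployment from this tutorial. Note that the HPA needs CPU resource requests set on the container (the sample deployment above does not define any), and the name nginx-hpa is arbitrary:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # scale out when average CPU usage exceeds 60% of requests
```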