Completely Managed PHP Website Development on AWS EKS with Prometheus and Grafana
Prerequisite technologies and tools:
- Amazon Elastic Kubernetes Service: Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. Customers such as Intel, Snap, Intuit, GoDaddy, and Autodesk trust EKS to run their most sensitive and mission-critical applications because of its security, reliability, and scalability.
- Docker: Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
- Kubernetes: Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
- Elastic Block Storage: Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.
- Elastic File System: Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations.
- AWS Fargate: AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
- Elastic Load Balancing: Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant.
- CloudFormation: AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third-party resources.
- Helm: Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it as Apt/Yum/Homebrew for Kubernetes. Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a single pod, or something complex, like a full web application stack, onto Amazon EKS. A minimal sketch of a chart layout is shown right after this list.
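To make the chart idea concrete, here is a minimal sketch of what a chart looks like on disk. The name wordpress-chart and the files shown are only illustrative, not something used later in this task:
wordpress-chart/
  Chart.yaml        # chart metadata: name, version, description
  values.yaml       # default configuration values (image, replica count, service type, ...)
  templates/        # Kubernetes manifest templates that fill in those values
    deployment.yaml
    service.yaml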
Node
In Kubernetes, a node is an individual machine that is part of the cluster; on EKS, each worker node is one EC2 instance. Put simply, it is one installation of the Kubernetes node components per machine, and the cluster groups several such machines together so that your containerized workloads can be scheduled across them.
Types of nodes
There are two types of nodes:
- Master nodes (the control plane)
- Slave nodes (the worker nodes)
In a typical cluster, the master node is kind of like the boss node: it stores the cluster state, exposes the Kubernetes API, and decides where workloads run. The slave/worker nodes simply run the pods that the master schedules onto them, so whenever you want to run something in the cluster, you submit it to the master and it gets placed on a worker. With EKS, AWS manages the master/control plane for you, and the worker nodes are the EC2 instances defined in the node groups later in this article.
The flow chart of logging in to Amazon EKS.
You can work with Amazon EKS in the ways shown in the following figure. If you use the API directly, you need automation tools like Terraform; the other CLI-based ways are covered in detail in the installation part, so you can read about them there. Here I am using eksctl to perform this task.
TASK DESCRIPTION:
Create a Kubernetes cluster on top of a public cloud, i.e. AWS. AWS provides a built-in service for this, Elastic Kubernetes Service (EKS), which internally creates and manages all the slave/worker nodes. Then create a Kubernetes Deployment, deploy our website through it, make the Deployment's data persistent so that nothing is lost, and reflect code changes in real time. Here I am deploying a WordPress (PHP) application on Kubernetes using Amazon Elastic Kubernetes Service; monitoring is done with Prometheus and visualization with Grafana.
Installation Part of Set-Up:
- awscli: The AWS Command Line Interface (AWS CLI) is an open-source tool that enables you to interact with AWS services using commands in your command-line shell. With minimal configuration, the AWS CLI enables you to start running commands that implement functionality equivalent to that provided by the browser-based AWS Management Console from the command prompt in your favorite terminal program. See the installation of awscli.
- kubectl: The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For a complete list of kubectl operations, see the installation of kubectl.
- eksctl: eksctl is a simple CLI tool for creating clusters on EKS - Amazon's new managed Kubernetes service for EC2. It is written in Go, uses CloudFormation, was created by Weaveworks and it welcomes contributions from the community. Create a basic cluster in minutes with just one command. see the installation of eksctl
- helm: You have already read about the helm tool; here you will see the installation part of helm. I recommend Helm v2 for this walkthrough, because Helm v3 removed Tiller, and the commands later in this article assume the v2 (Tiller-based) workflow. Example installation commands for all four tools are sketched right after this list.
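For reference, here is a hedged sketch of how these tools could be installed on a Linux workstation. The exact versions and download URLs (kubectl v1.16.8, helm v2.16.9) are only examples from around the time of this setup, so check the official installation pages for the current ones:
awscli v2 (Linux x86_64):
# curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
# unzip awscliv2.zip
# sudo ./aws/install
kubectl (pick a version close to your cluster's Kubernetes version):
# curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.8/bin/linux/amd64/kubectl
# chmod +x kubectl && sudo mv kubectl /usr/local/bin/
eksctl:
# curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
# sudo mv /tmp/eksctl /usr/local/bin/
helm v2 (the tarball also contains the tiller binary):
# curl -LO https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
# tar -zxvf helm-v2.16.9-linux-amd64.tar.gz
# sudo mv linux-amd64/helm linux-amd64/tiller /usr/local/bin/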
Login part with AWS IAM:
When you start this task, you need an AWS account. You could use the root AWS account, but here I am using an AWS IAM account instead, for several reasons covered by the following topics:
- AWS Security Credentials. This topic provides general information about the types of credentials used for accessing AWS.
- IAM Best Practices. This topic presents a list of suggestions for using the IAM service to help secure your AWS resources.
- Signing AWS API Requests. This set of topics walks you through the process of signing a request using an access key ID and secret access key.
How to create an IAM account: It is very easy to create an IAM account from the AWS root account. Follow the given steps:
1. Go to AWS service dashboard and search 'IAM' and click it.
2. click 'user' and then click 'add user'.
3. Now follow the given screenshot in sequence.
Now, click on 'create user' and the IAM account will be ready to use. After creating the IAM account, you have to log in; here I am using the CLI to configure the IAM credentials, as follows:
┌─[sachinkumarkashyap@parrot]─[~]
└──╼ $aws configure
AWS Access Key ID [****************2QXQ]:
AWS Secret Access Key [****************PjjI]:
Default region name [ap-south-1]:
Default output format [None]:
Cluster Creation:
You can create clusters either directly from the command line with eksctl flags or with a YAML file that describes the cluster to match your requirements. Creating a cluster is not a big deal. A command-line example is sketched just below, but I suggest using a YAML file to create the cluster, so after the sketch, follow the given steps:
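A minimal command-line sketch (the node-group name, instance type, and node count here are just illustrative, not the exact cluster created below):
# eksctl create cluster --name EKScluster --region ap-south-1 --nodegroup-name ng1 --node-type t2.micro --nodes 2 --ssh-access --ssh-public-key eks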
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $vim cluster.yml
This cluster.yml file describes the cluster and its node groups, so you can create however many nodes you want. To understand it, read the node concept above carefully; it will help you make sense of this file.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKScluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: eks
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
      publicKeyName: eks
  - name: ng-mixed
    minSize: 2
    maxSize: 5
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t3.small", "t3.medium"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: eks
Before running the command '$ eksctl create cluster -f cluster.yml', look at your EC2 dashboard and your CloudFormation dashboard as well. You will find that no instances are running and there are no stacks for the cluster or node groups, as in the following screenshots.
After creating this file, you just have to run one command and the entire cluster will launch.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $eksctl create cluster -f cluster.yml
After running the above command, you will see output like the following. The most important thing: it can take up to 20 minutes, so please be patient. Once everything is done, the output looks like this.
[ℹ] eksctl version 0.23.0
[ℹ] using region ap-south-1
[ℹ] setting availability zones to [ap-south-1c ap-south-1b ap-south-1a]
[ℹ] subnets for ap-south-1c - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for ap-south-1b - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for ap-south-1a - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng1" will use "ami-073969767527f7306" [AmazonLinux2/1.16]
[ℹ] using EC2 key pair "eks"
[ℹ] nodegroup "ng2" will use "ami-073969767527f7306" [AmazonLinux2/1.16]
[ℹ] using EC2 key pair "eks"
[ℹ] nodegroup "ng-mixed" will use "ami-073969767527f7306" [AmazonLinux2/1.16]
[ℹ] using EC2 key pair "eks"
[ℹ] using Kubernetes version 1.16
[ℹ] creating EKS cluster "EKScluster" in "ap-south-1" region with un-managed nodes
[ℹ] 3 nodegroups (ng-mixed, ng1, ng2) were included (based on the include/exclude rules)
[ℹ] will create a CloudFormation stack for cluster itself and 3 nodegroup stack(s)
[ℹ] will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-south-1 --cluster=EKScluster'
[ℹ] CloudWatch logging will not be enabled for cluster "EKScluster" in "ap-south-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=ap-south-1 --cluster=EKScluster'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "EKScluster" in "ap-south-1"
[ℹ] 2 sequential tasks: { create cluster control plane "EKScluster", 2 sequential sub-tasks: { no tasks, 3 parallel sub-tasks: { create nodegroup "ng1", create nodegroup "ng2", create nodegroup "ng-mixed" } } }
[ℹ] building cluster stack "eksctl-EKScluster-cluster"
[ℹ] deploying stack "eksctl-EKScluster-cluster"
[ℹ] building nodegroup stack "eksctl-EKScluster-nodegroup-ng-mixed"
[ℹ] building nodegroup stack "eksctl-EKScluster-nodegroup-ng2"
[ℹ] building nodegroup stack "eksctl-EKScluster-nodegroup-ng1"
[ℹ] --nodes-min=1 was set automatically for nodegroup ng2
[ℹ] --nodes-max=1 was set automatically for nodegroup ng2
[ℹ] --nodes-min=2 was set automatically for nodegroup ng1
[ℹ] --nodes-max=2 was set automatically for nodegroup ng1
[ℹ] deploying stack "eksctl-EKScluster-nodegroup-ng-mixed"
[ℹ] deploying stack "eksctl-EKScluster-nodegroup-ng2"
[ℹ] deploying stack "eksctl-EKScluster-nodegroup-ng1"
[ℹ] waiting for the control plane availability...
[✔] saved kubeconfig as "/home/sachinkumarkashyap/.kube/config"
[ℹ] no tasks
[✔] all EKS cluster resources for "EKScluster" have been created
[ℹ] adding identity "arn:aws:iam::505119877754:role/eksctl-EKScluster-nodegroup-ng1-NodeInstanceRole-1LG6NC2GBE69K" to auth ConfigMap
[ℹ] nodegroup "ng1" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "ng1"
[ℹ] nodegroup "ng1" has 2 node(s)
[ℹ] node "ip-192-168-62-113.ap-south-1.compute.internal" is ready
[ℹ] node "ip-192-168-75-12.ap-south-1.compute.internal" is ready
[ℹ] adding identity "arn:aws:iam::505119877754:role/eksctl-EKScluster-nodegroup-ng2-NodeInstanceRole-ZBXEIJMKWN27" to auth ConfigMap
[ℹ] nodegroup "ng2" has 0 node(s)
[ℹ] waiting for at least 1 node(s) to become ready in "ng2"
[ℹ] nodegroup "ng2" has 1 node(s)
[ℹ] node "ip-192-168-88-115.ap-south-1.compute.internal" is ready
[ℹ] adding identity "arn:aws:iam::505119877754:role/eksctl-EKScluster-nodegroup-ng-mi-NodeInstanceRole-1WFWFV4QV949E" to auth ConfigMap
[ℹ] nodegroup "ng-mixed" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "ng-mixed"
[ℹ] nodegroup "ng-mixed" has 2 node(s)
[ℹ] node "ip-192-168-17-193.ap-south-1.compute.internal" is ready
[ℹ] node "ip-192-168-61-109.ap-south-1.compute.internal" is ready
[ℹ] kubectl command should work with "/home/sachinkumarkashyap/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "EKScluster" in "ap-south-1" region is ready
Once the cluster is ready, revisit the EC2 and CloudFormation dashboards and you will find output like this.
If you want to know the configuration of your cluster, you can check it with the command below.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://04BEBB06589397EC737A7553ED46B901.yl4.ap-south-1.eks.amazonaws.com
  name: EKScluster.ap-south-1.eksctl.io
contexts:
- context:
    cluster: EKScluster.ap-south-1.eksctl.io
    user: iam-root-account@EKScluster.ap-south-1.eksctl.io
  name: iam-root-account@EKScluster.ap-south-1.eksctl.io
current-context: iam-root-account@EKScluster.ap-south-1.eksctl.io
kind: Config
preferences: {}
users:
- name: iam-root-account@EKScluster.ap-south-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - EKScluster
      - --region
      - ap-south-1
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
The command below constructs a kubeconfig entry with prepopulated server and certificate-authority data values for the specified cluster. You can specify an IAM role ARN with the --role-arn option to use for authentication when you issue kubectl commands; otherwise, the IAM entity in your default AWS CLI or SDK credential chain is used. You can view your default AWS CLI or SDK identity by running the aws sts get-caller-identity command. If you want to know more about this command, see the AWS EKS documentation for update-kubeconfig.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $aws eks update-kubeconfig --name EKScluster
Added new context arn:aws:eks:ap-south-1:505119877754:cluster/EKScluster to /home/sachinkumarkashyap/.kube/config
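If you did want the --role-arn variant mentioned above, it would look something like this (the account ID and role name here are placeholders, not values from this setup):
# aws eks update-kubeconfig --name EKScluster --region ap-south-1 --role-arn arn:aws:iam::<account-id>:role/<eks-admin-role>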
The command below lists the worker nodes along with their names, status, version, and so on.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-17-193.ap-south-1.compute.internal Ready <none> 22m v1.16.8-eks-fd1ea7
ip-192-168-61-109.ap-south-1.compute.internal Ready <none> 22m v1.16.8-eks-fd1ea7
ip-192-168-62-113.ap-south-1.compute.internal Ready <none> 23m v1.16.8-eks-fd1ea7
ip-192-168-75-12.ap-south-1.compute.internal Ready <none> 23m v1.16.8-eks-fd1ea7
ip-192-168-88-115.ap-south-1.compute.internal Ready <none> 22m v1.16.8-eks-fd1ea7
Using this command you can check more details of a particular node.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl describe nodes ip-192-168-88-115.ap-south-1.compute.internal
Name: ip-192-168-88-115.ap-south-1.compute.internal
Roles: <none>
Labels: alpha.eksctl.io/cluster-name=EKScluster
alpha.eksctl.io/instance-id=i-0db43b21c0950cc29
alpha.eksctl.io/nodegroup-name=ng2
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t2.small
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=ap-south-1
failure-domain.beta.kubernetes.io/zone=ap-south-1a
kubernetes.io/arch=amd64
kubernetes.io/hostname=ip-192-168-88-115.ap-south-1.compute.internal
kubernetes.io/os=linux
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 08 Jul 2020 15:27:04 +0530
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ip-192-168-88-115.ap-south-1.compute.internal
AcquireTime: <unset>
RenewTime: Wed, 08 Jul 2020 15:55:24 +0530
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 08 Jul 2020 15:54:48 +0530 Wed, 08 Jul 2020 15:27:04 +0530 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 08 Jul 2020 15:54:48 +0530 Wed, 08 Jul 2020 15:27:04 +0530 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 08 Jul 2020 15:54:48 +0530 Wed, 08 Jul 2020 15:27:04 +0530 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 08 Jul 2020 15:54:48 +0530 Wed, 08 Jul 2020 15:27:34 +0530 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.88.115
ExternalIP: 52.66.158.199
Hostname: ip-192-168-88-115.ap-south-1.compute.internal
InternalDNS: ip-192-168-88-115.ap-south-1.compute.internal
ExternalDNS: ec2-52-66-158-199.ap-south-1.compute.amazonaws.com
Capacity:
attachable-volumes-aws-ebs: 39
cpu: 1
ephemeral-storage: 83873772Ki
hugepages-2Mi: 0
memory: 2039140Ki
pods: 11
Allocatable:
attachable-volumes-aws-ebs: 39
cpu: 940m
ephemeral-storage: 76224326324
hugepages-2Mi: 0
memory: 1412452Ki
pods: 11
System Info:
Machine ID: d25e5397ffed47c3b7f4c74098c774d7
System UUID: EC2EE5C6-7EC4-28A4-7C35-CF3BC87923C1
Boot ID: 52ff343e-8e1f-46e8-bd47-517b73b3d6f7
Kernel Version: 4.14.181-140.257.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.6
Kubelet Version: v1.16.8-eks-fd1ea7
Kube-Proxy Version: v1.16.8-eks-fd1ea7
ProviderID: aws:///ap-south-1a/i-0db43b21c0950cc29
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system aws-node-ngkvb 10m (1%) 0 (0%) 0 (0%) 0 (0%) 30m
kube-system kube-proxy-sbq7f 100m (10%) 0 (0%) 0 (0%) 0 (0%) 30m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 110m (11%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 30m kube-proxy, ip-192-168-88-115.ap-south-1.compute.internal Starting kube-proxy.
So our worker nodes are ready as well, and we can start our development.
Developing a PHP Website (WordPress) with a MySQL Database:
Now we use this cluster to deploy our PHP website end to end.
A Secret is an object that stores a piece of sensitive data like a password or key. Since 1.14, kubectl supports the management of Kubernetes objects using a kustomization file. You can create a Secret by generators in kustomization.yaml.
Add a Secret generator in kustomization.yaml as shown below. You will need to replace YOUR_PASSWORD with the password you want to use.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: mysql-pass
  literals:
  - password=YOUR_PASSWORD
resources:
- deploy-mysql.yaml
- deploy-wordpress.yaml
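If you want to sanity-check what this kustomization will generate (including the hashed Secret name) before applying anything, one option is to render it locally. This preview step is just a suggestion, not part of the original flow:
# kubectl kustomize .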
The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql, and the MYSQL_ROOT_PASSWORD environment variable sets the database password from the Secret. Save this as the "deploy-mysql.yaml" file referenced above.
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-mysql
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: efs-mysql
The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment variable is set to the name of the MySQL Service defined above, so WordPress reaches the database through that Service, and the WORDPRESS_DB_PASSWORD environment variable sets the database password from the Secret that kustomize generated. Save this as deploy-wordpress.yaml.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-wordpress
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: efs-wordpress
Note: if you want to apply the Deployment manifests above one by one, you can use the commands given below. Keep in mind that the Secret is generated from the kustomization, so it still has to be created with 'kubectl apply -k .' rather than by applying kustomization.yaml directly with -f.
# kubectl apply -f deploy-mysql.yaml
# kubectl apply -f deploy-wordpress.yaml
Otherwise, you can create the whole environment with the single command below, after which the output will look like this.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl apply -k .
secret/mysql-pass-c57bb4t7mf created
service/wordpress-mysql created
service/wordpress created
deployment.apps/wordpress-mysql created
deployment.apps/wordpress created
persistentvolumeclaim/efs-mysql created
persistentvolumeclaim/efs-wordpress created
This command lists all the resources along with details like name and status, so you can check the state of each service.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl get all
NAME                                   READY   STATUS    RESTARTS   AGE
pod/wordpress-675f67dfc6-c22gm         1/1     Running   0          37m
pod/wordpress-mysql-5675b45c4d-jfjsj   1/1     Running   0          37m

NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP                                                                 PORT(S)        AGE
service/kubernetes        ClusterIP      10.100.0.1     <none>                                                                      443/TCP        82m
service/wordpress         LoadBalancer   10.100.90.62   a399296041c084dbaab40390e7cc0dac-1352718018.ap-south-1.elb.amazonaws.com   80:30698/TCP   37m
service/wordpress-mysql   ClusterIP      None           <none>                                                                      3306/TCP       37m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress         1/1     1            1           37m
deployment.apps/wordpress-mysql   1/1     1            1           37m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-675f67dfc6         1         1         1       37m
replicaset.apps/wordpress-mysql-5675b45c4d   1         1         1       37m
Note: PVC is in active use by a Pod when a Pod object exists that is using the PVC.
using this command, you can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection too:
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-6c98ef2c-704d-40f7-bcc2-9cda3e53781a 5Gi RWO Delete Bound default/efs-mysql gp2 45m
pvc-ad587c6a-4292-4b92-8a4c-207bfd5f4f7c 5Gi RWO Delete Bound default/efs-wordpress gp2
using this command, you can see that a PVC is protected when the PVC's status is Terminating and the Finalizers list includes kubernetes.io/pvc-protection:
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
efs-mysql Bound pvc-6c98ef2c-704d-40f7-bcc2-9cda3e53781a 5Gi RWO gp2 46m
efs-wordpress Bound pvc-ad587c6a-4292-4b92-8a4c-207bfd5f4f7c 5Gi RWO gp2 46m
Here, you use the LoadBalancer's external DNS name to access the environment: go to the Load Balancers section of your EC2 dashboard, copy the load balancer's DNS name, and paste it into a new browser tab. You will then see a page like this.
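Instead of the EC2 dashboard, you could also read the load balancer hostname straight from the Service object; this is just an alternative, using the wordpress Service created above:
# kubectl get svc wordpress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'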
Hurray!! You have just deployed a fully managed PHP website without any local environment: the worker nodes are in the AWS cloud, and the Kubernetes control plane is in the AWS cloud. You don't have to worry about the master, because AWS provides high reliability for the control plane.
Note: from here on we will use helm. You have already read about helm; if you missed that part, please see the helm section first. We are using helm here because it has many advantages, including the following:
- Find and use popular software packaged as Helm Charts to run in Kubernetes.
- Share your own applications as Helm Charts.
- Manage releases of Helm packages.
- Intelligently manage your Kubernetes manifest files.
- Create reproducible builds of your Kubernetes applications.
Put simply: you have created a website environment, and if somebody else wants the same environment, they would have to repeat this whole setup again and again. Helm reduces that work for Kubernetes: thanks to the power of containers and Kubernetes, you can install a popular, pre-packaged environment within seconds.
Using this power of helm we will install two environments: Prometheus and Grafana. Prometheus collects metrics from the system, and Grafana is a very popular visualization tool that sits on top of Prometheus to show system usage and much more.
Setting Up the Helm and Tiller:
Download helm and tiller and set their paths manually, as covered in the installation part; if you missed it, please see the helm installation above. After installing helm, follow these steps one by one.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
deployment.apps/tiller-deploy created
service/tiller-deploy created
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
local http://127.0.0.1:8879/charts
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
This initializes the setup for helm, and now we can install charts according to our requirements.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
aws-node-5jxf7 1/1 Running 0 101m
aws-node-fdplw 1/1 Running 0 101m
aws-node-rlssg 1/1 Running 0 99m
aws-node-s8fmm 1/1 Running 0 99m
aws-node-zj929 1/1 Running 0 100m
coredns-6856799b8d-bffmn 1/1 Running 0 109m
coredns-6856799b8d-kmdxr 1/1 Running 0 109m
kube-proxy-69qvq 1/1 Running 0 99m
kube-proxy-db5pb 1/1 Running 0 101m
kube-proxy-lhbcj 1/1 Running 0 100m
kube-proxy-pwxwn 1/1 Running 0 99m
kube-proxy-smpws 1/1 Running 0 101m
tiller-deploy-98c77669b-fsmlt 1/1 Running 0 12m
Installing Stable Prometheus Chart
Now, to install a stable Prometheus Chart, you have to follow these commands.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl create namespace prometheus
namespace/prometheus created
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"
NAME: sullen-armadillo
LAST DEPLOYED: Sat Jul 11 21:34:25 2020
NAMESPACE: prometheus
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
sullen-armadillo-prometheus-alertmanager 1 52s
sullen-armadillo-prometheus-server 5 52s
==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
sullen-armadillo-prometheus-node-exporter 5 5 5 5 5 <none> 52s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
sullen-armadillo-kube-state-metrics 1/1 1 1 52s
sullen-armadillo-prometheus-alertmanager 0/1 1 0 52s
sullen-armadillo-prometheus-pushgateway 1/1 1 1 52s
sullen-armadillo-prometheus-server 0/1 1 0 52s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sullen-armadillo-prometheus-alertmanager Bound pvc-41677ddb-8dad-46e0-8d37-be4e9491b491 2Gi RWO gp2 52s
sullen-armadillo-prometheus-server Bound pvc-a9002bbb-8c6b-4f60-84de-852cbaa226b5 8Gi RWO gp2 52s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
sullen-armadillo-kube-state-metrics-687b99c98c-x7cjl 1/1 Running 0 52s
sullen-armadillo-prometheus-alertmanager-6bc5d8c647-6wkxn 1/2 Running 0 52s
sullen-armadillo-prometheus-node-exporter-5fk6g 1/1 Running 0 52s
sullen-armadillo-prometheus-node-exporter-bgcl7 1/1 Running 0 52s
sullen-armadillo-prometheus-node-exporter-ffw96 1/1 Running 0 52s
sullen-armadillo-prometheus-node-exporter-lkfj7 1/1 Running 0 52s
sullen-armadillo-prometheus-node-exporter-zl5vl 1/1 Running 0 52s
sullen-armadillo-prometheus-pushgateway-7d48677f47-klvgv 1/1 Running 0 52s
sullen-armadillo-prometheus-server-57bfd865cd-4zp86 1/2 Running 0 52s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sullen-armadillo-kube-state-metrics ClusterIP 10.100.140.149 <none> 8080/TCP 52s
sullen-armadillo-prometheus-alertmanager ClusterIP 10.100.190.199 <none> 80/TCP 52s
sullen-armadillo-prometheus-node-exporter ClusterIP None <none> 9100/TCP 52s
sullen-armadillo-prometheus-pushgateway ClusterIP 10.100.81.62 <none> 9091/TCP 52s
sullen-armadillo-prometheus-server ClusterIP 10.100.28.179 <none> 80/TCP 52s
==> v1/ServiceAccount
NAME SECRETS AGE
sullen-armadillo-kube-state-metrics 1 52s
sullen-armadillo-prometheus-alertmanager 1 52s
sullen-armadillo-prometheus-node-exporter 1 52s
sullen-armadillo-prometheus-pushgateway 1 52s
sullen-armadillo-prometheus-server 1 52s
==> v1beta1/ClusterRole
NAME AGE
sullen-armadillo-kube-state-metrics 52s
sullen-armadillo-prometheus-alertmanager 52s
sullen-armadillo-prometheus-pushgateway 52s
sullen-armadillo-prometheus-server 52s
==> v1beta1/ClusterRoleBinding
NAME AGE
sullen-armadillo-kube-state-metrics 52s
sullen-armadillo-prometheus-alertmanager 52s
sullen-armadillo-prometheus-pushgateway 52s
sullen-armadillo-prometheus-server 52s
Note: this installs the entire Prometheus environment we require, so there is not much to worry about, but one problem does arise here.
Problem: AWS EKS is a managed Kubernetes service, but remember that the AWS VPC CNI assigns pod IPs from the node's real network interfaces (ENIs), so the number of pods per node is limited by the instance type. On a t2.micro instance only a handful of pods fit, and the Prometheus stack needs quite a few pods.
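If you want to check this limit on your own nodes, one way (not part of the original write-up) is to print each node's allocatable pod count:
# kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods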
Resolving: you can expand the environment by scaling up a node group. To do this, use a command like the following.
# eksctl scale nodegroup --cluster=EKScluster --nodes-max=10 --nodes=9 --region=ap-south-1 --name=<nodegroup-name>
But I am not using this command, because for my services the available pods are more than enough; you can follow the above command if you need more pods.
After all this setup, your Prometheus environment is ready, so you can launch it.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl get all -n prometheus
NAME READY STATUS RESTARTS AGE
pod/sullen-armadillo-kube-state-metrics-687b99c98c-x7cjl 1/1 Running 0 45m
pod/sullen-armadillo-prometheus-alertmanager-6bc5d8c647-6wkxn 2/2 Running 0 45m
pod/sullen-armadillo-prometheus-node-exporter-5fk6g 1/1 Running 0 45m
pod/sullen-armadillo-prometheus-node-exporter-bgcl7 1/1 Running 0 45m
pod/sullen-armadillo-prometheus-node-exporter-ffw96 1/1 Running 0 45m
pod/sullen-armadillo-prometheus-node-exporter-lkfj7 1/1 Running 0 45m
pod/sullen-armadillo-prometheus-node-exporter-zl5vl 1/1 Running 0 45m
pod/sullen-armadillo-prometheus-pushgateway-7d48677f47-klvgv 1/1 Running 0 45m
pod/sullen-armadillo-prometheus-server-57bfd865cd-4zp86 2/2 Running 0 45m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/sullen-armadillo-kube-state-metrics ClusterIP 10.100.140.149 <none> 8080/TCP 46m
service/sullen-armadillo-prometheus-alertmanager ClusterIP 10.100.190.199 <none> 80/TCP 46m
service/sullen-armadillo-prometheus-node-exporter ClusterIP None <none> 9100/TCP 46m
service/sullen-armadillo-prometheus-pushgateway ClusterIP 10.100.81.62 <none> 9091/TCP 46m
service/sullen-armadillo-prometheus-server ClusterIP 10.100.28.179 <none> 80/TCP 46m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/sullen-armadillo-prometheus-node-exporter 5 5 5 5 5 <none> 46m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/sullen-armadillo-kube-state-metrics 1/1 1 1 46m
deployment.apps/sullen-armadillo-prometheus-alertmanager 1/1 1 1 46m
deployment.apps/sullen-armadillo-prometheus-pushgateway 1/1 1 1 46m
deployment.apps/sullen-armadillo-prometheus-server 1/1 1 1 46m
NAME DESIRED CURRENT READY AGE
replicaset.apps/sullen-armadillo-kube-state-metrics-687b99c98c 1 1 1 46m
replicaset.apps/sullen-armadillo-prometheus-alertmanager-6bc5d8c647 1 1 1 46m
replicaset.apps/sullen-armadillo-prometheus-pushgateway-7d48677f47 1 1 1 46m
replicaset.apps/sullen-armadillo-prometheus-server-57bfd865cd 1 1 1 46m
But these services are of type ClusterIP, which lives in the cluster's private AWS network, so you cannot reach them directly from outside. To connect, you have to use a concept called port forwarding, which lets you access the internal pod. To do that:
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $kubectl -n prometheus port-forward svc/sullen-armadillo-prometheus-server 8888:80
Forwarding from 127.0.0.1:8888 -> 9090
Forwarding from [::1]:8888 -> 9090
Handling connection for 8888
Handling connection for 8888
After this, go to your browser and open "localhost:8888", and the Prometheus UI will come up.
Now your Prometheus environment is up, and you can integrate it with the Grafana environment.
Installing Stable Grafana Chart: To install Grafana please follow these two commands.
┌─[sachinkumarkashyap@parrot]─[~]
└──╼ $kubectl create namespace grafana
namespace/grafana created
┌─[sachinkumarkashyap@parrot]─[~]
└──╼ $helm install stable/grafana --namespace grafana --set persistence.storageClassName="gp2" --set adminPassword=EKS_Password --set service.type=LoadBalancer
NAME: mottled-ladybird
LAST DEPLOYED: Sat Jul 11 22:52:41 2020
NAMESPACE: grafana
STATUS: DEPLOYED
RESOURCES:
==> v1/ClusterRole
NAME AGE
mottled-ladybird-grafana-clusterrole 1s
==> v1/ClusterRoleBinding
NAME AGE
mottled-ladybird-grafana-clusterrolebinding 1s
==> v1/ConfigMap
NAME DATA AGE
mottled-ladybird-grafana 1 1s
mottled-ladybird-grafana-test 1 1s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mottled-ladybird-grafana 0/1 1 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mottled-ladybird-grafana-78f6d64cd-q78xs 0/1 ContainerCreating 0 1s
==> v1/Role
NAME AGE
mottled-ladybird-grafana-test 1s
==> v1/RoleBinding
NAME AGE
mottled-ladybird-grafana-test 1s
==> v1/Secret
NAME TYPE DATA AGE
mottled-ladybird-grafana Opaque 3 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mottled-ladybird-grafana LoadBalancer 10.100.214.97 <pending> 80:32203/TCP 1s
==> v1/ServiceAccount
NAME SECRETS AGE
mottled-ladybird-grafana 1 1s
mottled-ladybird-grafana-test 1 1s
==> v1beta1/PodSecurityPolicy
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
mottled-ladybird-grafana false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,downwardAPI,persistentVolumeClaim
mottled-ladybird-grafana-test false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,downwardAPI,emptyDir,projected,secret
==> v1beta1/Role
NAME AGE
mottled-ladybird-grafana 1s
==> v1beta1/RoleBinding
NAME AGE
mottled-ladybird-grafana 1s
This single command launches the entire Grafana setup, and you can now access it through the LoadBalancer's external DNS name.
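The admin password was set explicitly with --set adminPassword above, but for reference you can also read it back from the Secret the chart created. The Secret name comes from the release name shown in the output, and admin-password is the chart's standard key:
# kubectl get secret --namespace grafana mottled-ladybird-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo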
Now we can do everything Grafana offers, like creating panels and dashboards. Some companies and users have already published well-known dashboards, and we are using one that fits Prometheus well, so there is already a pre-built dashboard we can import to get graphical visuals (such dashboards are available in the public Grafana dashboards library).
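When wiring Grafana to Prometheus as a data source, the URL would typically be the in-cluster DNS name of the Prometheus server Service from the earlier output; assuming the release name shown above, something like:
http://sullen-armadillo-prometheus-server.prometheus.svc.cluster.local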
Using this command you can check the details of the Grafana pods and services.
┌─[✗]─[sachinkumarkashyap@parrot]─[~]
└──╼ $kubectl get all -n grafana
NAME READY STATUS RESTARTS AGE
pod/mottled-ladybird-grafana-78f6d64cd-q78xs 1/1 Running 0 8m33s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mottled-ladybird-grafana LoadBalancer 10.100.214.97 aa21f764c6bfb4a61bb8faa4169a4bba-758822216.ap-south-1.elb.amazonaws.com 80:32203/TCP 8m36s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mottled-ladybird-grafana 1/1 1 1 8m40s
NAME DESIRED CURRENT READY AGE
replicaset.apps/mottled-ladybird-grafana-78f6d64cd 1 1 1 8m40s
Additional screenshots:
Deleting the cluster: Now, last but not least, delete the entire cluster using the following command.
#eksctl delete cluster -f cluster.yml
After running this command, you will see this type of output.
┌─[sachinkumarkashyap@parrot]─[~/Desktop/EKS]
└──╼ $eksctl delete cluster -f cluster.yml
[ℹ] eksctl version 0.23.0
[ℹ] using region ap-south-1
[ℹ] deleting EKS cluster "EKScluster"
[ℹ] either account is not authorized to use Fargate or region ap-south-1 is not supported. Ignoring error
[✔] kubeconfig has been updated
[ℹ] cleaning up LoadBalancer services
[ℹ] 2 sequential tasks: { 3 parallel sub-tasks: { delete nodegroup "ng1", delete nodegroup "ng2", delete nodegroup "ng-mixed" }, delete cluster control plane "EKScluster" [async] }
[ℹ] will delete stack "eksctl-EKScluster-nodegroup-ng2"
[ℹ] waiting for stack "eksctl-EKScluster-nodegroup-ng2" to get deleted
[ℹ] will delete stack "eksctl-EKScluster-nodegroup-ng-mixed"
[ℹ] waiting for stack "eksctl-EKScluster-nodegroup-ng-mixed" to get deleted
[ℹ] will delete stack "eksctl-EKScluster-nodegroup-ng1"
[ℹ] waiting for stack "eksctl-EKScluster-nodegroup-ng1" to get deleted
[ℹ] will delete stack "eksctl-EKScluster-cluster"
[✔] all cluster resources were deleted
Now your cluster is deleted.
Note: It will take approximately 20-25 min to delete the whole cluster. so please be patient.
Github link: https://github.com/hackcoderr/aws-eks-monitoring
Thank you for reading.........