Kubeadm Cluster Setup Steps:
Step1: Install Prerequisites.
Use at least two nodes (master & worker) running CentOS 7 (free, regularly updated repos are easy to get), each with 2 CPUs and 2 GB RAM.
a. Disable SELinux.
b. Edit /etc/sysctl.conf and add the entries below:
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
c. Disable swap using "swapoff -a". The kubelet does not support swap, and kubeadm's preflight checks will fail while it is enabled. (A consolidated command sketch for items a-c appears after the repo file below.)
d. Install the packages: kubeadm (which includes kubectl and kubelet) & Docker using the repo below; save it as /etc/yum.repos.d/kubernetes.repo:
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
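To wrap up the prerequisites, here is one possible command sequence for items a-c (a sketch; run as root on every node — the sed edits simply persist the changes across reboots):
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF
sysctl -p
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab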
The rest of the repos are as follows; create a file CentOS-Base.repo and add everything below in one file:
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
[root@kube-master yum.repos.d]# cat epel-7.repo
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
[root@kube-master yum.repos.d]# cat jenkins.repo
[jenkins]
name=Jenkins
baseurl=http://pkg.jenkins.io/redhat
gpgcheck=1
[root@kube-master yum.repos.d]# cat jlaska-rabbitmq.repo
[jlaska-rabbitmq]
name=Copr repo for rabbitmq owned by jlaska
baseurl=https://copr-be.cloud.fedoraproject.org/results/jlaska/rabbitmq/epel-7-$basearch/
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/jlaska/rabbitmq/pubkey.gpg
enabled=1
enabled_metadata=1
[pgdg94-source]
name=PostgreSQL 9.4 $releasever - src
baseurl=http://download.postgresql.org/pub/repos/yum/srpms/9.4/redhat/rhel-$releasever-$basearch
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG-94
Docker-CE Repo (Latest package)
[root@kube-master yum.repos.d]# cat docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://download.docker.com/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://download.docker.com/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://download.docker.com/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://download.docker.com/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
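Optionally, verify that the repos are visible before installing anything (a quick sanity check):
[root@kube-master yum.repos.d]# yum repolist enabled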
Step2:
[root@kube-master ~]# yum install kubeadm docker --disableexcludes=kubernetes (On both nodes. The Kubernetes repo above sets exclude=kube*, so the kube* packages install only when the exclude is lifted with --disableexcludes=kubernetes.)
Step3:
[root@kube-master ~]# systemctl start docker && systemctl enable docker (On both nodes)
Step4:
[root@kube-master ~]# kubeadm init
Initializing Kubernetes....!!!
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [mydevops localhost] and IPs [192.168.57.55 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [mydevops localhost] and IPs [192.168.57.55 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [mydevops kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.57.55]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.504164 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "mydevops" as an annotation
[mark-control-plane] Marking the node mydevops as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node mydevops as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9afua5.ch3neo6i2mhizhik
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
Step5: To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
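Alternatively, if you are running as root, you can simply set: export KUBECONFIG=/etc/kubernetes/admin.conf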
Note: the pod network and dashboard mentioned here are deployed in Steps 7 & 8:
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the addon options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node as root:
Step6: kubeadm join 192.168.57.55:6443 --token ooxzge.eze59rl5y1tgcahd --discovery-token-ca-cert-hash sha256:78db26bba0a150d5f404c70d9bf610323059b4d54f4216b51b7fdce6a781399f
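Bootstrap tokens expire after 24 hours by default. If you need to join a node later, print a fresh join command on the master:
[root@kube-master ~]# kubeadm token create --print-join-command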
Deploy Dashboard:
Step7: [root@kube-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Apply a network for Kubernetes to run pods: it is mandatory; without a pod network, nothing will work.
Step8: [root@kube-master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Get pod info across all namespaces:
[root@kube-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-47b2v               1/1     Running   0          3h4m
kube-system   coredns-86c58d9df4-sjd5w               1/1     Running   0          3h4m
kube-system   etcd-kube-master                       1/1     Running   0          3h9m
kube-system   kube-apiserver-kube-master             1/1     Running   0          3h9m
kube-system   kube-controller-manager-kube-master    1/1     Running   1          3h9m
kube-system   kube-proxy-47k99                       1/1     Running   0          3h2m
kube-system   kube-proxy-rq98w                       1/1     Running   0          3h9m
kube-system   kube-scheduler-kube-master             1/1     Running   1          3h9m
kube-system   kubernetes-dashboard-57df4db6b-ljd74   1/1     Running   0          3h7m
kube-system   weave-net-n7258                        2/2     Running   0          3h5m
kube-system   weave-net-t6zkf                        2/2     Running   0          3h2m
[root@kube-master ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
kube-master   Ready    master   38h   v1.13.2
kube-node1    Ready    <none>   38h   v1.13.2
Step9: [root@kube-master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.57.55:6443
KubeDNS is running at https://192.168.57.55:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Getting Secrets
Step10:
[root@kube-master ~]# kubectl -n kube-system get secret
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-bdczv              kubernetes.io/service-account-token   3      38h
bootstrap-signer-token-bvngc                     kubernetes.io/service-account-token   3      38h
certificate-controller-token-rb7jg               kubernetes.io/service-account-token   3      38h
clusterrole-aggregation-controller-token-ssjhr   kubernetes.io/service-account-token   3      38h
coredns-token-g7v6g                              kubernetes.io/service-account-token   3      38h
cronjob-controller-token-qvf6s                   kubernetes.io/service-account-token   3      38h
daemon-set-controller-token-g5vzd                kubernetes.io/service-account-token   3      38h
default-token-gh8mp                              kubernetes.io/service-account-token   3      38h
deployment-controller-token-g6mgr                kubernetes.io/service-account-token   3      38h
disruption-controller-token-85jq2                kubernetes.io/service-account-token   3      38h
Step11:
[root@kube-master ~]# kubectl -n kube-system describe secret deployment-controller-token-g6mgr
Name:         deployment-controller-token-g6mgr
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: deployment-controller
              kubernetes.io/service-account.uid: 92cf9997-20c9-11e9-8a6a-000c29161634
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tZzZtZ3IiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTJjZjk5OTctMjBjOS0xMWU5LThhNmEtMDAwYzI5MTYxNjM0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.EcLsGiO31nCrHwxGFEbozBwmugcTbnRmuqIuGn8uYE3QzJ8y-bIzgcnEWJPf_sTMEJsvRBknznACI8yaF3dFSSRHUnSmtgdPAVm5qRsh2sU41i7ePQqR-46ekkxh0DT2rNE4V7y5_SOyaj6XHxqjqdCFw2-87PYq3IbwX3z2Sm86aKNtHcmVTaq655-c6J_DuBUE9eu1i4vy14p7mvgY34mt4t4Nzrvr4LiR2p2e1dS9TYsiTQ-LTUGSUd0A_vQ0GT-7wtAGAQdz0mOc3Mg9StJCrT6j_WpJFHMCIsWjzac8tWwdzHzuEWk3aVdHJ4bxi_Pl09vp6KzW3hrJ6pbD_w
Step12: Use the URL below to access the Dashboard, and log in with the token from Step 11.
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
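Note that this localhost:8001 address is served by kubectl proxy, so start the proxy first on the machine whose browser you will use:
[root@kube-master ~]# kubectl proxy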
For reference, the Docker service status on the worker node:
[root@kube-node1 ~]# service docker status
Redirecting to /bin/systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-01-25 22:58:13 IST; 4h 17min ago
     Docs: http://docs.docker.com
 Main PID: 8612 (dockerd-current)
    Tasks: 74
   CGroup: /system.slice/docker.service
           ├─ 8612 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupd...
           ├─10309 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-...
           ├─21132 /usr/bin/docker-containerd-shim-current 4617e76a7cec2c6dafada0fa91935103f4c70d943acd704cd5de5bbed0b1a07c /var/run/docker/libcontainerd/4617e76a7ce...
           ├─21133 /usr/bin/docker-containerd-shim-current e112c43f4c1fb611ce146935dafa5532b76238a580e97bf05787d01618ec22b4 /var/run/docker/libcontainerd/e112c43f4c1...
           ├─21242 /usr/bin/docker-containerd-shim-current 000c560b1b03d71dbdb7b935c1dbeb67dab2f66ac558e52007a6f3f8fae65a32 /var/run/docker/libcontainerd/000c560b1b0...
           ├─21452 /usr/bin/docker-containerd-shim-current debacc11e49ab2208b72856d2a9b23789bf1f11d2604c001b2c6be6e2562b816 /var/run/docker/libcontainerd/debacc11e49...
           └─21877 /usr/bin/docker-containerd-shim-current e53885a12f8c9908981bfed695258865770803a5353c8746f0cafa3a189a8b11 /var/run/docker/libcontainerd/e53885a12f8...
Jan 26 02:53:20 kube-node1 dockerd-current[8612]: DEBU: 2019/01/25 21:23:20.108537 EVENT UpdatePod {"metadata":{"creationTimestamp":"2019-01-25T17:54:35Z","...:"1037","
Jan 26 02:53:20 kube-node1 dockerd-current[8612]: DEBU: 2019/01/25 21:23:20.108968 EVENT UpdatePod {"metadata":{"annotations":{"kubernetes.io/config.hash":"...o/config.
Jan 26 02:53:20 kube-node1 dockerd-current[8612]: DEBU: 2019/01/25 21:23:20.109554 EVENT UpdatePod {"metadata":{"creationTimestamp":"2019-01-25T17:54:11Z","..."resource
Jan 26 02:53:20 kube-node1 dockerd-current[8612]: DEBU: 2019/01/25 21:23:20.109963 EVENT UpdatePod {"metadata":{"creationTimestamp":"2019-01-25T17:54:35Z","...:"1039","
Jan 26 02:53:20 kube-node1 dockerd-current[8612]: DEBU: 2019/01/25 21:23:20.141862 EVENT UpdatePod {"metadata":{"annotations":{"kubernetes.io/config.hash":"...onfig.sou
Jan 26 02:53:21 kube-node1 dockerd-current[8612]: ERROR: logging before flag.Parse: W0125 21:23:21.142902 21894 reflector.go:341] github.com/weaveworks/we...7 (16261)
Jan 26 03:11:07 kube-node1 dockerd-current[8612]: DEBU: 2019/01/25 21:41:07.210549 EVENT UpdatePod {"metadata":{"annotations":{"kubernetes.io/config.hash":"...onfig.sou
Jan 26 03:11:08 kube-node1 dockerd-current[8612]: DEBU: 2019/01/25 21:41:08.255147 EVENT UpdatePod {"metadata":{"annotations":{"kubernetes.io/config.hash":"...onfig.sou
Jan 26 03:13:11 kube-node1 dockerd-current[8612]: I0125 21:43:11.947641 1 trace.go:76] Trace[1446478652]: "iptables restore" (started: 2019-01-25 21:4...746816s):
Jan 26 03:13:11 kube-node1 dockerd-current[8612]: Trace[1446478652]: [2.272746816s] [2.272718746s] END
Note: the token above does not grant complete privileges, so the dashboard will show authorization errors. To avoid them, create a cluster-admin service account:
[root@kube-master ~]# kubectl create serviceaccount cluster-admin-dashboard-sa
serviceaccount/cluster-admin-dashboard-sa created
[root@kube-master ~]# kubectl create clusterrolebinding cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=default:cluster-admin-dashboard-sa
Then use the token of the newly created cluster-admin service account:
[root@kube-master ~]# kubectl get secret | grep cluster-admin-dashboard-sa
cluster-admin-dashboard-sa-token-6kh4m   kubernetes.io/service-account-token   3   65s
[root@kube-master ~]# kubectl describe secret cluster-admin-dashboard-sa-token-6kh4m
Name:         cluster-admin-dashboard-sa-token-6kh4m
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: cluster-admin-dashboard-sa
              kubernetes.io/service-account.uid: 00169274-22ff-11e9-8a6a-000c29161634
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  7 bytes
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhLXRva2VuLTZraDRtIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMDAxNjkyNzQtMjJmZi0xMWU5LThhNmEtMDAwYzI5MTYxNjM0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2x1c3Rlci1hZG1pbi1kYXNoYm9hcmQtc2EifQ.RtyduaHofmSCQcuR_24BL2Omp8qOjsXHLwR0lDVyKRWx7tBscXlLnDsFXLvV6tdCcxstEigiMthAbChdAMJRszGW0Ve41ZFNW2g37dmKfBzlN2pF26q5R1hu9JnOiMn43ZZNe-9irdF9LtnhOzR1IwB9qUFHjcc3Eq586n5oCvq1AhyMptO14PgEOEbzPQ_xu90bIAHBNQRS-PAOdoi4VOqCBdg4157zLFYa6j0MQpliG2o_4MJgHS1j0x_XYzEtD3krlhAwxlWvUw4xwcZd22SYJaLWt3Axb8pPLNoQJmAkgdo2zcBMUBUYoD_mRY4Cc240PI_0HGfzhPFsy9-Drw
Dashboard View:
**** END: Basic setup and installation completed.
Error: [etcdctl] Error: context deadline exceeded.
Solution: include all three flags: --cert, --key, and --cacert. Omitting --cacert produces this error, so always use the form below, regardless of whether "etcdctl --version" reports 2 or 3.
ETCDCTL_API=3 etcdctl snapshot save etcd-backup.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
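For completeness, a matching restore sketch (restore operates on the local snapshot file, so no certificate flags are needed; /var/lib/etcd-restore is an assumed target directory):
ETCDCTL_API=3 etcdctl snapshot restore etcd-backup.db --data-dir /var/lib/etcd-restore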
Error: kubelet.go:2266] node "mydevops" not found
Error: error execution phase upload-config/kubelet: Error writing Cri Socket information for the control-plane node: timed out waiting for the condition
Solution: run "kubeadm reset", then run "kubeadm init" again.
Error: The connection to the server localhost:8080 was refused - did you specify the right host or port?
Solution: if the cluster was working before the host rebooted, run "export KUBECONFIG=$HOME/admin.conf".
If you joined a node to the cluster, copy admin.conf to the node's home directory first, then run the same command there.
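To make the setting survive logins, you can append it to the shell profile (a sketch, assuming bash):
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bash_profile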
Error: you have upgraded your master and node, but both still show the lower version in "kubectl get nodes".
[root@kube-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready master 47d v1.13.2
kube-node1 Ready <none> 15h v1.13.2
Solution: run "service kubelet restart" on each node; the VERSION column reports the kubelet version, which refreshes only after the kubelet restarts.
Error: network for pod "": NetworkPlugin cni failed to set up pod "" network: unable to allocate IP address: Post dial tcp 127.0.0.1:6784: connect: connection refused
Solution: kubectl exec -n kube-system weave-net-xxxx -c weave -- /home/weave/weave --local rmpeer <etcd-kube-master> (remove the stale peer)
This frees the IPs held by the stale peer. Run it for every weave-net pod, then try running pods again.
Error: Kubernetes pod gets recreated once deleted.
Solution: the pod is owned by a deployment, so find that deployment and delete it instead:
[root@kube-master ~]# kubectl get deployments --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default centos7 0/1 1 0 59m
default jenkins 1/1 1 1 21h
default nginx 1/1 1 1 21h
default ubuntu 0/1 1 0 37m
kube-system coredns 2/2 2 2 47d
kube-system heapster 1/1 1 1 96m
kube-system kubernetes-dashboard 1/1 1 1 47d
kube-system monitoring-grafana 1/1 1 1 96m
kube-system monitoring-influxdb 1/1 1 1 96m
[root@kube-master ~]# kubectl delete -n default deployment centos7
deployment.extensions "centos7" deleted
[root@kube-master ~]# kubectl delete -n default deployment jenkins
deployment.extensions "jenkins" deleted
[root@kube-master ~]# kubectl delete -n default deployment nginx
deployment.extensions "nginx" deleted
[root@kube-master ~]# kubectl delete -n default deployment ubuntu
deployment.extensions "ubuntu" deleted
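To check which controller owns a pod before deleting anything, one option is the ownerReferences field (the pod name is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}'
For a Deployment-managed pod this prints the owning ReplicaSet; describing that ReplicaSet in turn reveals the Deployment.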
Error: Unable to use a TTY - container ubuntu did not allocate one
Solution: in the container spec, set tty and stdin to true:
containers:
- name: ubuntu
  image: ubuntu
  tty: true
  stdin: true
Error: Back-off restarting failed container
Solution: the container starts but exits because nothing is running in its terminal; to keep it busy, run your YAML with a [command:] entry:
[root@kube-master ~]# cat ubuntu2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    tty: true
    stdin: true
    command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
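A quick way to try it out (assuming the file above):
kubectl apply -f ubuntu2.yaml
kubectl exec -it ubuntu -- /bin/bash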
Error: 1. Weave Net is not showing any free IPs.
2. Grafana services are fine, but you still cannot connect to the GUI on port 3000.
Solution:
[root@kube-master bin]# kubectl exec -n kube-system weave-net-n7258 -c weave -- /home/weave/weave --local status ipam
2e:fc:d4:8d:42:a2(kube-master) 786431 IPs (75.0% of total) (25 active)
a2:f9:d9:cf:bf:ed(kube-node1) 262145 IPs (25.0% of total) - unreachable!
kubectl exec -n kube-system weave-net-n7258 -c weave -- /home/weave/weave --local rmpeer kube-node1
kubectl exec -n kube-system weave-net-n7258 -c weave -- /home/weave/weave --local forget kube-node1
kubectl exec -n kube-system weave-net-n7258 -c weave -- /home/weave/weave --local reset --force
[root@kube-master bin]# kubectl exec -n kube-system weave-net-n7258 -c weave -- /home/weave/weave --local status ipam
a2:f9:d9:cf:bf:ed(kube-node1) 1048576 IPs (100.0% of total)
Error: error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "https://:10250/stats/container/": request failed - "403 Forbidden",
response: "Forbidden (user=system:serviceaccount:kube-system:heapster, verb=create, resource=nodes, subresource=stats)"
Solution:
1. Set a retention policy for the influxdb database:
curl -i -XPOST 'http://<influxdb_IP>:8086/query' --data-urlencode 'q=CREATE RETENTION POLICY "default" ON "k8s" DURATION INF REPLICATION 1 DEFAULT'
2. In the ClusterRoleBinding for Heapster, change:
#name: system:heapster <<<<<---From
name: cluster-admin <<<<<<---To
3. In the Heapster deployment's command section, change:
- --source=kubernetes:https://kubernetes.default <<<<<---From
- --source=kubernetes:https://kubernetes.default??useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true <<<<<<---To
4. Try adding the variables below; you may not need them.
export KUBELET_HOST=0.0.0.0
export KUBELET_PORT="--port=10255"
export HOSTNAME_OVERRIDE=<master's IP>
After adding Heapster, InfluxDB & Grafana:
[root@kube-master ~]# kgp (an alias here for "kubectl get pods --all-namespaces")
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx-app-6674df7c66-jpscl 1/1 Running 0 23h
default ubuntu 1/1 Running 0 23h
kube-system coredns-86c58d9df4-47b2v 1/1 Running 9 49d
kube-system coredns-86c58d9df4-sjd5w 1/1 Running 9 49d
kube-system etcd-kube-master 1/1 Running 9 49d
kube-system heapster-f8ffcc68-zth62 1/1 Running 0 32m
kube-system kube-apiserver-kube-master 1/1 Running 8 47h
kube-system kube-controller-manager-kube-master 1/1 Running 10 47h
kube-system kube-proxy-lbxwp 1/1 Running 7 47h
kube-system kube-proxy-nzqls 1/1 Running 4 46h
kube-system kube-scheduler-kube-master 1/1 Running 10 47h
kube-system kubernetes-dashboard-57df4db6b-ljd74 1/1 Running 9 49d
kube-system monitoring-grafana-78fc76bc78-9tkc9 1/1 Running 0 32m
kube-system monitoring-influxdb-b54595764-5wp4k 1/1 Running 0 32m
kube-system weave-net-767qm 2/2 Running 11 46h
kube-system weave-net-n7258 2/2 Running 25 49d
[root@kube-master ~]# kubectl get deployments --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default nginx-app 1/1 1 1 23h
kube-system coredns 2/2 2 2 49d
kube-system heapster 1/1 1 1 34m
kube-system kubernetes-dashboard 1/1 1 1 49d
kube-system monitoring-grafana 1/1 1 1 34m
kube-system monitoring-influxdb 1/1 1 1 34m
[root@kube-master ~]# kubectl logs heapster-f8ffcc68-zth62 -n kube-system
I0315 20:03:35.186275 1 heapster.go:78] /heapster --source=kubernetes:https://kubernetes.default??useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
I0315 20:03:35.186338 1 heapster.go:79] Heapster version v1.5.4
I0315 20:03:35.186572 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default" and version v1
I0315 20:03:35.186595 1 configs.go:62] Using kubelet port 10250
E0315 20:03:40.210729 1 influxdb.go:297] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb.kube-system.svc:8086" - Get http://monitoring-influxdb.kube-system.svc:8086/ping: dial tcp 10.102.156.201:8086: getsockopt: connection refused, will retry on use
I0315 20:03:40.210794 1 influxdb.go:312] created influxdb sink with options: host:monitoring-influxdb.kube-system.svc:8086 user:root db:k8s
I0315 20:03:40.210834 1 heapster.go:202] Starting with InfluxDB Sink
I0315 20:03:40.210843 1 heapster.go:202] Starting with Metric Sink
I0315 20:03:40.220708 1 heapster.go:112] Starting heapster on port 8082
I0315 20:04:05.325476 1 influxdb.go:274] Created database "k8s" on influxDB server at "monitoring-influxdb.kube-system.svc:8086"
[root@kube-master ~]# kubectl logs monitoring-influxdb-b54595764-5wp4k -n kube-system |tail -10
[httpd] 10.44.0.0 - root [15/Mar/2019:20:31:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" 41aa3a9c-4761-11e9-8044-000000000000 31576
[httpd] 10.44.0.0 - root [15/Mar/2019:20:32:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" 656c1196-4761-11e9-8045-000000000000 33423
[httpd] 10.44.0.0 - root [15/Mar/2019:20:33:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" 8931dd95-4761-11e9-8046-000000000000 28975
[I] 2019-03-15T20:33:41Z retention policy shard deletion check commencing service=retention
[httpd] 10.44.0.0 - root [15/Mar/2019:20:34:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" acf8c2c9-4761-11e9-8047-000000000000 37833
[httpd] 10.44.0.0 - root [15/Mar/2019:20:35:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" d0ba9e88-4761-11e9-8048-000000000000 23738
[httpd] 10.44.0.0 - root [15/Mar/2019:20:36:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" f47f0190-4761-11e9-8049-000000000000 41776
[httpd] 10.44.0.0 - root [15/Mar/2019:20:37:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" 18403fd7-4762-11e9-804a-000000000000 11886
[httpd] 10.44.0.0 - root [15/Mar/2019:20:38:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" 3c053c80-4762-11e9-804b-000000000000 24869
[httpd] 10.44.0.0 - root [15/Mar/2019:20:39:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.5.4" 5fcaa856-4762-11e9-804c-000000000000 35169
[root@kube-master ~]# kubectl logs monitoring-grafana-78fc76bc78-9tkc9 -n kube-system |tail -10
t=2019-03-15T20:03:36+0000 lvl=info msg="Created default admin user: admin"
t=2019-03-15T20:03:36+0000 lvl=info msg="Starting plugin search" logger=plugins
t=2019-03-15T20:03:36+0000 lvl=warn msg="Plugin dir does not exist" logger=plugins dir=/usr/share/grafana/data/plugins
t=2019-03-15T20:03:36+0000 lvl=info msg="Plugin dir created" logger=plugins dir=/usr/share/grafana/data/plugins
t=2019-03-15T20:03:36+0000 lvl=info msg="Initializing CleanUpService" logger=cleanup
t=2019-03-15T20:03:36+0000 lvl=info msg="Initializing Alerting" logger=alerting.engine
t=2019-03-15T20:03:36+0000 lvl=info msg="Initializing Stream Manager"
t=2019-03-15T20:03:36+0000 lvl=info msg="Initializing HTTP Server" logger=http.server address=0.0.0.0:3000 protocol=http subUrl= socket=
Connected to the Grafana dashboard; the datasource for the Grafana dashboard is now set.
Adding an Ingress for the monitoring traffic:
cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: monitoring.kube-system.mykubernetes.host
    http:
      paths:
      - path:
        backend:
          serviceName: monitoring-grafana
          servicePort: 80
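Apply it and point the hostname at a node where an nginx ingress controller is listening (a sketch; it assumes an nginx ingress controller is already installed, and the IP is a placeholder):
kubectl apply -f ingress.yaml
echo "<ingress-node-IP> monitoring.kube-system.mykubernetes.host" >> /etc/hosts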