Docker Learning

Docker Learning: VMware Workstation - interconnecting multiple local VMs and the host network
Docker Learning: Building a Consul cluster with Docker
Docker Learning: A simple private Docker Hub
Docker Learning: Spring Boot on Docker
Docker Learning: Kubernetes - Cluster Deployment
Docker Learning: Kubernetes - Spring Boot Application

Introduction

Kubernetes, abbreviated K8s (the "8" stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, planning, updating, and maintenance.

Kubernetes is a container orchestration engine open-sourced by Google. It supports automated deployment, large-scale scalability, and containerized application management. When an application is deployed in a production environment, you usually run multiple instances of it so that requests can be load-balanced across them.

In Kubernetes we can create multiple containers, each running one application instance, and then manage, discover, and access this group of instances through the built-in load-balancing strategy, without operators having to do any complex manual configuration.

Basic Concepts

Almost every concept in Kubernetes is abstracted as a resource object managed by Kubernetes.

Master: the control node of a Kubernetes cluster, responsible for managing and controlling the whole cluster. The Master node runs the following components:
- kube-apiserver: the entry point for cluster control, exposing an HTTP REST API
- kube-controller-manager: the automated control center for all resource objects in the cluster
- kube-scheduler: responsible for scheduling Pods

Node: a worker node in the cluster. Its workload, mainly containerized applications, is assigned by the Master. A Node runs the following components:
- kubelet: creates, starts, monitors, restarts, and destroys Pods, and cooperates with the Master to implement the basic cluster-management functions
- kube-proxy: implements communication and load balancing for Kubernetes Services
- a container runtime that runs the containerized (Pod) applications

Pod: the most basic deployment and scheduling unit in Kubernetes. Each Pod consists of one or more business containers plus a root container (the Pause container). A Pod represents one instance of an application.

ReplicaSet: an abstraction over Pod replicas, used to scale Pods out and in.

Deployment: represents a deployment; internally it is implemented with a ReplicaSet. Creating a Deployment generates the corresponding ReplicaSet, which in turn creates the Pod replicas.

Service: the most important resource object in Kubernetes; a Service roughly corresponds to a microservice in a microservice architecture. A Service defines the access entry point for a service; callers reach the Pod replicas behind it through this address. The Service is tied to its backend Pods through a Label Selector, while the Deployment keeps the number of replicas constant, which is what makes the service scalable. (A concrete example follows the deployment-steps list below.)

Kubernetes consists of the following core components:
- etcd stores the state of the whole cluster; essentially a database
- apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery
- controller manager maintains cluster state: failure detection, auto scaling, rolling updates, and so on
- scheduler schedules resources, placing Pods onto the appropriate machines according to the configured scheduling policies
- kubelet maintains the container lifecycle and manages volumes (CSI) and networking (CNI)
- Container runtime manages images and actually runs Pods and containers (CRI)
- kube-proxy provides in-cluster service discovery and load balancing for Services

Besides these core components there are some recommended add-ons:
- kube-dns provides DNS for the whole cluster
- Ingress Controller provides an external entry point for services
- Heapster provides resource monitoring
- Dashboard provides a GUI

Component Communication

How the Kubernetes components communicate with each other:
- apiserver is responsible for all operations against etcd storage; only apiserver talks to the etcd cluster directly
- apiserver provides a unified REST API both internally (to the other components in the cluster) and externally (to users); all other components communicate through apiserver
- controller manager, scheduler, kube-proxy, and kubelet all watch for resource changes through the apiserver watch API and act on them accordingly
- every operation that updates resource state goes through the apiserver REST API
- apiserver also calls the kubelet API directly (logs, exec, attach, etc.); by default it does not verify the kubelet certificate, but this can be enabled with --kubelet-certificate-authority (GKE protects this traffic with SSH tunnels instead)

The canonical example is the flow for creating a Pod:
1. A user creates a Pod through the REST API
2. apiserver writes it to etcd
3. scheduler notices a Pod that is not yet bound to a Node, schedules it, and updates the Pod's Node binding
4. kubelet sees a new Pod scheduled to its Node and runs it through the container runtime
5. kubelet fetches the Pod status from the container runtime and updates it in apiserver

Cluster Deployment

Installing with the kubeadm tool:
1. Install kubelet, kubeadm, and docker via yum on the master and the nodes
2. Initialize the master: kubeadm init
3. Start a flannel pod on the master
4. Join the nodes to the cluster: kubeadm join
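To make the Deployment/Service relationship described above concrete, here is a minimal sketch, not part of the original walkthrough: a Deployment that keeps three Pod replicas running and a Service that finds them through a Label Selector. All names (demo-app, demo-svc, the app=demo label) are hypothetical, and nginx is just a placeholder image.

# Minimal sketch: a Deployment and a Service wired together by the label app=demo.
# Names and image are placeholders, not from the original article.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                 # the Deployment (via its ReplicaSet) keeps 3 Pod copies running
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo             # the label the Service selects on
    spec:
      containers:
      - name: demo
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo                 # Label Selector: ties the Service to the Pods above
  ports:
  - port: 80
EOF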
Prepare the Environment

CentOS 7  192.168.50.21  k8s-master
CentOS 7  192.168.50.22  k8s-node01
CentOS 7  192.168.50.23  k8s-node02

Set the hostname (on each of the three machines):

hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

Turn off the firewall:

systemctl stop firewalld.service

Configure the docker yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2 wget
cd /etc/yum.repos.d
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configure the kubernetes yum repository:

cd /opt/
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
cd /etc/yum.repos.d
vi kubernetes.repo

Enter the following:

[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

yum repolist

Install kubelet, kubeadm, and docker on the master and the nodes:

yum install docker
yum install kubelet-1.13.1
yum install kubeadm-1.13.1

Install kubectl on the master:

yum install kubectl-1.13.1

Configure docker

Configure the private registry and a registry mirror. For setting up the private registry, see https://www.cnblogs.com/woxpp/p/11871886.html

vi /etc/docker/daemon.json

{
    "registry-mirrors":[
        "http://hub-mirror.c.163.com"
    ],
    "insecure-registries":[
        "192.168.50.24:5000"
    ]
}

(Note the key is "registry-mirrors", plural; docker rejects an unknown key.)

Start docker:

systemctl daemon-reload
systemctl start docker
docker info

Initialize the master: kubeadm init

vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"

kubeadm init \
    --apiserver-advertise-address=192.168.50.21 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

What the flags mean:

--apiserver-advertise-address specifies which interface the Master uses to communicate with the other cluster nodes. If the Master has multiple interfaces it is best to set this explicitly; otherwise kubeadm picks the interface with the default gateway.

--pod-network-cidr specifies the Pod network range. Kubernetes supports several network solutions, and each has its own requirements for --pod-network-cidr; it is set to 10.244.0.0/16 here because we will use the flannel network solution, which requires this CIDR.

--image-repository: the default Kubernetes registry is k8s.gcr.io, and gcr.io is not reachable from inside China. Since version 1.13 this flag lets us point kubeadm at the Aliyun mirror instead: registry.aliyuncs.com/google_containers.

--kubernetes-version=v1.13.1 disables version detection. The default value, stable-1, triggers a download of the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (the latest at the time: v1.13.1) skips that network request.

During initialization, the step

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

means the image files are being downloaded, which is slow. The steps

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.002300 seconds

are also slow; this is normal and can be ignored.
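Since the image download dominates the init time, the images can be pulled ahead of time, as the preflight message itself suggests. This is just a convenience sketch; it reuses the `kubeadm config images list` and `kubeadm config images pull --config` commands that also appear in the troubleshooting section further below.

# See which images this kubeadm/kubernetes version needs
kubeadm config images list

# Pull them before running kubeadm init, using a kubeadm config file
# (the troubleshooting section below builds /opt/kubeadm-config.yaml for this)
kubeadm config images pull --config /opt/kubeadm-config.yaml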
The full output of a successful initialization:

[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.21]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.002300 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7ax0k4.nxpjjifrqnbrpojv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.50.21:6443 --token 7ax0k4.nxpjjifrqnbrpojv --discovery-token-ca-cert-hash sha256:95942f10859a71879c316e75498de02a8b627725c37dee33f74cd040e1cd9d6b
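After running the admin.conf steps from the message above, it is worth confirming that the control plane is actually up. These are standard kubectl commands, not part of the original output; note that the master reports NotReady until a Pod network add-on (flannel, step 3 of the overview) has been deployed.

# Configure kubectl for the current user, exactly as the init output instructs
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# The master shows NotReady until a Pod network add-on (e.g. flannel) is applied
kubectl get nodes

# The control-plane components run as Pods in the kube-system namespace
kubectl get pods -n kube-system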
Explanation of the initialization process:

1) [preflight] kubeadm runs its pre-initialization checks.
2) [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml".
3) [certificates] generates the various tokens and certificates.
4) [kubeconfig] generates the KubeConfig files, which kubelet needs to communicate with the Master.
5) [control-plane] installs the Master components, pulling their Docker images from the specified registry.
6) [bootstraptoken] generates the token; record it, since it is needed later when adding nodes with kubeadm join.
7) [addons] installs the add-ons kube-proxy and kube-dns (this version applies CoreDNS, as the log above shows).
8) The Kubernetes Master initialized successfully; the output explains how to configure kubectl access for a regular user.
9) The output explains how to install a Pod network.
10) The output explains how to register the other nodes with the cluster.

Possible problems:

[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 114.114.114.114:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
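The swap warning is already tolerated in this walkthrough through the KUBELET_EXTRA_ARGS="--fail-swap-on=false" setting made earlier. The alternative, not used in the original, is to disable swap outright; a minimal sketch:

# Turn swap off immediately
swapoff -a

# Keep it off across reboots by commenting out the swap entry in /etc/fstab
# (assumption: the swap line is the only fstab line containing "swap")
sed -i '/swap/s/^/#/' /etc/fstab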
To clear the service warnings, run:

systemctl enable docker.service
systemctl enable kubelet.service

The hostname warnings become fatal on the next run:

[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 114.114.114.114:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:

Fix them by configuring /etc/hosts:

cat >> /etc/hosts << EOF
192.168.50.21 k8s-master
192.168.50.22 k8s-node01
192.168.50.23 k8s-node02
EOF

Running the initialization command again may then report:

[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2    -- give the VM at least 2 CPUs
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

After increasing the VM's CPU count, reboot and run the initialization again:

kubeadm init \
    --apiserver-advertise-address=192.168.50.21 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

If the image pull gets stuck at this point: the docker.io registry mirrors Google's containers, so the required images can be pulled with the following commands. First check which images are needed:

kubeadm config images list

Then create a kubeadm config file:

[root@k8s-master opt]# vi kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.50.21
controlPlaneEndpoint: "192.168.50.20:16443"
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "172.168.0.0/16"

kubeadm config images pull --config /opt/kubeadm-config.yaml

Initialize the master:

kubeadm init --config=kubeadm-config.yaml --upload-certs

error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
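These "already exists" errors mean the earlier, partially completed init left its static Pod manifests behind in /etc/kubernetes/manifests. A common way out, not shown in the original, is to reset the machine's kubeadm state before initializing again:

# Undo everything the previous 'kubeadm init' did on this machine,
# including the static Pod manifests under /etc/kubernetes/manifests
kubeadm reset

# Then re-run the initialization
kubeadm init --config=kubeadm-config.yaml --upload-certs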