How to Install Kubernetes on CentOS 7

From the koorka knowledge base

Installing Kubernetes with yum

System information and network plan:

Hosts:
kube-master.koorka.cn = 192.168.1.230
kube-node01.koorka.cn = 192.168.1.231
kube-node02.koorka.cn = 192.168.1.232
kube-node03.koorka.cn = 192.168.1.233

Preparation:

  • Add the master and nodes to /etc/hosts on all machines (not needed if the hostnames are already in DNS)
echo "192.168.1.230  kube-master.koorka.cn
192.168.1.231  kube-node01.koorka.cn
192.168.1.232  kube-node02.koorka.cn
192.168.1.233  kube-node03.koorka.cn" >> /etc/hosts
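If the setup may be re-run, it is worth guarding the append so entries are not duplicated. A minimal sketch using the names from the network plan above (a temp file stands in for /etc/hosts here, purely for illustration):

```shell
# Append a host entry only if the hostname is not already present,
# so re-running setup does not duplicate lines.
HOSTS_FILE=$(mktemp)    # stand-in for /etc/hosts in this demo

add_host() {
    grep -qw "$2" "$HOSTS_FILE" || printf '%s  %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host 192.168.1.230 kube-master.koorka.cn
add_host 192.168.1.231 kube-node01.koorka.cn
add_host 192.168.1.230 kube-master.koorka.cn   # duplicate call: no-op

grep -c kube-master "$HOSTS_FILE"              # prints 1
```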
  • Disable the firewall on the master and all nodes, as Docker does not play well with other firewall rule managers. CentOS will not let you disable the firewall while SELinux is enforcing, so SELinux must be switched off first.
setenforce 0
systemctl disable iptables firewalld
systemctl stop iptables firewalld
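Note that setenforce 0 only lasts until the next reboot. To make the change persistent, also switch the mode in /etc/selinux/config on each machine. A sketch of the edit, shown against a temp copy for illustration (apply the same sed to the real file):

```shell
# Make the SELinux mode change survive reboots by editing /etc/selinux/config.
# A temp copy of a typical config is used here for illustration.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$CFG"

grep '^SELINUX=' "$CFG"    # prints SELINUX=permissive
```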

Installing and configuring the etcd service on the master

  • Install the etcd package

etcd is a distributed, consistent key-value (KV) store for shared configuration and service discovery. At the time of writing, the latest stable release is 2.3.0; see the [project home page] and [GitHub] for details. etcd is an open-source project started by CoreOS, licensed under the Apache License.

etcd is similar in function to ZooKeeper; its main use cases include:

  1. Configuration management
  2. Service registration and discovery
  3. Leader election
  4. Application scheduling
  5. Distributed queues
  6. Distributed locks

Official documentation: https://coreos.com/etcd/docs/latest/

In this walkthrough, install etcd with:
yum -y install etcd
  • Configure etcd
Edit /etc/etcd/etcd.conf as follows:
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://kube-master.koorka.cn:2379"
  • Start and enable the etcd service:
systemctl enable etcd
systemctl start etcd
systemctl status etcd -l
  • Store the flannel network configuration (used later by the flannel service)
Create the flannel network configuration file flannel-config-vxlan.json with the following content:
{
    "Network": "17.30.0.0/16",
    "SubnetLen": 24,
    "Backend": {
         "Type": "vxlan",
         "VNI": 1
     }
}
Store the network configuration in etcd:
etcdctl set /koorka.cn/network/config < flannel-config-vxlan.json
Verify the stored value:
etcdctl get /koorka.cn/network/config
curl -L http://kube-master.koorka.cn:2379/v2/keys/koorka.cn/network/config
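If flanneld later fails to start, a common cause is a malformed network config. You can validate the file as JSON locally before (or after) storing it; a quick sketch, assuming python3 is available (the file is recreated in a temp path here so the check is self-contained):

```shell
# Validate the flannel network config as JSON before trusting it to etcd.
f=$(mktemp)
cat > "$f" <<'EOF'
{
    "Network": "17.30.0.0/16",
    "SubnetLen": 24,
    "Backend": {
         "Type": "vxlan",
         "VNI": 1
     }
}
EOF

python3 -m json.tool "$f" > /dev/null && echo "config: valid JSON"
```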

Installing and configuring the flanneld service on the master

Flannel is an overlay-network service designed by the CoreOS team for Kubernetes. In short, it ensures that Docker containers created on different hosts in the cluster receive virtual IP addresses that are unique across the whole cluster.

Install with:
yum -y install flannel
Edit /etc/sysconfig/flanneld as follows:
FLANNEL_ETCD="http://kube-master.koorka.cn:2379"
FLANNEL_ETCD_KEY="/koorka.cn/network"
FLANNEL_OPTIONS="--ip-masq=true"
Note: --ip-masq=true makes flannel create a POSTROUTING NAT (masquerade) rule for traffic leaving the overlay network, which containers need in order to reach hosts outside it.

FLANNEL_ETCD_KEY is the key prefix under which the network configuration was stored; you can choose any path, as long as it matches the key used with etcdctl above.

Start and verify the service:
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld -l
Check the generated configuration with cat /run/flannel/subnet.env; it should look like:
FLANNEL_NETWORK=17.30.0.0/16
FLANNEL_SUBNET=17.30.36.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
A virtual network device named flannel.1 is created on the system; inspect it with ifconfig flannel.1:
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 17.30.36.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether be:6d:d4:2b:4e:b6  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
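subnet.env is a plain shell fragment, so other services (and your own scripts) can simply source it. A small sketch using the example values above; note the MTU: 1500 minus the 50 bytes of VXLAN encapsulation overhead leaves 1450, which matches flannel.1 above.

```shell
# subnet.env is sourceable shell; recreate the example above and consume it.
ENV_FILE=$(mktemp)
cat > "$ENV_FILE" <<'EOF'
FLANNEL_NETWORK=17.30.0.0/16
FLANNEL_SUBNET=17.30.36.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

. "$ENV_FILE"
echo "node subnet: $FLANNEL_SUBNET (MTU $FLANNEL_MTU)"
# prints: node subnet: 17.30.36.1/24 (MTU 1450)
```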

Installing and configuring Docker and the flannel service on all nodes

  • Install and configure flannel
yum -y install flannel
Edit /etc/sysconfig/flanneld as follows (depending on the flannel package version, the variable names may instead be FLANNEL_ETCD and FLANNEL_ETCD_KEY, as on the master; check the comments in the shipped file):
FLANNEL_ETCD_ENDPOINTS="http://kube-master.koorka.cn:2379"
FLANNEL_ETCD_PREFIX="/koorka.cn/network"
FLANNEL_OPTIONS="--ip-masq=true"
Start the service and check its status:
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld -l
Check the generated configuration: cat /run/flannel/subnet.env
FLANNEL_NETWORK=17.30.0.0/16
FLANNEL_SUBNET=17.30.67.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
[root@kube-node01 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 17.30.67.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 6e:74:78:a9:f1:06  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  • Install and configure Docker
yum -y install docker
Start Docker and check its status:
systemctl enable docker
systemctl start docker
systemctl status docker -l
Check the Docker and flannel network bridge configuration: [root@kube-node01 ~]# ip -4 a|grep inet
    inet 127.0.0.1/8 scope host lo
    inet 192.168.1.231/24 brd 192.168.1.255 scope global ens33
    inet 17.30.67.0/32 scope global flannel.1
    inet 17.30.67.1/24 scope global docker0
Note that docker0 (17.30.67.1/24) sits inside the subnet that flannel leased to this node (flannel.1).

Repeat the installation and configuration above on every node.

Verifying container networking across nodes

On node01 and node02:

docker run --rm -it centos:7.3 /bin/bash

Run ifconfig inside the container on node01 to find its assigned IP, then ping that IP from the container on node02; the ping should succeed. Also test external connectivity: ping www.koorka.com should succeed as well.

Installing and configuring the Kubernetes master

yum -y install kubernetes-master
Create a secret key file that will be used to authenticate communication between the kube-apiserver and kube-controller-manager services:

[root@kube-master ~]# openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048

Edit /etc/kubernetes/apiserver as follows:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master.koorka.cn:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/serviceaccount.key"
Edit /etc/kubernetes/controller-manager as follows:
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/etc/kubernetes/serviceaccount.key"
Start kube-apiserver, kube-controller-manager, and kube-scheduler, check their status, and enable them so they start automatically at boot:
for SERVICES in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES -l
done
Verify on the master: [root@kube-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"a55267932d501b9fbd6d73e5ded47d79b5763ce5", GitTreeState:"clean", BuildDate:"2017-04-14T13:36:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"a55267932d501b9fbd6d73e5ded47d79b5763ce5", GitTreeState:"clean", BuildDate:"2017-04-14T13:36:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
[root@kube-master ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Installing and configuring kubernetes-node on each node

yum -y install kubernetes-node
Edit /etc/kubernetes/config; the settings in this file are used by both the kubelet and kube-proxy services. Content:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://kube-master.koorka.cn:8080"
Edit /etc/kubernetes/kubelet as follows (adjust --hostname-override for each node):
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=kube-node01.koorka.cn"
KUBELET_API_SERVER="--api-servers=http://kube-master.koorka.cn:8080"
KUBELET_ARGS=""
Start the services and check their status:
systemctl start kubelet kube-proxy
systemctl enable kubelet kube-proxy
systemctl status kubelet kube-proxy -l
Repeat the installation and configuration steps above on all other nodes.

Installing the Kubernetes client (kubectl)

  • Configure kubectl
    kubectl config set-cluster default-cluster --server=http://kube-master.koorka.cn:8080
    kubectl config set-context default-context --cluster=default-cluster --user=default-admin
    kubectl config use-context default-context
    
    To access the cluster over SSL instead:
kubectl config set-cluster default-cluster --server=https://kube-master.koorka.cn:6443
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
kubectl config set-cluster default-cluster --certificate-authority=keys/ca.pem
kubectl config set-credentials default-admin --client-certificate=keys/apiserver.pem
kubectl config set-credentials default-admin --client-key=keys/apiserver-key.pem
cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/zhaoxiong/kubernetes/keys/ca.pem
    server: https://kube-master.koorka.cn:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-admin
  user:
    client-certificate: /Users/zhaoxiong/kubernetes/keys/apiserver.pem
    client-key: /Users/zhaoxiong/kubernetes/keys/apiserver-key.pem
  • Check that the cluster can see the nodes (on the master)
    $ kubectl get nodes
    NAME                    STATUS    AGE       VERSION
    kube-node01.koorka.cn   Ready     1d        v1.5.2
    kube-node02.koorka.cn   Ready     1d        v1.5.2
    

Working around blocked access to Google's container registry

When creating a ReplicationController or similar resource, you may see errors like the following:

10m       12m       3         mysql-rc-fzbkg   Pod                 Warning   FailedSync   kubelet, kube-node02.yfq.com   Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause-amd64:3.0, this may be because there are no credentials on this request.  details: (Get https://gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout)"

The cause is that gcr.io is unreachable from these servers. The workaround:

Use a Docker registry that your servers can reach, or run your own private registry. Then, on an intermediate machine that can reach Google's servers, docker pull the image, tag it, and push it to the private registry.
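The pull/tag/push workflow can be sketched as below. The registry name docker-registry.koorka.com is the one configured later in this document; the echo lines simply show the commands to run on the machine that has access to gcr.io:

```shell
# Compute the mirrored image name from the blocked gcr.io name,
# then pull/tag/push on a machine with access to gcr.io.
SRC=gcr.io/google_containers/pause-amd64:3.0
DST=docker-registry.koorka.com/${SRC#gcr.io/}   # strip the gcr.io/ prefix

echo "docker pull $SRC"
echo "docker tag  $SRC $DST"
echo "docker push $DST"
```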

On all nodes that run Docker containers:

Edit /etc/kubernetes/kubelet and add:
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker-registry.koorka.com/google_containers/pause-amd64:3.0"
Create /etc/docker/daemon.json with the following content:
{ "insecure-registries":["docker-registry.koorka.com"] }
Restart the docker and kubelet services:
systemctl restart docker kubelet

Kubernetes SSL configuration (optional)

SSL must be configured if you want to install the Kubernetes dashboard.

Create a self-signed CA certificate

openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=koorka.com/C=CN/ST=TianJin/L=TianJinNanKai/O=Koorka Ltd/OU=Technical"

Create a certificate signed by the CA

openssl configuration file (save as openssl.cnf):
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = CN
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = TianJin
localityName = Locality Name (eg, city)
localityName_default = TianJinNanKai
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Domain Control Validated
commonName = Common Name (eg, your name or your server's hostname)
commonName_default = Koorka Company Ltd
commonName_max = 64

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = *.koorka.cn
DNS.2 = localhost
IP.1 = 127.0.0.1
Generate the certificate with:
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
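Before copying the certificates out, it is worth confirming that apiserver.pem really chains back to the CA. The check is openssl verify; the sketch below demonstrates it on a throwaway CA/certificate pair (placeholder names, 2048-bit keys as above) so it can be run anywhere. Run the same verify against the real ca.pem/apiserver.pem.

```shell
# Demonstrate the chain check on a throwaway CA and server certificate.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/ca-key.pem" 2048 2>/dev/null
openssl req -x509 -new -nodes -key "$tmp/ca-key.pem" -days 1 \
    -out "$tmp/ca.pem" -subj "/CN=demo-ca"
openssl genrsa -out "$tmp/server-key.pem" 2048 2>/dev/null
openssl req -new -key "$tmp/server-key.pem" -out "$tmp/server.csr" \
    -subj "/CN=demo-server"
openssl x509 -req -in "$tmp/server.csr" -CA "$tmp/ca.pem" \
    -CAkey "$tmp/ca-key.pem" -CAcreateserial -out "$tmp/server.pem" \
    -days 1 2>/dev/null

# The same command for the real files: openssl verify -CAfile ca.pem apiserver.pem
openssl verify -CAfile "$tmp/ca.pem" "$tmp/server.pem"
```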
For more detail on certificate generation, see: How to create Signature certificate with openssl

Copy the certificates to the servers

for host in kube-master kube-node01 kube-node02 kube-node03
do
    ssh root@${host}.koorka.cn "mkdir -p /etc/kubernetes/ssl"
    scp ca.pem apiserver.pem apiserver-key.pem root@${host}.koorka.cn:/etc/kubernetes/ssl
done

Configure and restart the services

On the master, create the kubeconfig file:
vim /etc/kubernetes/cm-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/apiserver.pem
    client-key: /etc/kubernetes/ssl/apiserver-key.pem
contexts:
- context:
    cluster: local
    user: controllermanager
  name: kubelet-context
current-context: kubelet-context
Edit /etc/kubernetes/config:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=https://kube-master.koorka.cn:6443"
Edit /etc/kubernetes/apiserver:
KUBE_API_ADDRESS="--bind-address=0.0.0.0 --insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--secure-port=6443 --insecure-port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://kube-master.koorka.cn:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/ssl/ca.pem \
               --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
               --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \
               --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem"
Edit /etc/kubernetes/controller-manager (note: --master points at the insecure port 8080, which serves plain HTTP, so the scheme must be http, not https):
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
                              --root-ca-file=/etc/kubernetes/ssl/ca.pem \
                              --master=http://127.0.0.1:8080 \
                              --kubeconfig=/etc/kubernetes/cm-kubeconfig.yaml"
Edit /etc/kubernetes/scheduler:
KUBE_SCHEDULER_ARGS="--master=http://127.0.0.1:8080 \
                     --kubeconfig=/etc/kubernetes/cm-kubeconfig.yaml"
Restart the services:
systemctl restart kube-apiserver kube-controller-manager kube-scheduler
systemctl status -l kube-apiserver kube-controller-manager kube-scheduler
Configuration on the node servers
Edit /etc/kubernetes/worker-kubeconfig.yaml:
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/apiserver.pem
    client-key: /etc/kubernetes/ssl/apiserver-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
Edit /etc/kubernetes/config:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=https://kube-master.koorka.cn:6443"
Edit /etc/kubernetes/kubelet (adjust --hostname-override for each node):
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=kube-node01.koorka.cn"
KUBELET_API_SERVER="--api-servers=https://kube-master.koorka.cn:6443"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker-registry.koorka.com/google_containers/pause-amd64:3.0"
KUBELET_ARGS="--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \
              --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \
              --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"
Edit /etc/kubernetes/proxy:
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"
Restart the services:
systemctl restart docker kubelet kube-proxy
systemctl status -l docker kubelet kube-proxy
Verify that the SSL configuration works:
curl https://kube-master.koorka.cn:6443/api/v1/nodes --cert /etc/kubernetes/ssl/apiserver.pem --key /etc/kubernetes/ssl/apiserver-key.pem --cacert /etc/kubernetes/ssl/ca.pem

Installing the Kubernetes dashboard (Web UI)

The kubernetes dashboard container connects to kube-apiserver over SSL, so configure Kubernetes SSL (above) before installing it.

Download the pod manifest for the kubernetes dashboard.

Note: this walkthrough runs Kubernetes 1.5.2, so it installs the latest 1.5.x dashboard release, 1.5.1. The available dashboard releases are listed at https://github.com/kubernetes/dashboard/tags.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml
Edit the kubernetes-dashboard.yaml file and change the image value:
 image: docker-registry.koorka.com/google_containers/kubernetes-dashboard-amd64:v1.5.1
Create the kubernetes dashboard container with:

MacBook:kubernetes zhaoxiong$ kubectl create -f kubernetes-dashboard.yaml

References:

https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/

https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/Docker_And_Kubernetes_Cluster_on_Power_with_RHEL7_Part_1_Preparing_all_Node?lang=en