1. When the requested file or directory does not exist, redirect to an HTML file:

if ( !-e $request_filename ) {
    rewrite ^/(.*)$ /index.html last;
}

Or:

location / {
    # default: proxy to http://127.0.0.1:9000
    proxy_pass http://127.0.0.1:9000;
}
# When the URL matches ^\/(api), forward the request to http://127.0.0.1:9178, i.e. the server configured in the api upstream
location ~ ^\/(api) {
    proxy_pass http://api;
    proxy_set_header X-Real-IP $remote_addr;
    client_max_body_size 100m;
}

For example, to forward image requests to another server when the image cannot be found locally:

location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
    proxy_set_header Host apph.zhidekan.me;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    if (!-e $request_filename) {
        #proxy_cache_key $host$uri$is_args$args;
        proxy_pass http://apph;
        #proxy_pass_header Set-Cookie;
    }
    expires 30d;
}

where upstream.conf contains:

upstream apph {
    ip_hash;
    server 10.13.40.48:80 max_fails=2 fail_timeout=60s;
    #server 127.0.0.1:80 max_fails=2 fail_timeout=60s;
}
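After editing either snippet it is worth validating and reloading nginx before relying on it; a minimal sketch, assuming the nginx binary is on the PATH and you have permission to manage the server:

# Check the configuration files for syntax errors without touching the running server
nginx -t
# Gracefully reload the workers so the new rewrite/proxy rules take effect
nginx -s reload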
Kubernetes Cluster Installation Guide (v1.6)
This series documents every step of deploying a kubernetes cluster from binaries, rather than with automated tools such as kubeadm, and it enables TLS authentication for the whole cluster.
Throughout the deployment, the startup parameters of each component are listed in detail, the configuration files are provided, and their meaning and the problems you may run into are explained.
Once the deployment is complete you will understand how the system components interact, which in turn lets you troubleshoot real problems quickly.
This guide is therefore aimed at readers who already have some kubernetes background and want to learn the configuration and the way the system works by deploying it step by step.
The project repository provides the consolidated installation guide in markdown and PDF format; the PDF version is available for download.
Note: installing docker and a private image registry is not covered in this guide.
All configuration files are provided
The configuration files used by every component during cluster installation are included in the following directories:
- etc: environment-variable files for the services
- manifest: yaml manifests for the kubernetes add-ons
- systemd: systemd service unit files
Cluster details
- Kubernetes 1.6.0
- Docker 1.12.5 (installed with yum)
- Etcd 3.1.5
- Flanneld 0.7 with a vxlan network
- TLS-authenticated communication (all components: etcd, the kubernetes master and the nodes)
- RBAC authorization
- kubelet TLS BootStrapping
- cluster add-ons: kubedns, dashboard, heapster (influxdb, grafana), EFK (elasticsearch, fluentd, kibana)
- private docker registry: harbor (deploy it yourself; harbor provides an offline installer that can be started directly with docker-compose)
Steps
- Create the certificates and keys required for TLS communication
- Create the kubeconfig files
- Create a three-node highly available etcd cluster
- Install the kubectl command-line tool
- Deploy the highly available master cluster
- Deploy the node components
- kubedns add-on
- Dashboard add-on
- Heapster add-on
- EFK add-on
1. Creating the certificates and keys for TLS-encrypted communication between kubernetes components
The kubernetes components encrypt their communication with TLS certificates. This guide uses CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) and the other certificates.
The following certificate and key files will be generated:
- ca-key.pem
- ca.pem
- kubernetes-key.pem
- kubernetes.pem
- kube-proxy.pem
- kube-proxy-key.pem
- admin.pem
- admin-key.pem
The components use the certificates as follows:
- etcd: ca.pem, kubernetes-key.pem, kubernetes.pem;
- kube-apiserver: ca.pem, kubernetes-key.pem, kubernetes.pem;
- kubelet: ca.pem;
- kube-proxy: ca.pem, kube-proxy-key.pem, kube-proxy.pem;
- kubectl: ca.pem, admin-key.pem, admin.pem;
kube-controller-manager and kube-scheduler currently have to run on the same machine as kube-apiserver and communicate with it over the insecure port, so they do not need certificates.
Installing CFSSL
Option 1: install the prebuilt binaries

$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ chmod +x cfssl_linux-amd64
$ sudo mv cfssl_linux-amd64 /root/local/bin/cfssl
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x cfssljson_linux-amd64
$ sudo mv cfssljson_linux-amd64 /root/local/bin/cfssljson
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl-certinfo_linux-amd64
$ sudo mv cfssl-certinfo_linux-amd64 /root/local/bin/cfssl-certinfo
$ export PATH=/root/local/bin:$PATH
Option 2: install with the go command
Our systems already have Go 1.7.5 installed, so the following is even quicker:

$ go get -u github.com/cloudflare/cfssl/cmd/...
$ echo $GOPATH
/usr/local
$ ls /usr/local/bin/cfssl*
cfssl cfssl-bundle cfssl-certinfo cfssljson cfssl-newkey cfssl-scan

This places the cfssl-prefixed binaries under $GOPATH/bin.
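Whichever installation method you used, a quick sanity check confirms the tools are on the PATH (the version output will vary with your build):

# Print the cfssl release to confirm the binary runs
cfssl version
# Confirm the companion tools were installed alongside it
which cfssljson cfssl-certinfo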
Creating the CA (Certificate Authority)
Create the CA configuration file
Use the printed defaults as templates and then create ca-config.json with the following content:

$ mkdir /root/ssl
$ cd /root/ssl
$ cfssl print-defaults config > config.json
$ cfssl print-defaults csr > csr.json
$ cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
Field notes:
- ca-config.json: several profiles can be defined, each with its own expiry, usages and other parameters; a specific profile is selected later when signing a certificate;
- signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
- server auth: a client can use this CA to verify certificates presented by servers;
- client auth: a server can use this CA to verify certificates presented by clients;
Create the CA certificate signing request

$ cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

- "CN": Common Name; kube-apiserver extracts this field from the certificate and uses it as the requesting User Name; browsers use it to check whether a website is legitimate;
- "O": Organization; kube-apiserver extracts this field and uses it as the group (Group) the requesting user belongs to;
Generate the CA certificate and private key

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
Creating the kubernetes certificate
Create the kubernetes certificate signing request

$ cat kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.20.0.112",
    "172.20.0.113",
    "172.20.0.114",
    "172.20.0.115",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
- If the hosts field is not empty it must list every IP and domain name that is authorized to use this certificate. Because this certificate is later used by both the etcd cluster and the kubernetes master cluster, the hosts above include the etcd and kubernetes master host IPs as well as the cluster IP of the kubernetes service (normally the first IP of the service-cluster-ip-range passed to kube-apiserver, here 10.254.0.1).
Generate the kubernetes certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem

Alternatively, pass the relevant parameters directly on the command line:

$ echo '{"CN":"kubernetes","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname="127.0.0.1,172.20.0.112,172.20.0.113,172.20.0.114,172.20.0.115,kubernetes,kubernetes.default" - | cfssljson -bare kubernetes
Creating the admin certificate
Create the admin certificate signing request

$ cat admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
- kube-apiserver later uses RBAC to authorize requests from clients such as kubelet, kube-proxy and Pods;
- kube-apiserver predefines a number of RoleBindings used by RBAC; for example cluster-admin binds the Group system:masters to the Role cluster-admin, and that Role grants permission to call all of the kube-apiserver APIs;
- O sets the Group of this certificate to system:masters. When kubectl uses this certificate to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, it is granted permission to call every API;
Generate the admin certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
Creating the kube-proxy certificate
Create the kube-proxy certificate signing request

$ cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
- CN sets the User of this certificate to system:kube-proxy;
- the predefined kube-apiserver RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver Proxy-related APIs;
Generate the kube-proxy client certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
Verifying the certificates
Taking the kubernetes certificate as an example.

Using the openssl command:
$ openssl x509 -noout -text -in kubernetes.pem ... Signature Algorithm: sha256WithRSAEncryption Issuer: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=Kubernetes Validity Not Before: Apr 5 05:36:00 2017 GMT Not After : Apr 5 05:36:00 2018 GMT Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=kubernetes ... X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Key Identifier: DD:52:04:43:10:13:A9:29:24:17:3A:0E:D7:14:DB:36:F8:6C:E0:E0
X509v3 Authority Key Identifier: keyid:44:04:3B:60:BD:69:78:14:68:AF:A0:41:13:F6:17:07:13:63:58:CD
X509v3 Subject Alternative Name: DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:172.20.0.112, IP Address:172.20.0.113, IP Address:172.20.0.114, IP Address:172.20.0.115, IP Address:10.254.0.1 ...
- confirm that the Issuer fields match ca-csr.json;
- confirm that the Subject fields match kubernetes-csr.json;
- confirm that X509v3 Subject Alternative Name matches kubernetes-csr.json;
- confirm that X509v3 Key Usage and Extended Key Usage match the kubernetes profile in ca-config.json;
Using the cfssl-certinfo command:
$ cfssl-certinfo -cert kubernetes.pem ... { "subject": { "common_name": "kubernetes", "country": "CN", "organization": "k8s", "organizational_unit": "System", "locality": "BeiJing", "province": "BeiJing", "names": [ "CN", "BeiJing", "BeiJing", "k8s", "System", "kubernetes" ] }, "issuer": { "common_name": "Kubernetes", "country": "CN", "organization": "k8s", "organizational_unit": "System", "locality": "BeiJing", "province": "BeiJing", "names": [ "CN", "BeiJing", "BeiJing", "k8s", "System", "Kubernetes" ] }, "serial_number": "174360492872423263473151971632292895707129022309", "sans": [ "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local", "127.0.0.1", "10.64.3.7", "10.254.0.1" ], "not_before": "2017-04-05T05:36:00Z", "not_after": "2018-04-05T05:36:00Z", "sigalg": "SHA256WithRSA", ...
Distributing the certificates
Copy the generated certificates and keys (the .pem files) to the /etc/kubernetes/ssl directory on every machine for later use:

$ sudo mkdir -p /etc/kubernetes/ssl
$ sudo cp *.pem /etc/kubernetes/ssl
References
- Generate self-signed certificates
- Setting up a Certificate Authority and Creating TLS Certificates
- Client Certificates V/s Server Certificates
- 数字证书及 CA 的扫盲介绍 (an introductory Chinese article on digital certificates and CAs)
2. Creating the kubeconfig files
Processes running on the Node machines, such as kubelet and kube-proxy, must authenticate and be authorized when they communicate with the kube-apiserver process on the Master.
Since kubernetes 1.4, the TLS Bootstrapping feature lets kube-apiserver issue TLS certificates for clients, so a certificate no longer has to be generated for every client by hand; for now this feature only issues certificates for kubelet.
Creating the TLS Bootstrapping Token
Token auth file
The token can be any string containing 128 bits of entropy, generated with a secure random number generator:

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

The last three lines form a single here-document; copy the whole script above and run it as one block.
Distribute token.csv to the /etc/kubernetes/ directory on every machine (Masters and Nodes):

$ cp token.csv /etc/kubernetes/
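To push the file to the other machines, a small loop along these lines works; the IPs are the three hosts used throughout this guide and password-less root SSH is assumed, so adjust both to your environment:

# Copy the bootstrap token to the remaining cluster members
for host in 172.20.0.114 172.20.0.115; do
  scp /etc/kubernetes/token.csv root@${host}:/etc/kubernetes/
done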
Creating the kubelet bootstrapping kubeconfig file

$ cd /etc/kubernetes
$ export KUBE_APISERVER="https://172.20.0.113:6443"
$ # set cluster parameters
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
$ # set client credentials
$ kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
$ # set the context
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
$ # switch to the default context
$ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
- with --embed-certs set to true, the certificate-authority certificate is embedded into the generated bootstrap.kubeconfig file;
- no key or certificate is specified when setting the client credentials; they are generated later by kube-apiserver;
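Before distributing the file you can let kubectl render it, to confirm the cluster, user and context were written as expected (a read-only check; certificate data is redacted unless --raw is added):

# Inspect the generated bootstrap kubeconfig
kubectl config view --kubeconfig=bootstrap.kubeconfig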
Creating the kube-proxy kubeconfig file

$ export KUBE_APISERVER="https://172.20.0.113:6443"
$ # set cluster parameters
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
$ # set client credentials
$ kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
$ # set the context
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
$ # switch to the default context
$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- --embed-certs is true for both the cluster parameters and the client credentials, so the contents of the files referenced by certificate-authority, client-certificate and client-key are written into the generated kube-proxy.kubeconfig;
- the CN of kube-proxy.pem is system:kube-proxy; the predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver Proxy-related APIs;
Distributing the kubeconfig files
Copy the two kubeconfig files to the /etc/kubernetes/ directory on every Node machine:

$ cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/
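As with token.csv, a loop saves some typing when copying the files to the other nodes (same host list and root-SSH assumption as before):

# Push both kubeconfig files to the remaining nodes
for host in 172.20.0.114 172.20.0.115; do
  scp /etc/kubernetes/bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig root@${host}:/etc/kubernetes/
done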
3. Creating the highly available etcd cluster
kubernetes stores all of its data in etcd. This section deploys a three-node highly available etcd cluster. The three nodes reuse the kubernetes master machines, named sz-pg-oam-docker-test-001.tendcloud.com, sz-pg-oam-docker-test-002.tendcloud.com and sz-pg-oam-docker-test-003.tendcloud.com:
- sz-pg-oam-docker-test-001.tendcloud.com: 172.20.0.113
- sz-pg-oam-docker-test-002.tendcloud.com: 172.20.0.114
- sz-pg-oam-docker-test-003.tendcloud.com: 172.20.0.115
TLS certificate files
The etcd cluster needs TLS certificates for encrypted communication; here we reuse the kubernetes certificates created earlier:

$ cp ca.pem kubernetes-key.pem kubernetes.pem /etc/kubernetes/ssl

- the hosts field of the kubernetes certificate must contain the IPs of the three machines above, otherwise certificate validation will fail later;
Downloading the binaries
Download the latest release from https://github.com/coreos/etcd/releases:

$ wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
$ tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
$ sudo mv etcd-v3.1.5-linux-amd64/etcd* /root/local/bin
Creating the etcd systemd unit file
Remember to substitute your own values for the ETCD_NAME and INTERNAL_IP variables:

$ export ETCD_NAME=sz-pg-oam-docker-test-001.tendcloud.com
$ export INTERNAL_IP=172.20.0.113
$ sudo mkdir -p /var/lib/etcd
$ cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/root/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster sz-pg-oam-docker-test-001.tendcloud.com=https://172.20.0.113:2380,sz-pg-oam-docker-test-002.tendcloud.com=https://172.20.0.114:2380,sz-pg-oam-docker-test-003.tendcloud.com=https://172.20.0.115:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
- etcd's working directory and data directory are both /var/lib/etcd; create this directory before starting the service;
- to secure the communication you must specify etcd's own certificate and key (cert-file and key-file), the peer certificate, key and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file);
- the hosts field of the kubernetes-csr.json used to create kubernetes.pem must contain the INTERNAL_IP of every etcd node, otherwise certificate validation will fail;
- when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
The complete unit file is available as etcd.service

Starting the etcd service

$ sudo mv etcd.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable etcd
$ sudo systemctl start etcd
$ systemctl status etcd

Repeat the steps above on every kubernetes master node until the etcd service is running on all of them.
Verifying the service
Run the following on any kubernetes master machine:

$ etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
2017-04-11 15:17:09.082250 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-04-11 15:17:09.083681 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 9a2ec640d25672e5 is healthy: got healthy result from https://172.20.0.115:2379
member bc6f27ae3be34308 is healthy: got healthy result from https://172.20.0.114:2379
member e5c92ea26c4edba0 is healthy: got healthy result from https://172.20.0.113:2379
cluster is healthy

When the last line reads cluster is healthy the cluster is working correctly.
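To see the member roster (and which member currently holds the leader role), etcdctl member list should accept the same TLS flags; the output naturally differs per cluster:

# List the etcd members over the TLS client port
etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  member list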
4. Downloading and configuring the kubectl command-line tool
This section covers downloading and configuring kubectl, the kubernetes cluster command-line tool.

Download kubectl

$ wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
$ tar -xzvf kubernetes-client-linux-amd64.tar.gz
$ cp kubernetes/client/bin/kube* /usr/bin/
$ chmod a+x /usr/bin/kube*
Creating the kubectl kubeconfig file

$ export KUBE_APISERVER="https://172.20.0.113:6443"
$ # set cluster parameters
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
$ # set client credentials
$ kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
$ # set the context
$ kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
$ # switch to the kubernetes context
$ kubectl config use-context kubernetes
- the O field of the admin.pem certificate is system:masters; the predefined kube-apiserver RoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all of the kube-apiserver APIs;
- the generated kubeconfig is saved to the ~/.kube/config file;
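The apiserver is not deployed yet at this point, so kubectl cannot reach the cluster, but you can already confirm locally that the context landed in ~/.kube/config:

# Show the merged kubeconfig that kubectl picks up by default
kubectl config view
# The current context should be "kubernetes"
kubectl config current-context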
5. Deploying the highly available kubernetes master cluster
The kubernetes master nodes run the following components:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
For now these three components must be deployed on the same machine.

- kube-scheduler, kube-controller-manager and kube-apiserver are tightly coupled in function;
- only one kube-scheduler and one kube-controller-manager process may be active at a time; running several requires electing a leader among them;

This section records the steps for deploying a three-node highly available kubernetes master cluster. (A load balancer to front requests to kube-apiserver will be created later.)
TLS certificate files
The .pem certificates and token.csv were created in the TLS certificates and keys step; double-check that they are in place:
$ ls /etc/kubernetes/ssl
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem
Downloading the latest binaries
There are two ways to download them.

Option 1
Download the release tarball from the github release page, extract it, then run the download script:

$ wget https://github.com/kubernetes/kubernetes/releases/download/v1.6.0/kubernetes.tar.gz
$ tar -xzvf kubernetes.tar.gz
...
$ cd kubernetes
$ ./cluster/get-kube-binaries.sh
...

Option 2
Download the client or server tarball from the CHANGELOG page. The server tarball kubernetes-server-linux-amd64.tar.gz already contains the client (kubectl) binary, so there is no need to download kubernetes-client-linux-amd64.tar.gz separately:

$ # wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
$ wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
...
$ cd kubernetes
$ tar -xzvf kubernetes-src.tar.gz
Copy the binaries to the target path:

$ cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /root/local/bin/
Configuring and starting kube-apiserver
Create the kube-apiserver service unit file
The service unit file /usr/lib/systemd/system/kube-apiserver.service contains:
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
The /etc/kubernetes/config file contains:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://172.20.0.113:8080"

This file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy.
The apiserver configuration file /etc/kubernetes/apiserver contains:

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=sz-pg-oam-docker-test-001.tendcloud.com"
KUBE_API_ADDRESS="--advertise-address=172.20.0.113 --bind-address=172.20.0.113 --insecure-bind-address=172.20.0.113"
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379"
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
## Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --experimental-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
- --authorization-mode=RBAC enables RBAC authorization on the secure port and rejects unauthorized requests;
- kube-scheduler and kube-controller-manager normally run on the same machine as kube-apiserver and talk to it over the insecure port;
- kubelet, kube-proxy and kubectl run on other Node machines; when they access kube-apiserver through the secure port they must first pass TLS certificate authentication and then RBAC authorization;
- kube-proxy and kubectl obtain RBAC authorization through the User and Group embedded in the certificates they use;
- if the kubelet TLS Bootstrap mechanism is used, do not also set the --kubelet-certificate-authority, --kubelet-client-certificate and --kubelet-client-key options, otherwise kube-apiserver will later fail to verify kubelet certificates with "x509: certificate signed by unknown authority";
- the --admission-control value must include ServiceAccount;
- --bind-address must not be 127.0.0.1;
- runtime-config is set to rbac.authorization.k8s.io/v1beta1, the apiVersion used at runtime;
- --service-cluster-ip-range specifies the Service cluster IP range; this range must not be routable;
- by default kubernetes objects are stored under the /registry path in etcd, which can be changed with the --etcd-prefix flag;
The complete unit is available as kube-apiserver.service

Start kube-apiserver
$ systemctl daemon-reload
$ systemctl enable kube-apiserver
$ systemctl start kube-apiserver
$ systemctl status kube-apiserver
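As a quick, hedged check that the secure port, the certificates and the service are wired together, you can query the version endpoint with the admin client certificate (run this on a machine that has the pem files):

# Query the apiserver secure port using the admin certificate created earlier
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/admin.pem \
     --key /etc/kubernetes/ssl/admin-key.pem \
     https://172.20.0.113:6443/version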
Configuring and starting kube-controller-manager
Create the kube-controller-manager service unit file
File path: /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Configuration file /etc/kubernetes/controller-manager:

###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
- --service-cluster-ip-range specifies the CIDR range for cluster Services; it must not be routable between the Nodes and must match the value given to kube-apiserver;
- the certificate and key given by --cluster-signing-* are used to sign the certificates and keys created for TLS BootStrap;
- --root-ca-file is used to verify the kube-apiserver certificate; only when it is set is this CA certificate placed into the ServiceAccount of Pod containers;
- --address must be 127.0.0.1, because kube-apiserver currently expects the scheduler and controller-manager to be on the same machine; otherwise:

$ kubectl get componentstatuses
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused
controller-manager   Healthy     ok
etcd-2               Unhealthy   Get http://172.20.0.113:2379/health: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
etcd-0               Healthy     {"health": "true"}
etcd-1               Healthy     {"health": "true"}

See: https://github.com/kubernetes-incubator/bootkube/issues/64
The complete unit is available as kube-controller-manager.service

Start kube-controller-manager
$ systemctl daemon-reload
$ systemctl enable kube-controller-manager
$ systemctl start kube-controller-manager
Configuring and starting kube-scheduler
Create the kube-scheduler service unit file
File path: /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Configuration file /etc/kubernetes/scheduler:

###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
- --address must be 127.0.0.1, because kube-apiserver currently expects the scheduler and controller-manager to be on the same machine;
The complete unit is available as kube-scheduler.service

Start kube-scheduler
$ systemctl daemon-reload
$ systemctl enable kube-scheduler
$ systemctl start kube-scheduler
Verifying the master node

$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
6. Deploying the kubernetes node components
The kubernetes nodes run the following components:
- Flanneld: see my earlier article on Flannel-based networking for Kubernetes; TLS was not configured there, so the TLS settings now have to be added to its service configuration file.
- Docker 1.12.5: installing docker is straightforward and is not covered here.
- kubelet
- kube-proxy

The following focuses on installing kubelet and kube-proxy, and on adding TLS verification to the flannel deployment that was installed earlier.
Directories and files
Check again that the files generated in the previous steps are present on all three nodes:

$ ls /etc/kubernetes/ssl
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem
$ ls /etc/kubernetes/
apiserver bootstrap.kubeconfig config controller-manager kubelet kube-proxy.kubeconfig proxy scheduler ssl token.csv
Configuring Flanneld
See my earlier article on Flannel-based networking for Kubernetes; TLS was not configured there, so the TLS settings are now added to the service configuration file.

The service unit file /usr/lib/systemd/system/flanneld.service:

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
The /etc/sysconfig/flanneld configuration file:

# Flanneld configuration options
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379"
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

The TLS options are added in FLANNEL_OPTIONS.
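Flannel only hands out subnets once a network configuration exists under FLANNEL_ETCD_PREFIX. If you have not written one yet (my earlier flannel article covers this step), a sketch along these lines does it; the 172.30.0.0/16 Pod network is only an assumption that happens to match the Pod IPs seen later in this guide, so substitute your own CIDR:

# Store the flannel network config at the prefix the daemon queries; TLS client
# flags are required because the etcd client port only serves HTTPS
etcdctl \
  --endpoints=https://172.20.0.113:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'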
Installing and configuring kubelet
When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then is kubelet allowed to create certificate signing requests:

$ cd /etc/kubernetes
$ kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

- --user=kubelet-bootstrap is the user name specified in /etc/kubernetes/token.csv, which was also written into /etc/kubernetes/bootstrap.kubeconfig;
Download the latest kubelet and kube-proxy binaries

$ wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes
$ tar -xzvf kubernetes-src.tar.gz
$ cp -r ./server/bin/{kube-proxy,kubelet} /usr/bin/
Create the kubelet service unit file
File location: /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
The kubelet configuration file /etc/kubernetes/kubelet. Change the IP addresses to the IP of each of your node machines:

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=172.20.0.113"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.20.0.113"
#
## location of the api-server
KUBELET_API_SERVER="--api-servers=http://172.20.0.113:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=sz-pg-oam-docker-hub-001.tendcloud.com/library/pod-infrastructure:rhel7"
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
- --address must not be set to 127.0.0.1, otherwise later calls from Pods to the kubelet API will fail, because from inside a Pod 127.0.0.1 points at the Pod itself rather than at the kubelet;
- if --hostname-override is set, kube-proxy must be given the same value, otherwise the Node will not be found;
- --experimental-bootstrap-kubeconfig points at the bootstrap kubeconfig file; kubelet uses the user name and token in that file to send the TLS Bootstrapping request to kube-apiserver;
- after an administrator approves the CSR, kubelet automatically creates the certificate and key (kubelet-client.crt and kubelet-client.key) in the --cert-dir directory and writes them into the --kubeconfig file;
- it is recommended to put the kube-apiserver address in the --kubeconfig file; if --api-servers is not set, --require-kubeconfig must be set so the apiserver address is read from the kubeconfig, otherwise kubelet will start without finding a kube-apiserver (the log reports that no API Server was found) and kubectl get nodes will not list the Node;
- --cluster-dns sets the Service IP of kubedns (it can be reserved now and assigned when the kubedns service is created later) and --cluster-domain sets the domain suffix; both parameters must be set for them to take effect;
The complete unit is available as kubelet.service

Start kubelet
$ systemctl daemon-reload
$ systemctl enable kubelet
$ systemctl start kubelet
$ systemctl status kubelet
Approving the kubelet TLS certificate requests
The first time kubelet starts it sends a certificate signing request to kube-apiserver; the Node only joins the cluster once the request has been approved.

View the pending CSR requests:

$ kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-2b308   4m        kubelet-bootstrap   Pending
$ kubectl get nodes
No resources found.
Approve the CSR:

$ kubectl certificate approve csr-2b308
certificatesigningrequest "csr-2b308" approved
$ kubectl get nodes
NAME        STATUS    AGE       VERSION
10.64.3.7   Ready     49m       v1.6.1
The kubelet kubeconfig file and key pair were generated automatically:

$ ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2284 Apr  7 02:07 /etc/kubernetes/kubelet.kubeconfig
$ ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1046 Apr  7 02:07 /etc/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root  227 Apr  7 02:04 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 1103 Apr  7 02:07 /etc/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1675 Apr  7 02:07 /etc/kubernetes/ssl/kubelet.key
Configuring kube-proxy
Create the kube-proxy service unit file
File path: /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
The kube-proxy configuration file /etc/kubernetes/proxy:

###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=172.20.0.113 --hostname-override=172.20.0.113 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"
- the --hostname-override value must match the one given to kubelet, otherwise kube-proxy will not find the Node after it starts and will therefore not create any iptables rules;
- kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs;
- the file given by --kubeconfig embeds the kube-apiserver address, user name, certificate and key used for requests and authentication;
- the predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver Proxy-related APIs;
The complete unit is available as kube-proxy.service

Start kube-proxy
$ systemctl daemon-reload
$ systemctl enable kube-proxy
$ systemctl start kube-proxy
$ systemctl status kube-proxy
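Once kube-proxy is running and Services exist, they show up as iptables NAT rules; a quick read-only way to confirm it is doing its job (chain names beyond the KUBE- prefix vary per cluster):

# List the NAT chains kube-proxy manages; expect KUBE-SERVICES, KUBE-NODEPORTS, etc.
iptables -t nat -S | grep KUBE | head -n 20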
Smoke test
Create an nginx deployment and expose it as a service to check that the cluster works:
$ kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=sz-pg-oam-docker-hub-001.tendcloud.com/library/nginx:1.9 --port=80 deployment "nginx" created
$ kubectl expose deployment nginx --type=NodePort --name=example-service
service "example-service" exposed
$ kubectl describe svc example-service Name: example-service Namespace: default Labels: run=load-balancer-example Annotations: <none> Selector: run=load-balancer-example Type: NodePort IP: 10.254.62.207 Port: <unset> 80/TCP NodePort: <unset> 32724/TCP Endpoints: 172.30.60.2:80,172.30.94.2:80 Session Affinity: None Events: <none> $ curl "10.254.62.207:80" <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>
The nginx welcome page can also be reached at 172.20.0.113:32724, 172.20.0.114:32724 or 172.20.0.115:32724.
7. Installing and configuring the kubedns add-on
The official yaml files live in kubernetes/cluster/addons/dns.
The add-on is deployed with kubernetes itself; the official manifests reference the following images:

gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1

I mirrored these images to my private registry:

sz-pg-oam-docker-hub-001.tendcloud.com/library/k8s-dns-dnsmasq-nanny-amd64:1.14.1
sz-pg-oam-docker-hub-001.tendcloud.com/library/k8s-dns-kube-dns-amd64:1.14.1
sz-pg-oam-docker-hub-001.tendcloud.com/library/k8s-dns-sidecar-amd64:1.14.1

and also pushed a copy to Tenxcloud as a backup:

index.tenxcloud.com/jimmy/k8s-dns-dnsmasq-nanny-amd64:1.14.1
index.tenxcloud.com/jimmy/k8s-dns-kube-dns-amd64:1.14.1
index.tenxcloud.com/jimmy/k8s-dns-sidecar-amd64:1.14.1

The yaml files below use the images from the private registry.
kubedns-cm.yaml
kubedns-sa.yaml
kubedns-controller.yaml
kubedns-svc.yaml
The already-modified yaml files are available under dns.
The predefined system RoleBinding
The predefined RoleBinding system:kube-dns binds the kube-dns ServiceAccount in the kube-system namespace to the system:kube-dns Role, which has permission to call the kube-apiserver DNS-related APIs:
$ kubectl get clusterrolebindings system:kube-dns -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2017-04-11T11:20:42Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-dns
  resourceVersion: "58"
  selfLink: /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindingssystem%3Akube-dns
  uid: e61f4d92-1ea8-11e7-8cd7-f4e9d49f8ed0
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-dns
subjects:
- kind: ServiceAccount
  name: kube-dns
  namespace: kube-system
The Pods defined in kubedns-controller.yaml use the kube-dns ServiceAccount defined in kubedns-sa.yaml, so they have permission to call the kube-apiserver DNS-related APIs.
Configure the kube-dns ServiceAccount
No changes needed.

Configure the kube-dns Service

$ diff kubedns-svc.yaml.base kubedns-svc.yaml
30c30
<   clusterIP: __PILLAR__DNS__SERVER__
---
>   clusterIP: 10.254.0.2
- spec.clusterIP = 10.254.0.2 pins the kube-dns Service IP; it must match the --cluster-dns value passed to kubelet;
Configure the kube-dns Deployment
$ diff kubedns-controller.yaml.base kubedns-controller.yaml 58c58 < image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1 --- > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/k8s-dns-kube-dns-amd64:v1.14.1 88c88 < - --domain=__PILLAR__DNS__DOMAIN__. --- > - --domain=cluster.local. 92c92 < __PILLAR__FEDERATIONS__DOMAIN__MAP__ --- > #__PILLAR__FEDERATIONS__DOMAIN__MAP__ 110c110 < image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1 --- > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/k8s-dns-dnsmasq-nanny-amd64:v1.14.1 129c129 < - --server=/__PILLAR__DNS__DOMAIN__/127.0.0.1#10053 --- > - --server=/cluster.local./127.0.0.1#10053 148c148 < image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 --- > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/k8s-dns-sidecar-amd64:v1.14.1 161,162c161,162 < - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,A < - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,A --- > - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,A > - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,A
- the manifests use the kube-dns ServiceAccount that the system has already bound via RoleBinding, so the Pods have permission to call the kube-apiserver DNS-related APIs;
Apply all of the definition files

$ pwd
/root/kubedns
$ ls *.yaml
kubedns-cm.yaml kubedns-controller.yaml kubedns-sa.yaml kubedns-svc.yaml
$ kubectl create -f .
Checking that kubedns works
Create a new Deployment:

$ cat my-nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: sz-pg-oam-docker-hub-001.tendcloud.com/library/nginx:1.9
        ports:
        - containerPort: 80
$ kubectl create -f my-nginx.yaml
Expose the Deployment to create the my-nginx service:

$ kubectl expose deploy my-nginx
$ kubectl get services --all-namespaces |grep my-nginx
default   my-nginx   10.254.179.239   <none>   80/TCP   42m
Create another Pod and check whether its /etc/resolv.conf contains the --cluster-dns and --cluster-domain values configured on kubelet, and whether the service name my-nginx resolves to its Cluster IP 10.254.179.239:
$ kubectl create -f nginx-pod.yaml
$ kubectl exec nginx -i -t -- /bin/bash
root@nginx:/# cat /etc/resolv.conf nameserver 10.254.0.2 search default.svc.cluster.local. svc.cluster.local. cluster.local. tendcloud.com
options ndots:5 root@nginx:/# ping my-nginx PING my-nginx.default.svc.cluster.local (10.254.179.239): 56 data bytes
76 bytes from 119.147.223.109: Destination Net Unreachable
^C--- my-nginx.default.svc.cluster.local ping statistics ---
root@nginx:/# ping kubernetes PING kubernetes.default.svc.cluster.local (10.254.0.1): 56 data bytes ^C--- kubernetes.default.svc.cluster.local ping statistics --- 11 packets transmitted, 0 packets received, 100% packet loss
root@nginx:/# ping kube-dns.kube-system.svc.cluster.local PING kube-dns.kube-system.svc.cluster.local (10.254.0.2): 56 data bytes ^C--- kube-dns.kube-system.svc.cluster.local ping statistics --- 6 packets transmitted, 0 packets received, 100% packet loss
The output shows that service names resolve correctly (the ping packet loss is expected, since Service IPs are virtual addresses implemented with iptables rather than real hosts).
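If you prefer an explicit DNS query over reading ping output, a throwaway pod can ask kube-dns directly; this assumes a busybox image is pullable from your nodes (swap in a mirrored image name if they cannot reach Docker Hub):

# Resolve the service name through the cluster DNS from a temporary pod
kubectl run -i -t dns-test --image=busybox --restart=Never --rm -- nslookup my-nginx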
8. Configuring and installing the dashboard
The official files live in kubernetes/cluster/addons/dashboard.
We use the following files:

$ ls *.yaml
dashboard-controller.yaml dashboard-service.yaml dashboard-rbac.yaml

The already-modified yaml files are available under dashboard.

Because kube-apiserver has RBAC authorization enabled and the upstream dashboard-controller.yaml does not define an authorized ServiceAccount, later calls to the kube-apiserver API are rejected and the web UI shows:

Forbidden (403) User "system:serviceaccount:kube-system:default" cannot list jobs.batch in the namespace "default". (get jobs.batch)

So a dashboard-rbac.yaml file is added that defines a ServiceAccount named dashboard and binds it to the Cluster Role view.
Configure dashboard-service

$ diff dashboard-service.yaml.orig dashboard-service.yaml
10a11
>   type: NodePort

- setting the type to NodePort exposes the dashboard externally at nodeIP:nodePort;
Configure dashboard-controller
$ diff dashboard-controller.yaml.orig dashboard-controller.yaml 23c23 < image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0 --- > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/kubernetes-dashboard-amd64:v1.6.0
Apply all of the definition files

$ pwd
/root/kubernetes/cluster/addons/dashboard
$ ls *.yaml
dashboard-controller.yaml dashboard-service.yaml
$ kubectl create -f . service "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
Checking the result
Check the allocated NodePort:

$ kubectl get services kubernetes-dashboard -n kube-system
NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   10.254.224.130   <nodes>       80:30312/TCP   25s

- NodePort 30312 is mapped to port 80 of the dashboard pod;
Check the controller:

$ kubectl get deployment kubernetes-dashboard -n kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            1           3m
$ kubectl get pods -n kube-system | grep dashboard
kubernetes-dashboard-1339745653-pmn6z   1/1       Running   0          4m
Accessing the dashboard
There are three ways in:
- the kubernetes-dashboard service exposes a NodePort, so the dashboard can be reached at http://NodeIP:nodePort;
- through kube-apiserver (https on port 6443 or http on port 8080);
- through kubectl proxy.
Accessing the dashboard through kubectl proxy
Start the proxy:

$ kubectl proxy --address='172.20.0.113' --port=8086 --accept-hosts='^*$'
Starting to serve on 172.20.0.113:8086

- the --accept-hosts option is required, otherwise the browser is greeted with "Unauthorized" when it opens the dashboard page;
Open http://172.20.0.113:8086/ui in a browser; it redirects automatically to http://172.20.0.113:8086/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default
Accessing the dashboard through kube-apiserver
Get the list of cluster service URLs:

$ kubectl cluster-info
Kubernetes master is running at https://172.20.0.113:6443
KubeDNS is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Open https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard in a browser. The browser will prompt for certificate verification: because this goes over the encrypted channel, you must import a client certificate into your computer before accessing it this way. This is the pitfall I hit here: 通过 kube-apiserver 访问dashboard,提示User "system:anonymous" cannot proxy services in the namespace "kube-system". #5 - it has since been resolved.
Importing the certificate
Convert the generated admin.pem certificate to PKCS#12 format:

openssl pkcs12 -export -in admin.pem -out admin.p12 -inkey admin-key.pem

Import the resulting admin.p12 into your computer; remember the password you set during export, because it is needed again on import.

If you do not want to use https, you can hit the insecure port 8080 directly: http://172.20.0.113:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Because the Heapster add-on is not installed yet, the dashboard cannot show CPU, memory and other metric graphs for Pods and Nodes.
9. Configuring and installing Heapster
Download the latest heapster release from the heapster release page:

$ wget https://github.com/kubernetes/heapster/archive/v1.3.0.zip
$ unzip v1.3.0.zip

unzip extracts the sources into the heapster-1.3.0/ directory; the manifests live under heapster-1.3.0/deploy/kube-config/influxdb:

$ cd heapster-1.3.0/deploy/kube-config/influxdb
$ ls *.yaml
grafana-deployment.yaml grafana-service.yaml heapster-deployment.yaml heapster-service.yaml influxdb-deployment.yaml influxdb-service.yaml heapster-rbac.yaml

We created the heapster RBAC configuration heapster-rbac.yaml ourselves.
The already-modified yaml files are available under heapster.
Configure grafana-deployment

$ diff grafana-deployment.yaml.orig grafana-deployment.yaml
16c16
<         image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
---
>         image: sz-pg-oam-docker-hub-001.tendcloud.com/library/heapster-grafana-amd64:v4.0.2
40,41c40,41
<           # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
<           value: /
---
>           value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
>           #value: /

- if grafana is later accessed through kube-apiserver or kubectl proxy, GF_SERVER_ROOT_URL must be set to /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/, otherwise grafana later complains that the page http://10.64.3.7:8086/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/api/dashboards/home cannot be found;
Configure heapster-deployment
$ diff heapster-deployment.yaml.orig heapster-deployment.yaml 16c16 < image: gcr.io/google_containers/heapster-amd64:v1.3.0-beta.1 --- > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/heapster-amd64:v1.3.0-beta.1
Configure influxdb-deployment
influxdb officially recommends querying the database through the command line or the HTTP API; the admin UI is disabled by default since v1.1.0 and will be removed in a later release.
To enable the admin UI that ships in the image: export the influxdb configuration file from the image, enable the admin plugin, write the modified file into a ConfigMap, and mount it into the container so that it overrides the original configuration.
Note: the manifests directory already contains the modified ConfigMap definition file.
$ # export the influxdb configuration file from the image
$ docker run --rm --entrypoint 'cat' -ti lvanneo/heapster-influxdb-amd64:v1.1.1 /etc/config.toml > config.toml.orig
$ cp config.toml.orig config.toml
$ # edit: enable the admin interface
$ vim config.toml
$ diff config.toml.orig config.toml
35c35
< enabled = false
---
> enabled = true
$ # write the modified configuration into a ConfigMap object
$ kubectl create configmap influxdb-config --from-file=config.toml -n kube-system
configmap "influxdb-config" created
$ # mount the ConfigMap into the Pod so it overrides the original configuration
$ diff influxdb-deployment.yaml.orig influxdb-deployment.yaml
16c16
<         image: grc.io/google_containers/heapster-influxdb-amd64:v1.1.1
---
>         image: sz-pg-oam-docker-hub-001.tendcloud.com/library/heapster-influxdb-amd64:v1.1.1
19a20,21
>         - mountPath: /etc/
>           name: influxdb-config
22a25,27
>       - name: influxdb-config
>         configMap:
>           name: influxdb-config
Configure the monitoring-influxdb Service

$ diff influxdb-service.yaml.orig influxdb-service.yaml
12a13
>   type: NodePort
15a17,20
>     name: http
>   - port: 8083
>     targetPort: 8083
>     name: admin

- the Service type is set to NodePort and an extra admin port mapping is added, so the influxdb admin UI can be reached from a browser later;
Apply all of the definition files

$ pwd
/root/heapster-1.3.0/deploy/kube-config/influxdb
$ ls *.yaml
grafana-service.yaml heapster-rbac.yaml influxdb-cm.yaml influxdb-service.yaml
grafana-deployment.yaml heapster-deployment.yaml heapster-service.yaml influxdb-deployment.yaml
$ kubectl create -f . deployment "monitoring-grafana" created
service "monitoring-grafana" created
deployment "heapster" created
serviceaccount "heapster" created
clusterrolebinding "heapster" created
service "heapster" created
configmap "influxdb-config" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
Checking the result
Check the Deployments:
$ kubectl get deployments -n kube-system | grep -E 'heapster|monitoring' heapster 1 1 1 1 2m monitoring-grafana 1 1 1 1 2m monitoring-influxdb 1 1 1 1 2m
Check the Pods:
$ kubectl get pods -n kube-system | grep -E 'heapster|monitoring' heapster-110704576-gpg8v 1/1 Running 0 2m monitoring-grafana-2861879979-9z89f 1/1 Running 0 2m monitoring-influxdb-1411048194-lzrpc 1/1 Running 0 2m
Open the kubernetes dashboard again and check that CPU, memory and load utilization graphs are now shown for the Nodes and Pods.
Accessing grafana
- Through kube-apiserver: get the monitoring-grafana service URL

$ kubectl cluster-info
Kubernetes master is running at https://172.20.0.113:6443
Heapster is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Browser URL: http://172.20.0.113:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana

- Through kubectl proxy: start the proxy

$ kubectl proxy --address='172.20.0.113' --port=8086 --accept-hosts='^*$'
Starting to serve on 172.20.0.113:8086

Browser URL: http://172.20.0.113:8086/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
Accessing the influxdb admin UI
Get the NodePort mapped to influxdb http port 8086:

$ kubectl get svc -n kube-system|grep influxdb
monitoring-influxdb    10.254.22.46    <nodes>    8086:32299/TCP,8083:30269/TCP    9m

Open the influxdb admin UI through the kube-apiserver insecure port: http://172.20.0.113:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:8083/
In the "Connection Settings" panel enter a node IP as Host and the nodePort mapped to 8086 (32299 above) as Port, then click "Save" (in my cluster the address is 172.20.0.113:32299).
10. Configuring and installing EFK
The official files live in cluster/addons/fluentd-elasticsearch:

$ ls *.yaml
es-controller.yaml es-service.yaml fluentd-es-ds.yaml kibana-controller.yaml kibana-service.yaml efk-rbac.yaml

The EFK services likewise need an efk-rbac.yaml file that configures a ServiceAccount named efk.
The already-modified yaml files are available under EFK.
Configure es-controller.yaml
$ diff es-controller.yaml.orig es-controller.yaml 24c24 < - image: gcr.io/google_containers/elasticsearch:v2.4.1-2 --- > - image: sz-pg-oam-docker-hub-001.tendcloud.com/library/elasticsearch:v2.4.1-2
Configure es-service.yaml
No changes needed.

Configure fluentd-es-ds.yaml
$ diff fluentd-es-ds.yaml.orig fluentd-es-ds.yaml 26c26 < image: gcr.io/google_containers/fluentd-elasticsearch:1.22 --- > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/fluentd-elasticsearch:1.22
Configure kibana-controller.yaml
$ diff kibana-controller.yaml.orig kibana-controller.yaml 22c22 < image: gcr.io/google_containers/kibana:v4.6.1-1 --- > image: sz-pg-oam-docker-hub-001.tendcloud.com/library/kibana:v4.6.1-1
Labeling the Nodes
The DaemonSet fluentd-es-v1.22 sets the nodeSelector beta.kubernetes.io/fluentd-ds-ready=true, so this label has to be applied to every Node that should run fluentd:

$ kubectl get nodes
NAME           STATUS    AGE       VERSION
172.20.0.113   Ready     1d        v1.6.0
$ kubectl label nodes 172.20.0.113 beta.kubernetes.io/fluentd-ds-ready=true
node "172.20.0.113" labeled

Apply the same label to the other two nodes, for example with the loop below.
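The IPs here match the hosts used throughout this guide; substitute the node names reported by kubectl get nodes in your cluster:

# Label the remaining nodes so the fluentd DaemonSet schedules onto them as well
for node in 172.20.0.114 172.20.0.115; do
  kubectl label nodes ${node} beta.kubernetes.io/fluentd-ds-ready=true
done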
Apply the definition files

$ kubectl create -f .
serviceaccount "efk" created
clusterrolebinding "efk" created
replicationcontroller "elasticsearch-logging-v1" created
service "elasticsearch-logging" created
daemonset "fluentd-es-v1.22" created
deployment "kibana-logging" created
service "kibana-logging" created
Checking the result
$ kubectl get deployment -n kube-system|grep kibana
kibana-logging 1 1 1 1 2m $ kubectl get pods -n kube-system|grep -E 'elasticsearch|fluentd|kibana' elasticsearch-logging-v1-mlstp 1/1 Running 0 1m elasticsearch-logging-v1-nfbbf 1/1 Running 0 1m fluentd-es-v1.22-31sm0 1/1 Running 0 1m fluentd-es-v1.22-bpgqs 1/1 Running 0 1m fluentd-es-v1.22-qmn7h 1/1 Running 0 1m kibana-logging-1432287342-0gdng 1/1 Running 0 1m $ kubectl get service -n kube-system|grep -E 'elasticsearch|kibana' elasticsearch-logging 10.254.77.62 <none> 9200/TCP 2m kibana-logging 10.254.8.113 <none> 5601/TCP 2m
The first time the kibana Pod starts it spends a fairly long time (10-20 minutes) optimizing and caching the status pages; you can tail the Pod's log to watch the progress:
$ kubectl logs kibana-logging-1432287342-0gdng -n kube-system -f
ELASTICSEARCH_URL=http://elasticsearch-logging:9200 server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging {"type":"log","@timestamp":"2017-04-12T13:08:06Z","tags":["info","optimize"],"pid":7,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"} {"type":"log","@timestamp":"2017-04-12T13:18:17Z","tags":["info","optimize"],"pid":7,"message":"Optimization of bundles for kibana and statusPage complete in 610.40 seconds"} {"type":"log","@timestamp":"2017-04-12T13:18:17Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:18Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-04-12T13:18:19Z","tags":["listening","info"],"pid":7,"message":"Server running at http://0.0.0.0:5601"} {"type":"log","@timestamp":"2017-04-12T13:18:24Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} {"type":"log","@timestamp":"2017-04-12T13:18:29Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":7,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}
访问 kibana
-
通过 kube-apiserver 访问:获取 monitoring-grafana 服务 URL
$ kubectl cluster-info Kubernetes master is running at https://172.20.0.113:6443 Elasticsearch is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging Heapster is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/heapster Kibana is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging KubeDNS is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns kubernetes-dashboard is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard monitoring-grafana is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana monitoring-influxdb is running at https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
Open the URL in a browser:
https://172.20.0.113:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana
- Via kubectl proxy: create a proxy
$ kubectl proxy --address='172.20.0.113' --port=8086 --accept-hosts='^*$' Starting to serve on 172.20.0.113:8086
Open the URL in a browser:
http://172.20.0.113:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging
On the Settings -> Indices page, create an index (roughly the equivalent of a database in MySQL), tick Index contains time-based events, keep the default logstash-* pattern, and click Create.
Possible problems
If the Create button here is greyed out and there are no options under Time-field name: fluentd reads the logs under /var/log/containers/, which are symlinks to /var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log. Check your docker configuration: --log-driver must be set to json-file (the default may be journald); see the docker logging documentation and the sketch below.
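A minimal sketch of that docker setting, assuming Docker 1.12+ managed by systemd (the max-size limit is just an optional example value, not from the original text):
# switch the docker log driver to json-file so fluentd can pick up the container logs
cat <<EOF > /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m" }
}
EOF
systemctl restart docker
docker info | grep -i 'logging driver'   # should now report json-file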
Installing Kubernetes 1.5 with kubeadm
In the post "When Docker Meets systemd" I mentioned the task I had been working on for the past couple of days: using kubeadm to install and deploy the latest Kubernetes release, k8s 1.5.1, on Ubuntu 16.04.
In the middle of the year Docker announced that the swarmkit toolkit would be integrated into the Docker engine, an announcement that caused quite a stir in the lightweight container world. Developers are lazy, after all ^0^: with docker swarmkit built in, what motivation is left to install yet another container orchestration tool, even if the docker engine is no longer the IE browser of its day in terms of mindshare? In response to this market move by Docker, Kubernetes, the leader in container cluster management and service orchestration, released version 1.4.0 three months later, which added the kubeadm tool. kubeadm works a bit like the swarmkit tooling integrated into the docker engine: it aims to improve the experience of installing, debugging and using k8s and to lower the barrier to entry. In theory, two commands, init and join, are enough to stand up a complete Kubernetes cluster.
However, like swarmkit when it first landed in the docker engine, kubeadm is still under active development and not that stable; even in the latest k8s 1.5.1 it remains in Alpha, and it is officially not recommended for production. Every run of kubeadm init prints the following warning:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
Still, the k8s 1.3.7 cluster we deployed earlier has been running well, which gives us the confidence to keep going down the k8s road and to do it well. But deploying and managing k8s really is tedious, so we decided to test whether kubeadm could give us a better-than-expected experience. The lessons learned installing kubernetes 1.3.7 on Aliyun Ubuntu 14.04 gave me a tiny bit of confidence, yet the actual installation was still full of twists and turns, partly because of kubeadm's instability and partly because of the quality of cni and the third-party network add-ons; trouble on either side makes the install a bumpy ride.
I. Environment and Constraints
Of the three operating systems kubeadm supports, Ubuntu 16.04+, CentOS 7 and HypriotOS v1.0.1+, we chose Ubuntu 16.04. Since Aliyun has no official 16.04 image available yet, we created two new Ubuntu 14.04 ECS instances and upgraded them manually to Ubuntu 16.04.1 with apt-get; the exact version is Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-58-generic x86_64).
Ubuntu 16.04 uses systemd as its init system; for installing and configuring Docker you can refer to my post "When Docker Meets systemd". For the Docker version I picked the latest stable release available: 1.12.5.
# docker version Client: Version: 1.12.5 API version: 1.24 Go version: go1.6.4 Git commit: 7392c3b Built: Fri Dec 16 02:42:17 2016 OS/Arch: linux/amd64 Server: Version: 1.12.5 API version: 1.24 Go version: go1.6.4 Git commit: 7392c3b Built: Fri Dec 16 02:42:17 2016 OS/Arch: linux/amd64
As for the Kubernetes version, as mentioned above, we use the freshly released Kubernetes 1.5.1. 1.5.1 is an emergency fix on top of 1.5.0, mainly "to address default flag values which in isolation were not problematic, but in concert could result in an insecure cluster". The project recommends skipping 1.5.0 and going straight to 1.5.1.
Let me repeat this once more: installing, configuring and getting Kubernetes to work end to end is hard, doing it on Aliyun is even harder, and sometimes you need a bit of luck. Kubernetes, Docker, cni and the various network add-ons are all under active development; a step, tip or trick that works today may be outdated tomorrow, so please keep that in mind when following the steps in this post ^0^.
II. Preparing the Packages
We created two new ECS instances this time, one as the master node and one as the minion node. With a default kubeadm install, the master node does not take part in Pod scheduling and carries no workload, i.e. no non-core component Pods are created on the master node. This restriction can be lifted with the kubectl taint command, but that comes later.
Cluster topology:
master node: 10.47.217.91, hostname: iZ25beglnhtZ; minion node: 10.28.61.30, hostname: iZ2ze39jeyizepdxhwqci6Z
The main reference for this install is the official Kubernetes guide "Installing Kubernetes on Linux with kubeadm".
In this section we prepare the packages, i.e. download kubeadm and the k8s core components needed for this install onto the two nodes above. Note: if you have a download accelerator (proxy/mirror), the steps below will go especially smoothly; if not, … The following commands must be run on both nodes.
1. Add the apt key
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - OK
2. Add the Kubernetes repository and refresh the package lists
Add the Kubernetes source under the sources.list.d directory:
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main EOF # cat /etc/apt/sources.list.d/kubernetes.list deb http://apt.kubernetes.io/ kubernetes-xenial main
Refresh the package lists:
# apt-get update ... ... Hit:2 http://mirrors.aliyun.com/ubuntu xenial InRelease Hit:3 https://apt.dockerproject.org/repo ubuntu-xenial InRelease Get:4 http://mirrors.aliyun.com/ubuntu xenial-security InRelease [102 kB] Get:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [6,299 B] Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [1,739 B] Get:6 http://mirrors.aliyun.com/ubuntu xenial-updates InRelease [102 kB] Get:7 http://mirrors.aliyun.com/ubuntu xenial-proposed InRelease [253 kB] Get:8 http://mirrors.aliyun.com/ubuntu xenial-backports InRelease [102 kB] Fetched 568 kB in 19s (28.4 kB/s) Reading package lists... Done
3. Download the Kubernetes core components
For this install we can download the Kubernetes core components, including kubelet, kubeadm, kubectl and kubernetes-cni, via apt-get.
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: libtimedate-perl Use 'apt autoremove' to remove it. The following additional packages will be installed: ebtables ethtool socat The following NEW packages will be installed: ebtables ethtool kubeadm kubectl kubelet kubernetes-cni socat 0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded. Need to get 37.6 MB of archives. After this operation, 261 MB of additional disk space will be used. Get:2 http://mirrors.aliyun.com/ubuntu xenial/main amd64 ebtables amd64 2.0.10.4-3.4ubuntu1 [79.6 kB] Get:6 http://mirrors.aliyun.com/ubuntu xenial/main amd64 ethtool amd64 1:4.5-1 [97.5 kB] Get:7 http://mirrors.aliyun.com/ubuntu xenial/universe amd64 socat amd64 1.7.3.1-1 [321 kB] Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.3.0.1-07a8a2-00 [6,877 kB] Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.5.1-00 [15.1 MB] Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.5.1-00 [7,954 kB] Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.6.0-alpha.0-2074-a092d8e0f95f52-00 [7,120 kB] Fetched 37.6 MB in 36s (1,026 kB/s) ... ... Unpacking kubeadm (1.6.0-alpha.0-2074-a092d8e0f95f52-00) ... Processing triggers for systemd (229-4ubuntu13) ... Processing triggers for ureadahead (0.100.0-19) ... Processing triggers for man-db (2.7.5-1) ... Setting up ebtables (2.0.10.4-3.4ubuntu1) ... update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults Setting up ethtool (1:4.5-1) ... Setting up kubernetes-cni (0.3.0.1-07a8a2-00) ... Setting up socat (1.7.3.1-1) ... Setting up kubelet (1.5.1-00) ... Setting up kubectl (1.5.1-00) ... Setting up kubeadm (1.6.0-alpha.0-2074-a092d8e0f95f52-00) ... Processing triggers for systemd (229-4ubuntu13) ... Processing triggers for ureadahead (0.100.0-19) ... ... ...
The downloaded kube components are not started automatically. Under /lib/systemd/system we can see kubelet.service:
# ls /lib/systemd/system|grep kube kubelet.service //kubelet.service [Unit] Description=kubelet: The Kubernetes Node Agent Documentation=http://kubernetes.io/docs/ [Service] ExecStart=/usr/bin/kubelet Restart=always StartLimitInterval=0 RestartSec=10 [Install] WantedBy=multi-user.target
The kubelet version:
# kubelet --version Kubernetes v1.5.1
The k8s core components are all in place; next we bootstrap the kubernetes cluster. This is also where the problems begin, and those problems and how they were solved are the real point of this post.
III. Initializing the Cluster
As said before, in theory kubeadm builds a cluster with just the init and join commands; init initializes the cluster on the master node. Unlike the pre-1.4 deployment methods, the k8s core components installed by kubeadm all run as containers on the master node. So before kubeadm init it is best to hook a registry mirror/proxy up to the docker engine on the master node, because kubeadm pulls many core component images from the gcr.io/google_containers repository, roughly these:
gcr.io/google_containers/kube-controller-manager-amd64 v1.5.1 cd5684031720 2 weeks ago 102.4 MB gcr.io/google_containers/kube-apiserver-amd64 v1.5.1 8c12509df629 2 weeks ago 124.1 MB gcr.io/google_containers/kube-proxy-amd64 v1.5.1 71d2b27b03f6 2 weeks ago 175.6 MB gcr.io/google_containers/kube-scheduler-amd64 v1.5.1 6506e7b74dac 2 weeks ago 53.97 MB gcr.io/google_containers/etcd-amd64 3.0.14-kubeadm 856e39ac7be3 5 weeks ago 174.9 MB gcr.io/google_containers/kubedns-amd64 1.9 26cf1ed9b144 5 weeks ago 47 MB gcr.io/google_containers/dnsmasq-metrics-amd64 1.0 5271aabced07 7 weeks ago 14 MB gcr.io/google_containers/kube-dnsmasq-amd64 1.4 3ec65756a89b 3 months ago 5.13 MB gcr.io/google_containers/kube-discovery-amd64 1.0 c5e0c9a457fc 3 months ago 134.2 MB gcr.io/google_containers/exechealthz-amd64 1.2 93a43bfb39bf 3 months ago 8.375 MB gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 7 months ago 746.9 kB
In the kubeadm documentation, installing the Pod network is a separate step; kubeadm init does not choose and install a default Pod network for you. Flannel is our first choice for the Pod network, not only because our previous cluster used flannel and it behaved reliably, but also because Flannel is the overlay network add-on that coreos built specifically for k8s; the flannel repository's readme.md even says: "flannel is a network fabric for containers, designed for Kubernetes". If we want to use Flannel, then per the kubeadm docs we must pass init the option --pod-network-cidr=10.244.0.0/16.
1. Run kubeadm init
Run the kubeadm init command:
# kubeadm init --pod-network-cidr=10.244.0.0/16 [kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters. [preflight] Running pre-flight checks [preflight] Starting the kubelet service [init] Using Kubernetes version: v1.5.1 [tokens] Generated token: "2e7da9.7fc5668ff26430c7" [certificates] Generated Certificate Authority key and certificate. [certificates] Generated API Server key and certificate [certificates] Generated Service Account signing keys [certificates] Created keys and certificates in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [apiclient] Created API client, waiting for the control plane to become ready //如果没有挂加速器,可能会在这里hang住。 [apiclient] All control plane components are healthy after 54.789750 seconds [apiclient] Waiting for at least one node to register and become ready [apiclient] First node is ready after 1.003053 seconds [apiclient] Creating a test deployment [apiclient] Test deployment succeeded [token-discovery] Created the kube-discovery deployment, waiting for it to become ready [token-discovery] kube-discovery is ready after 62.503441 seconds [addons] Created essential addon: kube-proxy [addons] Created essential addon: kube-dns Your Kubernetes master has initialized successfully! You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: http://kubernetes.io/docs/admin/addons/ You can now join any number of machines by running the following on each node: kubeadm join --token=2e7da9.7fc5668ff26430c7 123.56.200.187
What has changed on the master node after a successful init? The k8s core components are all up and running:
# ps -ef|grep kube root 2477 2461 1 16:36 ? 00:00:04 kube-proxy --kubeconfig=/run/kubeconfig root 30860 1 12 16:33 ? 00:01:09 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local root 30952 30933 0 16:33 ? 00:00:01 kube-scheduler --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080 root 31128 31103 2 16:33 ? 00:00:11 kube-controller-manager --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080 --cluster-name=kubernetes --root-ca-file=/etc/kubernetes/pki/ca.pem --service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 root 31223 31207 2 16:34 ? 00:00:10 kube-apiserver --insecure-bind-address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=6443 --allow-privileged --advertise-address=123.56.200.187 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --etcd-servers=http://127.0.0.1:2379 root 31491 31475 0 16:35 ? 00:00:00 /usr/local/bin/kube-discovery
And most of them run as containers:
# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c16c442b7eca gcr.io/google_containers/kube-proxy-amd64:v1.5.1 "kube-proxy --kubecon" 6 minutes ago Up 6 minutes k8s_kube-proxy.36dab4e8_kube-proxy-sb4sm_kube-system_43fb1a2c-cb46-11e6-ad8f-00163e1001d7_2ba1648e 9f73998e01d7 gcr.io/google_containers/kube-discovery-amd64:1.0 "/usr/local/bin/kube-" 8 minutes ago Up 8 minutes k8s_kube-discovery.7130cb0a_kube-discovery-1769846148-6z5pw_kube-system_1eb97044-cb46-11e6-ad8f-00163e1001d7_fd49c2e3 dd5412e5e15c gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" 9 minutes ago Up 9 minutes k8s_kube-apiserver.1c5a91d9_kube-apiserver-iz25beglnhtz_kube-system_eea8df1717e9fea18d266103f9edfac3_8cae8485 60017f8819b2 gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm "etcd --listen-client" 9 minutes ago Up 9 minutes k8s_etcd.c323986f_etcd-iz25beglnhtz_kube-system_3a26566bb004c61cd05382212e3f978f_06d517eb 03c2463aba9c gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1 "kube-controller-mana" 9 minutes ago Up 9 minutes k8s_kube-controller-manager.d30350e1_kube-controller-manager-iz25beglnhtz_kube-system_9a40791dd1642ea35c8d95c9e610e6c1_3b05cb8a fb9a724540a7 gcr.io/google_containers/kube-scheduler-amd64:v1.5.1 "kube-scheduler --add" 9 minutes ago Up 9 minutes k8s_kube-scheduler.ef325714_kube-scheduler-iz25beglnhtz_kube-system_dc58861a0991f940b0834f8a110815cb_9b3ccda2 .... ...
These core components, however, are not running in the pod network (right, the pod network has not even been created yet); they use the host network. Take the kube-apiserver pod's info as an example:
kube-system kube-apiserver-iz25beglnhtz 1/1 Running 0 1h 10.47.217.91 iz25beglnhtz
kube-apiserver's IP is the host IP, from which we can infer the container uses the host network; this can also be seen from the network attributes of its corresponding pause container:
# docker ps |grep apiserver a5a76bc59e38 gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" About an hour ago Up About an hour k8s_kube-apiserver.2529402_kube-apiserver-iz25beglnhtz_kube-system_25d646be9a0092138dc6088fae6f1656_ec0079fc ef4d3bf057a6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD.d8dbe16c_kube-apiserver-iz25beglnhtz_kube-system_25d646be9a0092138dc6088fae6f1656_bbfd8a31
Inspecting the pause container shows the value of its NetworkMode:
"NetworkMode": "host",
If something goes wrong partway through kubeadm init, for example it hangs because no mirror/proxy was configured beforehand, you may hit ctrl+c to abort it. After reconfiguring and running kubeadm init again, you may then see the following kubeadm output:
# kubeadm init --pod-network-cidr=10.244.0.0/16 [kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters. [preflight] Running pre-flight checks [preflight] Some fatal errors occurred: Port 10250 is in use /etc/kubernetes/manifests is not empty /etc/kubernetes/pki is not empty /var/lib/kubelet is not empty /etc/kubernetes/admin.conf already exists /etc/kubernetes/kubelet.conf already exists [preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
kubeadm automatically checks whether the current environment contains "leftovers" from a previous run. If it does, they must be cleaned up before init can run again. The environment can be cleaned with "kubeadm reset" so we can start over.
# kubeadm reset [preflight] Running pre-flight checks [reset] Draining node: "iz25beglnhtz" [reset] Removing node: "iz25beglnhtz" [reset] Stopping the kubelet service [reset] Unmounting mounted directories in "/var/lib/kubelet" [reset] Removing kubernetes-managed containers [reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/etcd] [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki] [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]
2. Install the flannel pod network
After kubeadm init, if you poke around the current cluster state or the core components' logs you will notice some "anomalies"; for example, the kubelet log keeps printing errors like:
Dec 26 16:36:48 iZ25beglnhtZ kubelet[30860]: E1226 16:36:48.365885 30860 docker_manager.go:2201] Failed to setup network for pod "kube-dns-2924299975-pddz5_kube-system(43fd7264-cb46-11e6-ad8f-00163e1001d7)" using network plugins "cni": cni config unintialized; Skipping pod
With kubectl get pod --all-namespaces -o wide you will also find the kube-dns pod stuck in ContainerCreating.
None of this matters yet, because we have not installed a Pod network for the cluster. As said above, we want the Flannel network, so we run the following install command:
#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created
After a short wait, we look at the cluster info on the master node again:
# ps -ef|grep kube|grep flannel root 6517 6501 0 17:20 ? 00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr root 6573 6546 0 17:20 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done # kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system dummy-2088944543-s0c5g 1/1 Running 0 50m kube-system etcd-iz25beglnhtz 1/1 Running 0 50m kube-system kube-apiserver-iz25beglnhtz 1/1 Running 0 50m kube-system kube-controller-manager-iz25beglnhtz 1/1 Running 0 50m kube-system kube-discovery-1769846148-6z5pw 1/1 Running 0 50m kube-system kube-dns-2924299975-pddz5 4/4 Running 0 49m kube-system kube-flannel-ds-5ww9k 2/2 Running 0 4m kube-system kube-proxy-sb4sm 1/1 Running 0 49m kube-system kube-scheduler-iz25beglnhtz 1/1 Running 0 49m
At least all of the cluster's core components are now running. It looks like a success.
3. Minion node: join the cluster
Next the minion node joins the cluster. Here we use kubeadm's second command: kubeadm join.
Run on the minion node (note: make sure port 9898 on the master node is open in its firewall):
# kubeadm join --token=2e7da9.7fc5668ff26430c7 123.56.200.187 [kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters. [preflight] Running pre-flight checks [tokens] Validating provided token [discovery] Created cluster info discovery client, requesting info from "http://123.56.200.187:9898/cluster-info/v1/?token-id=2e7da9" [discovery] Cluster info object received, verifying signature using given token [discovery] Cluster info signature and contents are valid, will use API endpoints [https://123.56.200.187:6443] [bootstrap] Trying to connect to endpoint https://123.56.200.187:6443 [bootstrap] Detected server version: v1.5.1 [bootstrap] Successfully established connection with endpoint "https://123.56.200.187:6443" [csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request [csr] Received signed certificate from the API server: Issuer: CN=kubernetes | Subject: CN=system:node:iZ2ze39jeyizepdxhwqci6Z | CA: false Not before: 2016-12-26 09:31:00 +0000 UTC Not After: 2017-12-26 09:31:00 +0000 UTC [csr] Generating kubelet configuration [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" Node join complete: * Certificate signing request sent to master and response received. * Kubelet informed of new secure connection details. Run 'kubectl get nodes' on the master to see this machine join.
Also smooth. The k8s components seen on the minion node:
d85cf36c18ed gcr.io/google_containers/kube-proxy-amd64:v1.5.1 "kube-proxy --kubecon" About an hour ago Up About an hour k8s_kube-proxy.36dab4e8_kube-proxy-lsn0t_kube-system_b8eddf1c-cb4e-11e6-ad8f-00163e1001d7_5826f32b a60e373b48b8 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD.d8dbe16c_kube-proxy-lsn0t_kube-system_b8eddf1c-cb4e-11e6-ad8f-00163e1001d7_46bfcf67 a665145eb2b5 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64 "/bin/sh -c 'set -e -" About an hour ago Up About an hour k8s_install-cni.17d8cf2_kube-flannel-ds-tr8zr_kube-system_06eca729-cb72-11e6-ad8f-00163e1001d7_01e12f61 5b46f2cb0ccf gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD.d8dbe16c_kube-flannel-ds-tr8zr_kube-system_06eca729-cb72-11e6-ad8f-00163e1001d7_ac880d20
On the master node we check the current cluster status:
# kubectl get nodes NAME STATUS AGE iz25beglnhtz Ready,master 1h iz2ze39jeyizepdxhwqci6z Ready 21s
The k8s cluster has been created "successfully"! Has it really? The "fun" has only just begun :(!
IV. Flannel Pod Network Problems
The afterglow of the successful join had barely faded when I found problems with the Flannel pod network, and the troubleshooting officially began :(.
1. flannel on the minion node errors out from time to time
Right after the join everything was fine, but after a little while errors showed up in kubectl get pod --all-namespaces:
kube-system kube-flannel-ds-tr8zr 1/2 CrashLoopBackOff 189 16h
This turned out to be caused by one of the containers in the flannel pod on the minion node failing; the specific error we traced was:
# docker logs bc0058a15969 E1227 06:17:50.605110 1 main.go:127] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-tr8zr': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-tr8zr: dial tcp 10.96.0.1:443: i/o timeout
10.96.0.1 is the cluster IP of the apiserver service in the pod network, yet the flannel component on the minion node cannot reach this cluster IP at all! The strange part is that sometimes, after the Pod has been restarted N times or deleted and recreated, it suddenly turns Running again; the behaviour is very erratic.
Among flannel's github.com issues there are at least two open issues closely related to this problem:
https://github.com/coreos/flannel/issues/545
https://github.com/coreos/flannel/issues/535
There is no definitive fix for this yet. Whenever the flannel pod on the minion node recovered to Running on its own, we could continue.
2. A workaround for the flannel pod failing to start on the minion node
In the issue below, many developers discuss one possible cause of the flannel pod failing to start on the minion node, along with a temporary workaround:
https://github.com/kubernetes/kubernetes/issues/34101
The gist is that kube-proxy on the minion node picks the wrong interface; the problem can be worked around as follows. Run on the minion node:
# kubectl -n kube-system get ds -l 'component=kube-proxy' -o json | jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--cluster-cidr=10.244.0.0/16"]' | kubectl apply -f - && kubectl -n kube-system delete pods -l 'component=kube-proxy' daemonset "kube-proxy" configured pod "kube-proxy-lsn0t" deleted pod "kube-proxy-sb4sm" deleted
After running it, the flannel pods' status:
kube-system kube-flannel-ds-qw291 2/2 Running 8 17h kube-system kube-flannel-ds-x818z 2/2 Running 17 1h
After 17 restarts, the flannel pod on the minion node finally started OK. Its flannel container's startup log:
# docker logs 1f64bd9c0386 I1227 07:43:26.670620 1 main.go:132] Installing signal handlers I1227 07:43:26.671006 1 manager.go:133] Determining IP address of default interface I1227 07:43:26.670825 1 kube.go:233] starting kube subnet manager I1227 07:43:26.671514 1 manager.go:163] Using 59.110.67.15 as external interface I1227 07:43:26.671575 1 manager.go:164] Using 59.110.67.15 as external endpoint I1227 07:43:26.746811 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN I1227 07:43:26.749785 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE I1227 07:43:26.752343 1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE I1227 07:43:26.755126 1 manager.go:246] Lease acquired: 10.244.1.0/24 I1227 07:43:26.755444 1 network.go:58] Watching for L3 misses I1227 07:43:26.755475 1 network.go:66] Watching for new subnet leases I1227 07:43:27.755830 1 network.go:153] Handling initial subnet events I1227 07:43:27.755905 1 device.go:163] calling GetL2List() dev.link.Index: 10 I1227 07:43:27.756099 1 device.go:168] calling NeighAdd: 123.56.200.187, ca:68:7c:9b:cc:67
The issue says that explicitly passing --api-advertise-addresses to kubeadm init avoids this problem. For now, though, do not put more than one IP after that flag: the documentation says multiple IPs are supported, but in practice, when you explicitly pass two or more IPs, for example like this:
#kubeadm init --api-advertise-addresses=10.47.217.91,123.56.200.187 --pod-network-cidr=10.244.0.0/16
then after the master initializes successfully, the minion node panics when it runs the join command:
# kubeadm join --token=92e977.f1d4d090906fc06a 10.47.217.91 [kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters. ... ... [bootstrap] Successfully established connection with endpoint "https://10.47.217.91:6443" [bootstrap] Successfully established connection with endpoint "https://123.56.200.187:6443" E1228 10:14:05.405294 28378 runtime.go:64] Observed a panic: "close of closed channel" (close of closed channel) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49 /usr/local/go/src/runtime/asm_amd64.s:479 /usr/local/go/src/runtime/panic.go:458 /usr/local/go/src/runtime/chan.go:311 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:85 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:96 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:97 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:52 /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:93 /usr/local/go/src/runtime/asm_amd64.s:2086 [csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request panic: close of closed channel [recovered] panic: close of closed channel goroutine 29 [running]: panic(0x1342de0, 0xc4203eebf0) /usr/local/go/src/runtime/panic.go:500 +0x1a1 k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126 panic(0x1342de0, 0xc4203eebf0) /usr/local/go/src/runtime/panic.go:458 +0x243 k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection.func1.1() /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:85 +0x29d k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc420563ee0) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:96 +0x5e k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc420563ee0, 0x12a05f200, 0x0, 0xc420022e01, 0xc4202c2060) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:97 +0xad k8s.io/kubernetes/pkg/util/wait.Until(0xc420563ee0, 0x12a05f200, 0xc4202c2060) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/util/wait/wait.go:52 +0x4d k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection.func1(0xc4203a82f0, 0xc420269b90, 0xc4202c2060, 0xc4202c20c0, 0xc4203d8d80, 0x401, 0x480, 0xc4201e75e0, 0x17, 0xc4201e7560, ...) /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:93 +0x100 created by k8s.io/kubernetes/cmd/kubeadm/app/node.EstablishMasterConnection /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/node/bootstrap.go:94 +0x3ed
The join panic is discussed in detail in this issue: https://github.com/kubernetes/kubernetes/issues/36988
3. open /run/flannel/subnet.env: no such file or directory
As mentioned earlier, by default the master node carries no workload and does not take part in pod scheduling, for safety's sake. We only have a couple of machines here, so the master node has to pitch in as well. The following command lets the master node participate in pod scheduling:
# kubectl taint nodes --all dedicated- node "iz25beglnhtz" tainted
Next we create a deployment; the manifest is as follows:
//run-my-nginx.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: my-nginx spec: replicas: 2 template: metadata: labels: run: my-nginx spec: containers: - name: my-nginx image: nginx:1.10.1 ports: - containerPort: 80
After creating it, the my-nginx pod scheduled onto the master started fine, while the pod on the minion node kept failing; the reason shown was:
Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 28s 28s 1 {default-scheduler } Normal Scheduled Successfully assigned my-nginx-2560993602-0440x to iz2ze39jeyizepdxhwqci6z 27s 1s 26 {kubelet iz2ze39jeyizepdxhwqci6z} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-2560993602-0440x_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-2560993602-0440x_default(ba5ce554-cbf1-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
There is indeed no /run/flannel/subnet.env on the minion node, but the master node does have the file:
// /run/flannel/subnet.env FLANNEL_NETWORK=10.244.0.0/16 FLANNEL_SUBNET=10.244.0.1/24 FLANNEL_MTU=1450 FLANNEL_IPMASQ=true
So we manually created /run/flannel/subnet.env on the minion node, copied in the contents of the master node's file of the same name, and saved it. A moment later the my-nginx pod on the minion node went from error to Running.
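A minimal sketch of that manual copy, assuming root SSH access from the master to the minion node (10.28.61.30 is the minion IP from the topology above):
# run on the master node
ssh root@10.28.61.30 'mkdir -p /run/flannel'
scp /run/flannel/subnet.env root@10.28.61.30:/run/flannel/subnet.env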
4. no IP addresses available in network: cbr0
We changed the replicas of the earlier my-nginx deployment to 3 and created a my-nginx service on top of that deployment's pods:
//my-nginx-svc.yaml apiVersion: v1 kind: Service metadata: name: my-nginx labels: run: my-nginx spec: type: NodePort ports: - port: 80 nodePort: 30062 protocol: TCP selector: run: my-nginx
After the change, we tested connectivity with curl localhost:30062. Requests load-balanced by the VIP to the my-nginx pod on the master node all got responses, but requests balanced to the pod on the minion node blocked until they timed out. Only when checking the pod info did we see that the my-nginx pod newly scheduled onto the minion node had not actually started; the error was:
Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 2m 2m 1 {default-scheduler } Normal Scheduled Successfully assigned my-nginx-1948696469-ph11m to iz2ze39jeyizepdxhwqci6z 2m 0s 177 {kubelet iz2ze39jeyizepdxhwqci6z} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-1948696469-ph11m_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-1948696469-ph11m_default(3700d74a-cc12-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": no IP addresses available in network: cbr0; Skipping pod"
Looking at /var/lib/cni/networks/cbr0 on the minion node, we found the following files in the directory:
10.244.1.10 10.244.1.12 10.244.1.14 10.244.1.16 10.244.1.18 10.244.1.2 10.244.1.219 10.244.1.239 10.244.1.3 10.244.1.5 10.244.1.7 10.244.1.9 10.244.1.100 10.244.1.120 10.244.1.140 10.244.1.160 10.244.1.180 10.244.1.20 10.244.1.22 10.244.1.24 10.244.1.30 10.244.1.50 10.244.1.70 10.244.1.90 10.244.1.101 10.244.1.121 10.244.1.141 10.244.1.161 10.244.1.187 10.244.1.200 10.244.1.220 10.244.1.240 10.244.1.31 10.244.1.51 10.244.1.71 10.244.1.91 10.244.1.102 10.244.1.122 10.244.1.142 10.244.1.162 10.244.1.182 10.244.1.201 10.244.1.221 10.244.1.241 10.244.1.32 10.244.1.52 10.244.1.72 10.244.1.92 10.244.1.103 10.244.1.123 10.244.1.143 10.244.1.163 10.244.1.183 10.244.1.202 10.244.1.222 10.244.1.242 10.244.1.33 10.244.1.53 10.244.1.73 10.244.1.93 10.244.1.104 10.244.1.124 10.244.1.144 10.244.1.164 10.244.1.184 10.244.1.203 10.244.1.223 10.244.1.243 10.244.1.34 10.244.1.54 10.244.1.74 10.244.1.94 10.244.1.105 10.244.1.125 10.244.1.145 10.244.1.165 10.244.1.185 10.244.1.204 10.244.1.224 10.244.1.244 10.244.1.35 10.244.1.55 10.244.1.75 10.244.1.95 10.244.1.106 10.244.1.126 10.244.1.146 10.244.1.166 10.244.1.186 10.244.1.205 10.244.1.225 10.244.1.245 10.244.1.36 10.244.1.56 10.244.1.76 10.244.1.96 10.244.1.107 10.244.1.127 10.244.1.147 10.244.1.167 10.244.1.187 10.244.1.206 10.244.1.226 10.244.1.246 10.244.1.37 10.244.1.57 10.244.1.77 10.244.1.97 10.244.1.108 10.244.1.128 10.244.1.148 10.244.1.168 10.244.1.188 10.244.1.207 10.244.1.227 10.244.1.247 10.244.1.38 10.244.1.58 10.244.1.78 10.244.1.98 10.244.1.109 10.244.1.129 10.244.1.149 10.244.1.169 10.244.1.189 10.244.1.208 10.244.1.228 10.244.1.248 10.244.1.39 10.244.1.59 10.244.1.79 10.244.1.99 10.244.1.11 10.244.1.13 10.244.1.15 10.244.1.17 10.244.1.19 10.244.1.209 10.244.1.229 10.244.1.249 10.244.1.4 10.244.1.6 10.244.1.8 last_reserved_ip 10.244.1.110 10.244.1.130 10.244.1.150 10.244.1.170 10.244.1.190 10.244.1.21 10.244.1.23 10.244.1.25 10.244.1.40 10.244.1.60 10.244.1.80 10.244.1.111 10.244.1.131 10.244.1.151 10.244.1.171 10.244.1.191 10.244.1.210 10.244.1.230 10.244.1.250 10.244.1.41 10.244.1.61 10.244.1.81 10.244.1.112 10.244.1.132 10.244.1.152 10.244.1.172 10.244.1.192 10.244.1.211 10.244.1.231 10.244.1.251 10.244.1.42 10.244.1.62 10.244.1.82 10.244.1.113 10.244.1.133 10.244.1.153 10.244.1.173 10.244.1.193 10.244.1.212 10.244.1.232 10.244.1.252 10.244.1.43 10.244.1.63 10.244.1.83 10.244.1.114 10.244.1.134 10.244.1.154 10.244.1.174 10.244.1.194 10.244.1.213 10.244.1.233 10.244.1.253 10.244.1.44 10.244.1.64 10.244.1.84 10.244.1.115 10.244.1.135 10.244.1.155 10.244.1.175 10.244.1.195 10.244.1.214 10.244.1.234 10.244.1.254 10.244.1.45 10.244.1.65 10.244.1.85 10.244.1.116 10.244.1.136 10.244.1.156 10.244.1.176 10.244.1.196 10.244.1.215 10.244.1.235 10.244.1.26 10.244.1.46 10.244.1.66 10.244.1.86 10.244.1.117 10.244.1.137 10.244.1.157 10.244.1.177 10.244.1.197 10.244.1.216 10.244.1.236 10.244.1.27 10.244.1.47 10.244.1.67 10.244.1.87 10.244.1.118 10.244.1.138 10.244.1.158 10.244.1.178 10.244.1.198 10.244.1.217 10.244.1.237 10.244.1.28 10.244.1.48 10.244.1.68 10.244.1.88 10.244.1.119 10.244.1.139 10.244.1.159 10.244.1.179 10.244.1.199 10.244.1.218 10.244.1.238 10.244.1.29 10.244.1.49 10.244.1.69 10.244.1.89
This has used up every IP in the 10.244.1.x range, so naturally there is no IP left for new pods. Why the range got exhausted is still unclear. The following two open issues are related to this problem:
https://github.com/containernetworking/cni/issues/306
https://github.com/kubernetes/kubernetes/issues/21656
From inside /var/lib/cni/networks/cbr0, the following command releases IPs that may have been leaked by kubelet:
for hash in $(tail -n +1 * | grep '^[A-Za-z0-9]*$' | cut -c 1-8); do if [ -z $(docker ps -a | grep $hash | awk '{print $1}') ]; then grep -irl $hash ./; fi; done | xargs rm
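The same logic, written out as a commented script, may be easier to follow (a sketch; like the one-liner above it assumes it is run from inside /var/lib/cni/networks/cbr0):
#!/bin/bash
# Each file here is named after a reserved IP and contains the hash of the container
# that reserved it; if no container with that hash exists any more, the reservation
# is stale and its file can be removed.
cd /var/lib/cni/networks/cbr0
for hash in $(tail -n +1 * | grep '^[A-Za-z0-9]*$' | cut -c 1-8); do
  if [ -z "$(docker ps -a | grep "$hash" | awk '{print $1}')" ]; then
    # delete every reservation file that still references the orphaned hash
    grep -irl "$hash" ./ | xargs rm
  fi
done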
After running it, the file list in the directory became:
ls -l total 32 drw-r--r-- 2 root root 12288 Dec 27 17:11 ./ drw-r--r-- 3 root root 4096 Dec 27 13:52 ../ -rw-r--r-- 1 root root 64 Dec 27 17:11 10.244.1.2 -rw-r--r-- 1 root root 64 Dec 27 17:11 10.244.1.3 -rw-r--r-- 1 root root 64 Dec 27 17:11 10.244.1.4 -rw-r--r-- 1 root root 10 Dec 27 17:11 last_reserved_ip
The pod was still failing, but this time the reason had changed:
Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 23s 23s 1 {default-scheduler } Normal Scheduled Successfully assigned my-nginx-1948696469-7p4nn to iz2ze39jeyizepdxhwqci6z 22s 1s 22 {kubelet iz2ze39jeyizepdxhwqci6z} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "my-nginx-1948696469-7p4nn_default" with SetupNetworkError: "Failed to setup network for pod \"my-nginx-1948696469-7p4nn_default(a40fe652-cc14-11e6-8c42-00163e1001d7)\" using network plugins \"cni\": \"cni0\" already has an IP address different from 10.244.1.1/24; Skipping pod"
And the files under /var/lib/cni/networks/cbr0 started multiplying rapidly again! The problem was stuck.
5. flannel vxlan does not work; switching the backend to udp does not help either
At this point we were pretty much worn out, so we ran kubeadm reset on both nodes and prepared to start over.
After kubeadm reset, the bridge device cni0 and the interface flannel.1 created earlier by flannel were still alive. To make sure the environment was fully back to its initial state, we deleted the two devices with the following commands:
# ifconfig cni0 down # brctl delbr cni0 # ip link delete flannel.1
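A quick sanity check afterwards (a sketch; brctl comes from the bridge-utils package):
# neither command should list cni0 or flannel.1 any more
ip link show | grep -E 'cni0|flannel' || echo "no flannel devices left"
brctl show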
"Tempered" by the earlier problems, the re-run init and join went remarkably smoothly, and this time the minion node showed no anomalies.
# kubectl get nodes -o wide NAME STATUS AGE EXTERNAL-IP iz25beglnhtz Ready,master 5m <none> iz2ze39jeyizepdxhwqci6z Ready 51s <none> # kubectl get pod --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default my-nginx-1948696469-71h1l 1/1 Running 0 3m default my-nginx-1948696469-zwt5g 1/1 Running 0 3m default my-ubuntu-2560993602-ftdm6 1/1 Running 0 3m kube-system dummy-2088944543-lmlbh 1/1 Running 0 5m kube-system etcd-iz25beglnhtz 1/1 Running 0 6m kube-system kube-apiserver-iz25beglnhtz 1/1 Running 0 6m kube-system kube-controller-manager-iz25beglnhtz 1/1 Running 0 6m kube-system kube-discovery-1769846148-l5lfw 1/1 Running 0 5m kube-system kube-dns-2924299975-mdq5r 4/4 Running 0 5m kube-system kube-flannel-ds-9zwr1 2/2 Running 0 5m kube-system kube-flannel-ds-p7xh2 2/2 Running 0 1m kube-system kube-proxy-dwt5f 1/1 Running 0 5m kube-system kube-proxy-vm6v2 1/1 Running 0 1m kube-system kube-scheduler-iz25beglnhtz 1/1 Running 0 6m
Next we created the my-nginx deployment and service again to test flannel network connectivity. curl against the my-nginx service's nodeport can reach the two nginx pods on the master, but the pod on the minion node is still unreachable.
The flannel docker log on the master:
I1228 02:52:22.097083 1 network.go:225] L3 miss: 10.244.1.2 I1228 02:52:22.097169 1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60 I1228 02:52:22.097335 1 network.go:236] AddL3 succeeded I1228 02:52:55.169952 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:00.801901 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:03.801923 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:04.801764 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:05.801848 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:06.888269 1 network.go:225] L3 miss: 10.244.1.2 I1228 02:53:06.888340 1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60 I1228 02:53:06.888507 1 network.go:236] AddL3 succeeded I1228 02:53:39.969791 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:45.153770 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:48.154822 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:49.153774 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:50.153734 1 network.go:220] Ignoring not a miss: 46:6c:7a:a6:06:60, 10.244.1.2 I1228 02:53:52.154056 1 network.go:225] L3 miss: 10.244.1.2 I1228 02:53:52.154110 1 device.go:191] calling NeighSet: 10.244.1.2, 46:6c:7a:a6:06:60 I1228 02:53:52.154256 1 network.go:236] AddL3 succeeded
The log is full of "Ignoring not a miss" lines, which suggests something is wrong with the vxlan network. The problem looks very close to the one described in this issue:
https://github.com/coreos/flannel/issues/427
Flannel uses vxlan as its backend by default, on the kernel's default vxlan UDP port 8472. Flannel also supports a udp backend on UDP port 8285. So we tried switching the flannel backend. The steps are:
- Download https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml locally;
- Edit kube-flannel.yml: mainly the net-conf.json property, adding a "Backend" field:
--- kind: ConfigMap apiVersion: v1 metadata: name: kube-flannel-cfg namespace: kube-system labels: tier: node app: flannel data: cni-conf.json: | { "name": "cbr0", "type": "flannel", "delegate": { "isDefaultGateway": true } } net-conf.json: | { "Network": "10.244.0.0/16", "Backend": { "Type": "udp", "Port": 8285 } } --- ... ...
- Remove and reinstall the pod network
# kubectl delete -f kube-flannel.yml configmap "kube-flannel-cfg" deleted daemonset "kube-flannel-ds" deleted # kubectl apply -f kube-flannel.yml configmap "kube-flannel-cfg" created daemonset "kube-flannel-ds" created # netstat -an|grep 8285 udp 0 0 123.56.200.187:8285 0.0.0.0:*
Testing showed the udp port is reachable; tcpdump -i flannel0 on both nodes shows udp packets being sent and received. But the pod network between the two nodes is still broken.
6. failed to register network: failed to acquire lease: node "iz25beglnhtz" not found
Under normal circumstances, the flannel pod startup logs on the master and minion nodes look like this:
flannel running on the master node:
I1227 04:56:16.577828 1 main.go:132] Installing signal handlers I1227 04:56:16.578060 1 kube.go:233] starting kube subnet manager I1227 04:56:16.578064 1 manager.go:133] Determining IP address of default interface I1227 04:56:16.578576 1 manager.go:163] Using 123.56.200.187 as external interface I1227 04:56:16.578616 1 manager.go:164] Using 123.56.200.187 as external endpoint E1227 04:56:16.579079 1 network.go:106] failed to register network: failed to acquire lease: node "iz25beglnhtz" not found I1227 04:56:17.583744 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN I1227 04:56:17.585367 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE I1227 04:56:17.587765 1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE I1227 04:56:17.589943 1 manager.go:246] Lease acquired: 10.244.0.0/24 I1227 04:56:17.590203 1 network.go:58] Watching for L3 misses I1227 04:56:17.590255 1 network.go:66] Watching for new subnet leases I1227 07:43:27.164103 1 network.go:153] Handling initial subnet events I1227 07:43:27.164211 1 device.go:163] calling GetL2List() dev.link.Index: 5 I1227 07:43:27.164350 1 device.go:168] calling NeighAdd: 59.110.67.15, ca:50:97:1f:c2:ea
flannel running on the minion node:
# docker logs 1f64bd9c0386 I1227 07:43:26.670620 1 main.go:132] Installing signal handlers I1227 07:43:26.671006 1 manager.go:133] Determining IP address of default interface I1227 07:43:26.670825 1 kube.go:233] starting kube subnet manager I1227 07:43:26.671514 1 manager.go:163] Using 59.110.67.15 as external interface I1227 07:43:26.671575 1 manager.go:164] Using 59.110.67.15 as external endpoint I1227 07:43:26.746811 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN I1227 07:43:26.749785 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE I1227 07:43:26.752343 1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE I1227 07:43:26.755126 1 manager.go:246] Lease acquired: 10.244.1.0/24 I1227 07:43:26.755444 1 network.go:58] Watching for L3 misses I1227 07:43:26.755475 1 network.go:66] Watching for new subnet leases I1227 07:43:27.755830 1 network.go:153] Handling initial subnet events I1227 07:43:27.755905 1 device.go:163] calling GetL2List() dev.link.Index: 10 I1227 07:43:27.756099 1 device.go:168] calling NeighAdd: 123.56.200.187, ca:68:7c:9b:cc:67
But during the tests for problem 5 above, we found the following errors in the flannel containers' startup logs:
master node:
# docker logs c2d1cee3df3d I1228 06:53:52.502571 1 main.go:132] Installing signal handlers I1228 06:53:52.502735 1 manager.go:133] Determining IP address of default interface I1228 06:53:52.503031 1 manager.go:163] Using 123.56.200.187 as external interface I1228 06:53:52.503054 1 manager.go:164] Using 123.56.200.187 as external endpoint E1228 06:53:52.503869 1 network.go:106] failed to register network: failed to acquire lease: node "iz25beglnhtz" not found I1228 06:53:52.503899 1 kube.go:233] starting kube subnet manager I1228 06:53:53.522892 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN I1228 06:53:53.524325 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE I1228 06:53:53.526622 1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE I1228 06:53:53.528438 1 manager.go:246] Lease acquired: 10.244.0.0/24 I1228 06:53:53.528744 1 network.go:58] Watching for L3 misses I1228 06:53:53.528777 1 network.go:66] Watching for new subnet leases
minion node:
# docker logs dcbfef45308b I1228 05:28:05.012530 1 main.go:132] Installing signal handlers I1228 05:28:05.012747 1 manager.go:133] Determining IP address of default interface I1228 05:28:05.013011 1 manager.go:163] Using 59.110.67.15 as external interface I1228 05:28:05.013031 1 manager.go:164] Using 59.110.67.15 as external endpoint E1228 05:28:05.013204 1 network.go:106] failed to register network: failed to acquire lease: node "iz2ze39jeyizepdxhwqci6z" not found I1228 05:28:05.013237 1 kube.go:233] starting kube subnet manager I1228 05:28:06.041602 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN I1228 05:28:06.042863 1 ipmasq.go:47] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE I1228 05:28:06.044896 1 ipmasq.go:47] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE I1228 05:28:06.046497 1 manager.go:246] Lease acquired: 10.244.1.0/24 I1228 05:28:06.046780 1 network.go:98] Watching for new subnet leases I1228 05:28:07.047052 1 network.go:191] Subnet added: 10.244.0.0/24
Both nodes log a network-registration failure: failed to register network: failed to acquire lease: node "xxxx" not found. It is hard to say whether these two errors are what breaks the network between the two nodes; over the whole test run the problem came and went. A similar discussion can be found in the flannel issue below:
https://github.com/coreos/flannel/issues/435
The many problems with the Flannel pod network made me decide, for now, to stop using Flannel in the kubeadm-created kubernetes cluster.
V. Calico Pod Network
Among the pod network add-ons Kubernetes supports there are, besides Flannel, calico, Weave Net and others. Here we try Calico, a pod network built on the BGP border gateway protocol. The Calico project has a dedicated document for installing its Pod network into a kubeadm-built k8s cluster, and we meet all the requirements and constraints it describes, for example:
the master node carries the kubeadm.alpha.kubernetes.io/role: master label:
# kubectl get nodes -o wide --show-labels NAME STATUS AGE EXTERNAL-IP LABELS iz25beglnhtz Ready,master 3m <none> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubeadm.alpha.kubernetes.io/role=master,kubernetes.io/hostname=iz25beglnhtz
Before installing calico we still have to run kubeadm reset to reset the environment and delete the various network devices flannel created; see the commands in the sections above.
1. Initialize the cluster
With calico, kubeadm init no longer needs the --pod-network-cidr=10.244.0.0/16 option:
# kubeadm init --api-advertise-addresses=10.47.217.91 [kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters. [preflight] Running pre-flight checks [preflight] Starting the kubelet service [init] Using Kubernetes version: v1.5.1 [tokens] Generated token: "531b3f.3bd900d61b78d6c9" [certificates] Generated Certificate Authority key and certificate. [certificates] Generated API Server key and certificate [certificates] Generated Service Account signing keys [certificates] Created keys and certificates in "/etc/kubernetes/pki" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" [apiclient] Created API client, waiting for the control plane to become ready [apiclient] All control plane components are healthy after 13.527323 seconds [apiclient] Waiting for at least one node to register and become ready [apiclient] First node is ready after 0.503814 seconds [apiclient] Creating a test deployment [apiclient] Test deployment succeeded [token-discovery] Created the kube-discovery deployment, waiting for it to become ready [token-discovery] kube-discovery is ready after 1.503644 seconds [addons] Created essential addon: kube-proxy [addons] Created essential addon: kube-dns Your Kubernetes master has initialized successfully! You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: http://kubernetes.io/docs/admin/addons/ You can now join any number of machines by running the following on each node: kubeadm join --token=531b3f.3bd900d61b78d6c9 10.47.217.91
2. Create the calico network
# kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml configmap "calico-config" created daemonset "calico-etcd" created service "calico-etcd" created daemonset "calico-node" created deployment "calico-policy-controller" created job "configure-calico" created
The actual creation takes a while, because calico has to pull a few images:
# docker images REPOSITORY TAG IMAGE ID CREATED SIZE quay.io/calico/node v1.0.0 74bff066bc6a 7 days ago 256.4 MB calico/ctl v1.0.0 069830246cf3 8 days ago 43.35 MB calico/cni v1.5.5 ada87b3276f3 12 days ago 67.13 MB gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 14 months ago 28.19 MB
calico created two network devices locally on the master node:
# ip a ... ... 47: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0 inet 192.168.91.0/32 scope global tunl0 valid_lft forever preferred_lft forever 48: califa32a09679f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 62:39:10:55:44:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
3. Minion node join
Run the following command to add the minion node to the cluster:
# kubeadm join --token=531b3f.3bd900d61b78d6c9 10.47.217.91
calico also created a network device on the minion node:
57988: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1 link/ipip 0.0.0.0 brd 0.0.0.0 inet 192.168.136.192/32 scope global tunl0 valid_lft forever preferred_lft forever
After a successful join, we check the cluster status:
# kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE kube-system calico-etcd-488qd 1/1 Running 0 18m 10.47.217.91 iz25beglnhtz kube-system calico-node-jcb3c 2/2 Running 0 18m 10.47.217.91 iz25beglnhtz kube-system calico-node-zthzp 2/2 Running 0 4m 10.28.61.30 iz2ze39jeyizepdxhwqci6z kube-system calico-policy-controller-807063459-f21q4 1/1 Running 0 18m 10.47.217.91 iz25beglnhtz kube-system dummy-2088944543-rtsfk 1/1 Running 0 23m 10.47.217.91 iz25beglnhtz kube-system etcd-iz25beglnhtz 1/1 Running 0 23m 10.47.217.91 iz25beglnhtz kube-system kube-apiserver-iz25beglnhtz 1/1 Running 0 23m 10.47.217.91 iz25beglnhtz kube-system kube-controller-manager-iz25beglnhtz 1/1 Running 0 23m 10.47.217.91 iz25beglnhtz kube-system kube-discovery-1769846148-51wdk 1/1 Running 0 23m 10.47.217.91 iz25beglnhtz kube-system kube-dns-2924299975-fhf5f 4/4 Running 0 23m 192.168.91.1 iz25beglnhtz kube-system kube-proxy-2s7qc 1/1 Running 0 4m 10.28.61.30 iz2ze39jeyizepdxhwqci6z kube-system kube-proxy-h2qds 1/1 Running 0 23m 10.47.217.91 iz25beglnhtz kube-system kube-scheduler-iz25beglnhtz 1/1 Running 0 23m 10.47.217.91 iz25beglnhtz
All components are OK. That looks like a good omen! But whether the cross-node pod network actually works still needs checking.
4. Checking cross-node pod network connectivity
We reuse my-nginx-svc.yaml and run-my-nginx.yaml from the flannel tests above to create the my-nginx service and deployment. Note: run "kubectl taint nodes --all dedicated-" on the master node first so that it accepts workload.
Unfortunately, the result is much like flannel's: http requests landing on the master node get nginx responses; the pod on the minion node is still unreachable.
I did not want to spend much more time on calico; I wanted to quickly see whether the next candidate, weave net, meets our needs.
VI. Weave Network for Pods
After all those attempts, the results were disheartening. Weave network looked like the last straw to grasp at. With the groundwork above, I will not list every command's output in detail here. Weave network also has official documentation on integrating with a kubernetes cluster, which is what we mainly followed.
1. Install the weave network add-on
After kubeadm reset we re-initialized the cluster. Next we install the weave network add-on:
# kubectl apply -f https://git.io/weave-kube daemonset "weave-net" created
With both Flannel and calico earlier, installing the pod network add-on itself at least went smoothly. With Weave network, though, we got hit right away :(:
# kubectl get pod --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE kube-system dummy-2088944543-4kxtk 1/1 Running 0 42m 10.47.217.91 iz25beglnhtz kube-system etcd-iz25beglnhtz 1/1 Running 0 42m 10.47.217.91 iz25beglnhtz kube-system kube-apiserver-iz25beglnhtz 1/1 Running 0 42m 10.47.217.91 iz25beglnhtz kube-system kube-controller-manager-iz25beglnhtz 1/1 Running 0 42m 10.47.217.91 iz25beglnhtz kube-system kube-discovery-1769846148-pzv8p 1/1 Running 0 42m 10.47.217.91 iz25beglnhtz kube-system kube-dns-2924299975-09dcb 0/4 ContainerCreating 0 42m <none> iz25beglnhtz kube-system kube-proxy-z465f 1/1 Running 0 42m 10.47.217.91 iz25beglnhtz kube-system kube-scheduler-iz25beglnhtz 1/1 Running 0 42m 10.47.217.91 iz25beglnhtz kube-system weave-net-3wk9h 0/2 CrashLoopBackOff 16 17m 10.47.217.91 iz25beglnhtz
After the install, the weave-net pod reports CrashLoopBackOff. Tracing its container log gives the following error:
docker logs cde899efa0af
time="2016-12-28T08:25:29Z" level=info msg="Starting Weaveworks NPC 1.8.2"
install kubernetes 1.6 on centos 7.3
Install kubelet, kubeadm, docker, kubectl and kubernetes-cni
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
5. Edit the 10-kubeadm.conf: vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Add the following before "ExecStart=":
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"
sudo tee /etc/modules-load.d/overlay.conf <<-'EOF'
overlay
EOF
reboot
lsmod | grep overlay
overlay
vi /etc/sysconfig/docker-storage-setup
add
STORAGE_DRIVER="overlay"
systemctl start docker
systemctl start kubelet
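A short sanity check after the restarts (a sketch, not part of the original note): confirm docker picked up the overlay storage driver and the systemd cgroup driver (the Cgroup Driver line appears on recent docker versions), and make sure systemd has re-read the kubelet drop-in:
docker info | grep -iE 'storage driver|cgroup driver'
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet --no-pager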
One-click install of a GitLab git server with bitnami-gitlab
I have been using GitHub for a while now. Everything on GitHub is public, private repositories cost money, and GitHub's servers are abroad, so downloads are slow. That clearly will not do for internal company use. I had never set up my own git server before, so today I decided to give it a try. After hunting around for a while I found bitnami, a one-click installer package for quickly standing up a GitLab server.
Without further ado, let's get started.
https://bitnami.com/stack/gitlab/installer
Download the software from the address above; the version I downloaded is bitnami-gitlab-8.2.3-4-linux-installer.run.
Download and install
wget https://bitnami.com/redirect/to/87432/bitnami-gitlab-8.2.3-4-linux-installer.run chmod 755 bitnami-gitlab-8.2.3-4-linux-installer.run ./bitnami-gitlab-8.2.3-4-linux-installer.run
Start the configuration
Whether to install PhpPgAdmin: I chose yes
Whether the choices above are correct: if everything looks fine, enter Y
Choose the install directory: I just pressed Enter; the default is /opt/gitlab-8.2.3-4
Create the administrator
Enter the email, username and password; note the password must be at least 8 characters. I entered only 6, got a warning, and had to create it again
Next set the access domain; the default is port 80. The second item asks whether to enable mail support; here I chose y
Then the mail configuration; I chose gmail and set up an smtp account
Whether to start the installation: enter Y to begin
Usage notes
The install directory contains a README.txt with detailed usage instructions.
Installed services:
- GitLab 8.2.3
- Apache 2.4.18
- ImageMagick 6.7.5
- PostgreSQL 9.4.5
- Git 2.6.1
- Ruby 2.1.8
- Rails 4.2.4
- RubyGems 1.8.12
Start, stop and restart the related services:
./ctlscript.sh (start|stop|restart)
./ctlscript.sh (start|stop|restart) postgres
./ctlscript.sh (start|stop|restart) redis
./ctlscript.sh (start|stop|restart) apache
./ctlscript.sh (start|stop|restart) sidekiq
Have a look through the rest yourself.
Converting columns into rows with sed, awk and friends
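The archive carries only the title for this post; as a minimal illustration of the technique it names (col.txt is just a made-up example file), a column of values can be joined into a single row with awk, sed or paste:
$ seq 1 5 > col.txt                                # example input: one value per line
$ awk '{printf "%s ", $0} END {print ""}' col.txt  # awk: print all lines on one row
1 2 3 4 5
$ sed ':a;N;$!ba;s/\n/ /g' col.txt                 # sed: replace the newlines with spaces
1 2 3 4 5
$ paste -sd' ' col.txt                             # paste: the simplest variant
1 2 3 4 5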
Viewing and changing the MySQL database character set
Changing the MySQL character set on Linux:
1. Find the location of MySQL's .cnf files
find / -iname '*.cnf' -print
/usr/share/mysql/my-innodb-heavy-4G.cnf
/usr/share/mysql/my-large.cnf
/usr/share/mysql/my-small.cnf
/usr/share/mysql/my-medium.cnf
/usr/share/mysql/my-huge.cnf
/usr/share/texmf/web2c/texmf.cnf
/usr/share/texmf/web2c/mktex.cnf
/usr/share/texmf/web2c/fmtutil.cnf
/usr/share/texmf/tex/xmltex/xmltexfmtutil.cnf
/usr/share/texmf/tex/jadetex/jadefmtutil.cnf
/usr/share/doc/MySQL-server-community-5.1.22/my-innodb-heavy-4G.cnf
/usr/share/doc/MySQL-server-community-5.1.22/my-large.cnf
/usr/share/doc/MySQL-server-community-5.1.22/my-small.cnf
/usr/share/doc/MySQL-server-community-5.1.22/my-medium.cnf
/usr/share/doc/MySQL-server-community-5.1.22/my-huge.cnf
2. Copy one of my-small.cnf, my-medium.cnf, my-huge.cnf or my-innodb-heavy-4G.cnf to /etc and name it my.cnf
cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
3. Edit my.cnf
vi /etc/my.cnf
Add under [client]:
default-character-set=utf8
Add under [mysqld]:
default-character-set=utf8
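Put together, the relevant sections of /etc/my.cnf look roughly like this (a sketch; note that on MySQL 5.5 and later the [mysqld] option is character-set-server rather than default-character-set):
[client]
default-character-set=utf8

[mysqld]
default-character-set=utf8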
4. Restart MySQL
[root@bogon ~]# /etc/rc.d/init.d/mysql restart
Shutting down MySQL [ 确定 ]
Starting MySQL. [ 确定 ]
[root@bogon ~]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.1.22-rc-community-log MySQL Community Edition (GPL)
Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the buffer.
5. Check the character set settings
mysql> show variables like 'collation_%';
+----------------------+-----------------+
| Variable_name        | Value           |
+----------------------+-----------------+
| collation_connection | utf8_general_ci |
| collation_database   | utf8_general_ci |
| collation_server     | utf8_general_ci |
+----------------------+-----------------+
3 rows in set (0.02 sec)
mysql> show variables like 'character_set_%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.02 sec)
mysql>
Some other ways to set it:
Change a database's character set
mysql>use mydb
mysql>alter database mydb character set utf8;
Create a database with a specified character set
mysql>create database mydb character set utf8;
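A slightly fuller sketch that also pins the collation and verifies the result (mydb is just the example name used above):
mysql> create database mydb character set utf8 collate utf8_general_ci;
mysql> show create database mydb;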
Change it via the configuration file:
Edit /var/lib/mysql/mydb/db.opt, changing
default-character-set=latin1
default-collation=latin1_swedish_ci
to
default-character-set=utf8
default-collation=utf8_general_ci
Restart MySQL:
[root@bogon ~]# /etc/rc.d/init.d/mysql restart
Change it via the MySQL command line:
mysql> set character_set_client=utf8;
Query OK, 0 rows affected (0.00 sec)
mysql> set character_set_connection=utf8;
Query OK, 0 rows affected (0.00 sec)
mysql> set character_set_database=utf8;
Query OK, 0 rows affected (0.00 sec)
mysql> set character_set_results=utf8;
Query OK, 0 rows affected (0.00 sec)
mysql> set character_set_server=utf8;
Query OK, 0 rows affected (0.00 sec)
mysql> set character_set_system=utf8;
Query OK, 0 rows affected (0.01 sec)
mysql> set collation_connection=utf8;
Query OK, 0 rows affected (0.01 sec)
mysql> set collation_database=utf8;
Query OK, 0 rows affected (0.01 sec)
mysql> set collation_server=utf8;
Query OK, 0 rows affected (0.01 sec)
Check:
mysql> show variables like 'character_set_%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.03 sec)
mysql> show variables like 'collation_%';
+----------------------+-----------------+
| Variable_name        | Value           |
+----------------------+-----------------+
| collation_connection | utf8_general_ci |
| collation_database   | utf8_general_ci |
| collation_server     | utf8_general_ci |
+----------------------+-----------------+
3 rows in set (0.04 sec)
-------------------------------------------------------------------------
[Reposted reference article]
MySQL character set issues
MySQL's character set support (Character Set Support) has two aspects:
the character set (Character set) and the collation (sort order, Collation).
Character set support is refined down to four levels:
server, database, table and connection.
1. MySQL default character sets
MySQL对于字符集的指定可以细化到一个数据库,一张表,一列,应该用什么字符集。
但是,传统的程序在创建数据库和数据表时并没有使用那么复杂的配置,它们用的是默认的配置,那么,默认的配置从何而来呢? (1)编译MySQL 时,指定了一个默认的字符集,这个字符集是 latin1;
(2)安装MySQL 时,可以在配置文件 (my.ini) 中指定一个默认的的字符集,如果没指定,这个值继承自编译时指定的;
(3)启动mysqld 时,可以在命令行参数中指定一个默认的的字符集,如果没指定,这个值继承自配置文件中的配置,此时 character_set_server 被设定为这个默认的字符集;
(4)当创建一个新的数据库时,除非明确指定,这个数据库的字符集被缺省设定为character_set_server;
(5)当选定了一个数据库时,character_set_database 被设定为这个数据库默认的字符集;
(6)在这个数据库里创建一张表时,表默认的字符集被设定为 character_set_database,也就是这个数据库默认的字符集;
(7)当在表内设置一栏时,除非明确指定,否则此栏缺省的字符集就是表默认的字符集;
简单的总结一下,如果什么地方都不修改,那么所有的数据库的所有表的所有栏位的都用
latin1 存储,不过我们如果安装 MySQL,一般都会选择多语言支持,也就是说,安装程序会自动在配置文件中把
default_character_set 设置为 UTF-8,这保证了缺省情况下,所有的数据库的所有表的所有栏位的都用 UTF-8 存储。
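The inheritance chain described in (4)-(7) can be verified directly; a small sketch (testdb and t are placeholder names):
CREATE DATABASE testdb;           -- inherits character_set_server
SHOW CREATE DATABASE testdb;      -- shows the inherited DEFAULT CHARACTER SET
USE testdb;
CREATE TABLE t (c VARCHAR(10));   -- inherits the database default
SHOW CREATE TABLE t;              -- shows the table (and column) character set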
2. Checking the default character set (by default, MySQL's character set is latin1, i.e. ISO-8859-1)
The character set and collation settings can usually be inspected with the following two commands:
mysql> SHOW VARIABLES LIKE 'character%';
+--------------------------+---------------------------------+
| Variable_name            | Value                           |
+--------------------------+---------------------------------+
| character_set_client     | latin1                          |
| character_set_connection | latin1                          |
| character_set_database   | latin1                          |
| character_set_filesystem | binary                          |
| character_set_results    | latin1                          |
| character_set_server     | latin1                          |
| character_set_system     | utf8                            |
| character_sets_dir       | D:\mysql-5.0.37\share\charsets\ |
+--------------------------+---------------------------------+
mysql> SHOW VARIABLES LIKE 'collation_%';
+----------------------+-----------------+
| Variable_name        | Value           |
+----------------------+-----------------+
| collation_connection | utf8_general_ci |
| collation_database   | utf8_general_ci |
| collation_server     | utf8_general_ci |
+----------------------+-----------------+
3. Changing the default character set
(1) The simplest way is to edit the character set settings in MySQL's my.ini, e.g.
default-character-set = utf8
character_set_server = utf8
After the change, restart MySQL: service mysql restart
Checking again with mysql> SHOW VARIABLES LIKE 'character%'; shows that the encodings have all changed to utf8:
+--------------------------+---------------------------------+
| Variable_name            | Value                           |
+--------------------------+---------------------------------+
| character_set_client     | utf8                            |
| character_set_connection | utf8                            |
| character_set_database   | utf8                            |
| character_set_filesystem | binary                          |
| character_set_results    | utf8                            |
| character_set_server     | utf8                            |
| character_set_system     | utf8                            |
| character_sets_dir       | D:\mysql-5.0.37\share\charsets\ |
+--------------------------+---------------------------------+
(2) The character set can also be changed with MySQL statements, e.g.
mysql> SET character_set_client = utf8;
The character sets involved in MySQL:
character-set-server / default-character-set: the server character set, used by default.
character-set-database: the database character set.
character-set-table: the table character set.
Their precedence increases in that order, so normally it is enough to set character-set-server and not to specify a character set when creating databases and tables; everything then uses character-set-server.
character-set-client: the client character set. Requests sent from the client to the server are encoded in this character set.
character-set-results: the result character set. Results and messages returned from the server to the client are encoded in this character set.
If character-set-results is not defined on the client, it defaults to character-set-client, so setting character-set-client is usually enough.
To handle Chinese, set both character-set-server and character-set-client to GB2312; to handle multiple languages at once, set them to UTF8.
About MySQL and Chinese text
To avoid mojibake, before executing SQL statements set the following three system variables to the same character set as the server's character-set-server:
character_set_client: the client character set.
character_set_results: the result character set.
character_set_connection: the connection character set.
All three can be set with a single statement sent to MySQL: set names gb2312
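As a sketch, set names is shorthand for setting those three variables together (substitute utf8 for gb2312 in a UTF-8 setup):
SET NAMES gb2312;
-- equivalent to:
SET character_set_client = gb2312;
SET character_set_results = gb2312;
SET character_set_connection = gb2312;  -- also resets collation_connection to the charset's default collation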
About GBK, GB2312, and UTF-8
UTF-8 (Unicode Transformation Format-8bit) may include a BOM but usually does not. It is a multi-byte encoding designed for international text: English characters use 8 bits (one byte) and Chinese characters use 24 bits (three bytes). UTF-8 covers the characters needed in every country, is an international standard, and is highly portable; UTF-8 text displays in any browser that supports the UTF-8 character set, so Chinese encoded in UTF-8 will display even on an English-language IE without installing a Chinese language pack.
GBK is the national standard that extends, and is backward compatible with, GB2312. GBK encodes every character, Chinese or English, in two bytes; to distinguish Chinese characters, their high bit is set to 1. GBK contains all Chinese characters; it is a national encoding, less portable than UTF-8, although UTF-8 data takes more storage than GBK.
GBK/GB2312 and UTF-8 can only be converted into each other via Unicode:
GBK, GB2312 -- Unicode -- UTF8
UTF8 -- Unicode -- GBK, GB2312
For a website or forum with mostly English text, UTF-8 saves space, although many forum plugins still only support GBK.
GB2312 is a subset of GBK, and GBK is a subset of GB18030.
GBK is a large character set that includes Chinese, Japanese, and Korean characters.
For a purely Chinese site GB2312 is sometimes recommended, since GBK can occasionally cause problems.
To avoid mojibake altogether, use UTF-8; it also makes future internationalization straightforward.
UTF-8 can be seen as a large character set covering the encodings of most scripts.
One benefit of UTF-8 is that users in other regions (such as Hong Kong and Taiwan) can read your text without installing Simplified Chinese support and without mojibake.
gb2312 covers Simplified Chinese.
gbk covers Simplified and Traditional Chinese.
big5 covers Traditional Chinese.
utf-8 covers almost all characters.
Diagnosing mojibake
There are two possible situations:
1. the data is already garbled when it is written to the database;
2. the query results come back garbled.
Which one is it? First, at the mysql command line, run
show variables like '%char%';
to inspect the character set settings:
mysql> show variables like '%char%';
+--------------------------+----------------------------------------+
| Variable_name            | Value                                  |
+--------------------------+----------------------------------------+
| character_set_client     | gbk                                    |
| character_set_connection | gbk                                    |
| character_set_database   | gbk                                    |
| character_set_filesystem | binary                                 |
| character_set_results    | gbk                                    |
| character_set_server     | gbk                                    |
| character_set_system     | utf8                                   |
| character_sets_dir       | /usr/local/mysql/share/mysql/charsets/ |
+--------------------------+----------------------------------------+
The output shows the character set settings for the client, the connection, the database, the filesystem, the query results, the server, and the system.
The filesystem character set is fixed, and the system and server character sets are determined at installation time, so they are not the cause of mojibake; mojibake relates to the character set settings of the client, the connection, the database, and the query results.
Note: the client is whatever accesses the MySQL database. When you connect from the command line, the command-line window is the client; when you connect via JDBC or similar, the program is the client.
When Chinese data is written to MySQL, encoding conversions happen at the client, at the connection, and when the data is written to the database; when querying, conversions happen in the returned results, at the connection, and at the client.
So mojibake arises in one or more of these stages: the database, the client, the query results, or the connection.
Now let's fix it. When logging in, connect with mysql --default-character-set=<charset> -u root -p. Running show variables like '%char%'; again then shows that the client, connection, and result character sets are all set to the one chosen at login.
If you are already logged in, the command set names <charset>; achieves the same effect; it is equivalent to the following commands:
set character_set_client = <charset>;
set character_set_connection = <charset>;
set character_set_results = <charset>;
If the above commands have no effect, there is a simpler, more thorough approach.
On Windows:
1. Stop the MySQL service.
2. In the MySQL installation directory find my.ini; if it does not exist, copy my-medium.ini to my.ini.
3. Open my.ini and add default-character-set=utf8 under both [client] and [mysqld], then save and close.
4. Start the MySQL service.
To solve the encoding problem thoroughly, the following must be made consistent:
| character_set_client     | gbk  |
| character_set_connection | gbk  |
| character_set_database   | gbk  |
| character_set_results    | gbk  |
| character_set_server     | gbk  |
| character_set_system     | utf8 |
i.e. these encodings must all be unified.
If you connect through JDBC, the URL can be written as:
URL=jdbc:mysql://localhost:3306/abs?useUnicode=true&characterEncoding=<charset>
JSP pages and other front ends must also be set to the corresponding character set.
The database character set can be specified in MySQL's startup configuration, or forced at creation time by adding default character set <charset> to the create database statement.
With these settings the character set is consistent across the whole write/read path, and no mojibake occurs.
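A short sketch of forcing the character set at creation time so that it matches a UTF-8 JDBC URL (mydb is a placeholder name):
CREATE DATABASE mydb DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
-- pairs with a JDBC URL such as jdbc:mysql://localhost:3306/mydb?useUnicode=true&characterEncoding=utf8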
Why does writing Chinese directly from the command line not produce mojibake even without these settings?
From the command line, the client, connection, and result character sets have not changed: the Chinese you type goes through a series of conversions and is converted back to the original character set, so what you see is not garbled. That does not mean the Chinese is stored correctly in the database.
For example, take a utf8 database, a client using GBK, and a connection using the default ISO-8859-1 (latin1 in MySQL). When the client sends the string "中文", it sends GBK-encoded bytes to the connection layer; the connection layer passes those bytes on as ISO-8859-1, and the database stores them as utf8. Reading that column back as utf8 gives mojibake; in other words, the Chinese was stored in the database in a garbled form. When the same client queries it, the reverse conversions are applied, and the wrong utf8 bytes are converted back into correct GBK and displayed correctly.
Joining the occasion: 帝联科技 becomes a gold sponsor of the 2017 Asia-Pacific CDN Summit
"Opportunity and challenge coexist" describes the CDN industry of the last two years well: traffic keeps exploding and the outlook is good, while competition keeps intensifying and the market has become a red ocean. Against this background, the upcoming 2017 Asia-Pacific CDN Summit carries particular significance.
The "GFIC 2017 Asia-Pacific CDN Summit", hosted by DVBCN&AsiaOTT (众视网), will be held on April 12-13, 2017 on the 2nd floor of the Kempinski Hotel Beijing (50 Liangmaqiao Road). As one of the leading companies in the CDN industry, 帝联科技 has taken part in the summit for five consecutive years and has not missed an edition since the first.
At this summit 帝联科技, as a gold sponsor, will present its flagship products and technology, give a talk, and share its views on the industry's development with its peers.
Past Asia-Pacific CDN Summits: top global CDN partners
As a network platform service provider founded twelve years ago, 帝联科技's growth, transformation, and the logic behind its cloud service platform reflect the industry's move toward an ecosystem-based, fine-grained, and intelligent CDN, and carry guiding significance for the industry.
帝联科技 started out with distributed data centers (IDC), building a broad customer base that laid a solid foundation for its CDN business. As the industry developed, the company's focus shifted to CDN. In 2007 it launched a 7x24 nationwide 400 customer service hotline, and its CDN platform's reserved bandwidth exceeded 250G, marking the completion of its first transformation, with CDN as the dominant business.
In the following years, riding the wave of exploding Internet traffic, 帝联科技 rapidly expanded its resources and technical strength. By the end of 2016 it had more than 500 CDN nodes across the country, 64 overseas nodes, and over 6.5T of reserved bandwidth, forming a high-quality resource pool with wide coverage, strong stability, and competitive pricing, placing it among the industry leaders.
With the surge in streaming traffic and ever fiercer competition, traditional CDN services face unprecedented challenges, and CDN platforms have to be upgraded. 帝联科技 invested heavily, deployed large amounts of high-quality equipment and bandwidth, and recruited well-known technical talent, led by its CTO, to strengthen its engineering team.
After a year of work, 帝联科技 achieved fine-grained resource deployment and intelligent, dynamic network path selection. In particular, its DnDns system largely automated operations and set a technical trend for the CDN industry.
According to CTO 洪克柱, the DnDns system was developed entirely in house with substantial R&D investment. Its base resolver supports UDP/TCP, IPv4/IPv6, wildcard domains, edns0-client-subnet, and high-performance zone-file parsing; its intelligent resolution includes a precise, dynamically updated IP address database, an efficient hierarchical view topology, simple region configuration with only a handful of configuration files, and fast traffic-shift convergence (IP Ratio). Its technical indicators rank among the best in the industry.
From the CCTV Spring Festival Gala live stream to the 3.15 Gala and the recent World Cup qualifiers, the DnDns system has played an important role in high-profile live video streaming, significantly reducing response times and delivering smoother, faster service to customers.
Over twelve years of steady development, 帝联科技 has built solid advantages in resources, technology, and talent, and accumulated more than 3,000 customers, including 腾讯, 百度, 阿里巴巴, 360, 京东, 搜狐, 新浪, 网易, 央视国际, 广电集团, 苏宁集团, 中国移动, and Chinese government security agencies. Its service and technical upgrades have earned customer recognition, ranking first in Tencent's supplier ratings.
A solid foundation is the prerequisite for taking off. Products and systems such as 帝联云存储, the 帝联 elastic computing platform, 帝联 monitoring and alerting, and the 帝联 integrated operations system will continue to drive the company forward. 帝联科技 is gradually growing into a cloud service platform that combines data storage, delivery, computing, and security.
About the 2017 Asia-Pacific CDN Summit
The "GFIC 2017 Asia-Pacific CDN Summit", hosted by DVBCN&AsiaOTT (众视网), is the GFIC series event focused on how CDNs can underpin the "Broadband China" strategy, providing accelerated delivery, cloud computing, cloud storage, big data, security protection, application promotion, and traffic monetization services for e-government, mobile live streaming, online video, virtual reality, smart cities, artificial intelligence, Internet finance, and other industries; it is the largest annual event of its kind in the Asia-Pacific region.
The summit gathers more than 500 companies and thousands of attendees from many countries. Focused on content delivery amid rapid Internet growth, it invites industry leaders including Akamai, Fastly, Level3, Limelight, Telefonica, Orange, TWC, 网宿, 蓝汛, 帝联, UCloud, 云帆加速, 阿里云, 腾讯云, 百度云, 乐视, 爱奇艺, PPTV聚力, 高德地图, 京东云, 东方明珠, 芒果TV, and 华数传媒 to explore together the new landscape of content delivery in the next Internet era.
Workarounds when the last step of Google account registration rejects your phone number with "This phone number cannot be used for verification"
Method 1:
A way to register without a VPN and without switching regions: install 网易邮箱大师 (NetEase Mail Master), add an account, type any @gmail.com address, and register a new account directly on the page it redirects to. Update: this works entirely from the mobile app by following the steps (tested on the iPhone version; the Android version is untested).
Method 2:
After trying many methods and VPNs in different regions, what finally worked was opening QQ Mail, choosing "Add account" -> Gmail, and registering normally from there. A Chinese phone number received the SMS without any problem, no VPN needed, and it was fast.
Method 3:
Personally tested and working. On an iPhone, the phone-number verification problem can be avoided as follows:
1. Connect to a VPN.
2. Settings -> Mail -> Add Account -> Google -> More options -> Create new account.
3. Fill in the details; after entering the phone number the verification code arrives and registration succeeds.
Registration via the web page kept reporting that the phone number could not be used for verification, but this method worked on the first try.
Method 4:
Also tested on an iPhone (with a VPN connected): Settings -> Mail, Contacts, Calendars -> Add Account -> Google -> on the Google sign-in screen choose "More options" -> fill in your name -> then, at the phone number step, do not change the automatically selected country code; simply type +86 followed by your phone number after it. It does not matter which region your VPN exits in. Fill in the rest normally; once the verification code arrives, the rest is straightforward.
Method 5:
A less reliable trick: search Google for "gmail 香港", open the "登入 - Google 帳戶" result (usually the second one), and start the registration from there. The flow and pages are the same as registering directly with Google, except the interface switches to Traditional Chinese.
This does not work 100% of the time, but it is worth a try.