kubernetes (k8s)

k8s basics

Introduction to k8s

  • k8s is a container cluster management system open-sourced by Google in 2014
  • k8s deploys applications as containers
  • k8s makes applications easier to scale
  • k8s makes deploying containerized applications simpler and more efficient

k8s features

  • Automatic bin packing (places application containers based on their resource requirements and the available node resources)
  • Self-healing (restarts containers when they fail)
  • Horizontal scaling (grow or shrink the number of application replicas)
  • Service discovery
  • Rolling updates
  • Version rollback
  • Secret and configuration management (hot reloading)
  • Storage orchestration
  • Batch execution

k8s cluster architecture components

  • master node (control plane)
    • api server — the unified entry point of the cluster; takes RESTful requests and persists state to etcd
    • scheduler — schedules workloads onto nodes
    • controller-manager — runs the cluster's background control loops; one controller per resource type
    • etcd — the storage system that holds all cluster data
  • worker node
    • kubelet — the agent the master runs on every node; manages the containers on that machine
    • kube-proxy — provides network proxying, load balancing, and similar functions

k8s core concepts

  • pod
    • the smallest deployable unit in k8s
    • a collection of one or more containers
    • containers in a pod share the network
    • short, ephemeral lifecycle
  • controller
    • ensures the expected number of pod replicas
    • stateless application deployment
    • stateful application deployment
    • ensures every node runs a copy of the same pod (DaemonSet)
    • one-off and scheduled tasks
  • service
    • defines the access rules for a group of pods
  • ingress
  • deployment

Cluster setup

Platform planning

  • single-master cluster
  • multi-master cluster (high availability)

Server hardware requirements

Test environment

Recommended:

        cpu  memory  disk
master  2c   4G      30G
node    4c   8G      40G

Production environment

        cpu  memory  disk
master
node

Setup methods

  • kubeadm
  • 二进制包

The kubeadm approach

centos7: CentOS-7-x86_64-Minimal-2009.iso

docker: 18.06.1-ce (19.03.13-ce)

Kubernetes v1.21.0

Using VirtualBox VMs

hostname    ip             cpu          memory  disk
k8smaster   192.168.0.110  2 (minimum)  2G      50G
k8snode111  192.168.0.111  1            2G      50G
k8snode112  192.168.0.112  1            2G      50G
# Unless stated otherwise, run the following on all machines
yum install vim lrzsz wget ntpdate -y

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set the hostname according to the plan (see the table above)
# hostnamectl set-hostname k8smaster
# hostnamectl set-hostname k8snode111
# hostnamectl set-hostname k8snode112

# The hosts entries only need to be added on the master
cat >> /etc/hosts << EOF
192.168.0.110 k8smaster
192.168.0.111 k8snode111
192.168.0.112 k8snode112
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Sync the clock
ntpdate time.windows.com

# Good point to take a VM snapshot

# If baidu.com is unreachable, run: echo 'nameserver 192.168.0.1' >> /etc/resolv.conf
ping -c 5 www.baidu.com
# Install docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
# sudo yum install -y docker-ce-19.03.13-3.el7 docker-ce-cli-19.03.13-3.el7

systemctl enable docker && systemctl start docker
docker --version
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

# Create the docker group if it does not exist yet
if ! getent group docker > /dev/null; then
    sudo groupadd docker
else
    echo "docker user group already exists"
fi

sudo systemctl daemon-reload
sudo systemctl restart docker
docker info

# Good point to take a VM snapshot

# Add the Aliyun YUM repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet and kubectl (versions change quickly, so pin them)
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
systemctl enable kubelet

docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

# Good point to take a VM snapshot
# Clone the other two machines; change the cpu count to 1, memory to 3072, and the ip/hostname
# hostnamectl set-hostname k8snode111
# hostnamectl set-hostname k8snode112

# Deploy the Kubernetes master
# [run on k8smaster]; apiserver-advertise-address must be the k8smaster IP, the rest can stay unchanged
kubeadm init --apiserver-advertise-address=192.168.0.110 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

# When the kubeadm init command above succeeds it prints:
# Your Kubernetes control-plane has initialized successfully!

# It also prints the commands below; run them on the master (copy them from the actual output rather than from here)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# List nodes from the master
kubectl get nodes

#ntpdate time.windows.com
ntpdate ntp3.aliyun.com

# Run the printed kubeadm join command on k8snode111 and k8snode112 to join them to the cluster
# Before running join, check that the clocks on all three machines agree [gotcha]
kubeadm join 192.168.0.110:6443 --token nrn4m5.4xvcn3q9iemo8hbb --discovery-token-ca-cert-hash sha256:344c467e12210b41b74190f2bc25d6ce72dbc4184fbe5931d1fae260be022cff 

# At this point kubectl get nodes shows NotReady; run the following on the master
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Wait a while, then kubectl get nodes will show Ready
kubectl get pods -n kube-system
kubectl get nodes
Testing k8s with nginx

# Smoke-test the kubernetes cluster
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc -owide
The output looks like this:
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        16m
service/nginx        NodePort    10.100.1.84   <none>        80:31762/TCP   26s

# Find out where the deployment landed
All of http://192.168.0.110:31762, http://192.168.0.111:31762 and http://192.168.0.112:31762 serve the nginx page:
curl http://192.168.0.110:31762
curl http://192.168.0.111:31762
curl http://192.168.0.112:31762

# Delete nginx after testing so it does not interfere with later commands
k delete service nginx      # k is an alias for kubectl, defined in the alias section below
k delete deployment nginx
Testing k8s with ingress-nginx
ka 02-ingress-controller.yml    # ka/kgp/kdp are aliases from the alias section below
kgp -n ingress-nginx -owide
# If ingress-nginx-controller-xxx shows Pending, and kubectl describe pod reports
# "0/3 nodes are available: 3 node(s) didn't match Pod's node affinity/selector."
# then run:
kubectl label nodes k8smaster ingress-ready='true'

# Running the following on k8smaster should return a response
curl localhost:80
# curl k8smaster:80 from the Windows host works as well
Troubleshooting
# If you see "registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login'"
# run these two commands:
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

# A wrong hostname at kubeadm init time produces a wrong node name [gotcha]
# The node has to be removed:
# kubectl get nodes
# kubectl delete node xxx
# kubeadm reset
# rm -fr ~/.kube

# The default token is valid for 24 hours; after it expires, create a new one:
# kubeadm token create --print-join-command

The binary package approach

centos7: CentOS-7-x86_64-Minimal-2009.iso

docker: 18.06.1-ce

Kubernetes v1.18.0

hostname    ip             cpu          memory  disk  components
k8smaster   192.168.0.110  2 (minimum)  2G      50G   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8snode111  192.168.0.111  1            2G      50G   kubelet, kube-proxy, docker, etcd
k8snode112  192.168.0.112  1            2G      50G   kubelet, kube-proxy, docker, etcd

1. Create the virtual machines

2. Configure the CentOS 7 systems

See the script above.

3. Generate self-signed certificates for etcd and the apiserver

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

mkdir -p ~/TLS/{etcd,k8s}

cat > ~/TLS/etcd/ca-config.json<< EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF

cat > ~/TLS/etcd/ca-csr.json<< EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cd ~/TLS/etcd
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem

cat > ~/TLS/etcd/server-csr.json<< EOF
{
    "CN": "etcd",
    "hosts": [
        "192.168.0.110",
        "192.168.0.111",
        "192.168.0.112"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem

4. Deploy the etcd cluster

wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

mkdir -p /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
# ETCD_NAME: node name, unique within the cluster (must match this node's entry in ETCD_INITIAL_CLUSTER)
ETCD_NAME="etcd-1"
# Data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# Peer listen address
ETCD_LISTEN_PEER_URLS="https://192.168.0.110:2380"
# Client listen address
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.110:2379"
#[Clustering]
# Advertised peer address
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.110:2380"
# Advertised client address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.110:2379"
# Cluster member addresses
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.110:2380,etcd-2=https://192.168.0.111:2380,etcd-3=https://192.168.0.112:2380"
# Cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
# State when joining: new for a new cluster, existing to join an existing one
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

cp ~/TLS/etcd/ca*pem /opt/etcd/ssl/
cp ~/TLS/etcd/server*pem /opt/etcd/ssl/

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

# Copy everything to k8snode111 and k8snode112
scp -r /opt/etcd/ root@192.168.0.111:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.0.111:/usr/lib/systemd/system/

scp -r /opt/etcd/ root@192.168.0.112:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.0.112:/usr/lib/systemd/system/


Then, on k8snode111 and k8snode112, edit the node name and IPs in etcd.conf: every IP except those in ETCD_INITIAL_CLUSTER changes to the local machine's IP, and ETCD_NAME becomes etcd-2 / etcd-3 respectively
vim /opt/etcd/cfg/etcd.conf
Then reload systemd, start etcd, and enable it at boot on both nodes, as scripted below.
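
A scripted version of the per-node edit (a sketch assuming the etcd.conf layout above; shown for k8snode111 — on k8snode112 use etcd-3 and 192.168.0.112 instead):

# On k8snode111:
sed -i \
  -e 's/^ETCD_NAME=.*/ETCD_NAME="etcd-2"/' \
  -e 's#^ETCD_LISTEN_PEER_URLS=.*#ETCD_LISTEN_PEER_URLS="https://192.168.0.111:2380"#' \
  -e 's#^ETCD_LISTEN_CLIENT_URLS=.*#ETCD_LISTEN_CLIENT_URLS="https://192.168.0.111:2379"#' \
  -e 's#^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*#ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.111:2380"#' \
  -e 's#^ETCD_ADVERTISE_CLIENT_URLS=.*#ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.111:2379"#' \
  /opt/etcd/cfg/etcd.conf
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd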


# Check etcd cluster health
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.110:2379,https://192.168.0.111:2379,https://192.168.0.112:2379" endpoint health

# https://192.168.0.112:2379 is healthy: successfully committed proposal: took = 21.565661ms
# https://192.168.0.110:2379 is healthy: successfully committed proposal: took = 21.504695ms
# https://192.168.0.111:2379 is healthy: successfully committed proposal: took = 21.183527ms
# Output like the above means the cluster is up. If something is wrong, check the logs first: /var/log/messages or journalctl -u etcd

5. Deploy the master components

Self-signed certificate authority (CA)

cat > /root/TLS/k8s/ca-config.json<< EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > /root/TLS/k8s/ca-csr.json<< EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cd /root/TLS/k8s
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem

cat > /root/TLS/k8s/server-csr.json<< EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.0.110",
    "192.168.0.111",
    "192.168.0.112",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls server*pem

Browse https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183 and download kubernetes-server-linux-amd64.tar.gz

or

wget https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz

Steps:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

# Deploy kube-apiserver
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.0.110:2379,https://192.168.0.111:2379,https://192.168.0.112:2379 \\
--bind-address=192.168.0.110 \\
--secure-port=6443 \\
--advertise-address=192.168.0.110 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Flag reference:
--logtostderr: enable logging to stderr
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: port range for NodePort Services
--kubelet-client-xxx: client certificates for apiserver access to the kubelet
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

# Generate a random token (token.csv below uses a fixed example value; substitute your own)
head -c 16 /dev/urandom | od -An -t x | tr -d ' '

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF


cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
EOF

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

# With all components started, check the cluster component status with kubectl
kubectl get cs
# NAME                 STATUS    MESSAGE             ERROR
# scheduler            Healthy   ok                  
# etcd-1               Healthy   {"health":"true"}   
# etcd-2               Healthy   {"health":"true"}   
# controller-manager   Healthy   ok                  
# etcd-0               Healthy   {"health":"true"}
# Output like the above means the master components are healthy.

6. Deploy the node components

# Run on k8smaster
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cp ~/kubernetes/server/bin/kubelet /opt/kubernetes/bin
cp ~/kubernetes/server/bin/kube-proxy /opt/kubernetes/bin

cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

Flag reference:
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; generated automatically, later used to talk to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration file
--cert-dir: directory for generated kubelet certificates
--pod-infra-container-image: image of the container that manages the pod network

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

KUBE_APISERVER="https://192.168.0.110:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
cp bootstrap.kubeconfig /opt/kubernetes/cfg

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

# Note: this command did not work in this setup; normally it lists the kubelet's pending CSR,
# which is then approved with: kubectl certificate approve <csr-name>
kubectl get csr

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF



cat > /root/TLS/k8s/kube-proxy-csr.json<< EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cd /root/TLS/k8s/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*pem

KUBE_APISERVER="https://192.168.0.110:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

7. Deploy the CNI network plugin

wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

mkdir /opt/cni/bin -p
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

# Fetch the flannel manifest (same URL as in the kubeadm section), point it at a mirror image, then apply it
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
kubectl apply -f kube-flannel.yml

kubectl commands

kubectl get pod,svc
kubectl get cs
kubectl get node
kubectl get nodes
kubectl api-resources
kubectl api-versions

kubectl replace --force -f <file.yml>

Core features

pod

Basic concepts

  • the smallest deployable unit
  • contains one or more containers
  • containers in a pod share the network
  • pods are ephemeral

Why pods exist

  • docker creates containers; one container runs one application process
  • a pod is multi-process: it runs multiple applications
  • pods exist for tightly coupled applications that need to interact with each other

Implementation

  • shared networking (a pause container is created first and the business containers join it, so all containers in the pod share the same namespaces)
  • shared storage (volumes provide persistent storage shared between containers)

Image pull policy

imagePullPolicy

  • IfNotPresent (default; for the :latest tag the default becomes Always)
  • Always
  • Never
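
A minimal pod sketch showing where imagePullPolicy sits (the pod and image names are illustrative):

cat > pull-policy-demo.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pull-policy-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.15
    imagePullPolicy: IfNotPresent   # or Always / Never
EOF
kubectl apply -f pull-policy-demo.yaml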

Restart policy

restartPolicy

  • Always (default)
  • OnFailure
  • Never

Resource limits

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"

Health checks

  • livenessProbe — liveness check (if it fails, the container is killed and handled according to the pod's restartPolicy)
  • readinessProbe — readiness check (if it fails, k8s removes the pod from the service endpoints)

A probe supports three mechanisms (two of them appear in the sketch after this list):

  • httpGet (send a GET request; a status code of 200-399 counts as success)
  • exec (a shell command exiting with status 0 counts as success)
  • tcpSocket (establishing a TCP connection counts as success)
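
A sketch combining both probe types on an nginx pod (paths, ports and timings are illustrative):

cat > probe-demo.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  restartPolicy: Always             # applied when the liveness probe fails
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:                  # failure: kill the container, then follow restartPolicy
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:                 # failure: remove the pod from the service endpoints
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
kubectl apply -f probe-demo.yaml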

controller

Basic concepts

The object that manages and runs containers on the cluster.

Pods and controllers

Controllers implement day-2 operations for pods, such as scaling and rolling upgrades.

Pods and controllers are linked through labels.

deployment controller use cases

  • deploy stateless applications
  • manage pods and ReplicaSets
  • deployment with rolling upgrades

Typical workloads: web services, microservices

Deploying a (stateless) application with deployment

kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml

kubectl apply -f web.yaml
kubectl get pods
kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=svc1 --dry-run=client -o yaml > svc1.yaml
kubectl apply -f svc1.yaml
kubectl get pods,svc

Upgrading, rolling back and scaling an application

kubectl set image deployment web nginx=nginx:1.15

kubectl scale deployment web --replicas=5
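
The rollback half of this section maps onto kubectl rollout:

# Show the deployment's revision history
kubectl rollout history deployment web
# Roll back to the previous revision
kubectl rollout undo deployment web
# Or roll back to a specific revision
kubectl rollout undo deployment web --to-revision=2
# Watch the rollout progress
kubectl rollout status deployment web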

service

Defines the access rules for a group of pods.

Why services exist

  • keeps pods reachable (service discovery)
  • defines an access policy for a group of pods (load balancing)

Pods and services

A Service selects the pods whose labels match its selector:

selector:
  app: nginx

labels:
  app: nginx

Common service types

  • ClusterIP — access from inside the cluster
  • NodePort — access from outside the cluster
  • LoadBalancer — access from outside the cluster (public cloud)
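
The same kind of NodePort service written out as YAML (names are illustrative); switching the type field flips it between the three kinds:

cat > web-svc.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort        # ClusterIP (default) / NodePort / LoadBalancer
  selector:
    app: web
  ports:
  - port: 80            # the service's cluster-internal port
    targetPort: 80      # the container port
    nodePort: 31000     # optional; must fall within 30000-32767
EOF
kubectl apply -f web-svc.yaml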

Deploying stateful applications

Stateless vs stateful

Stateless
  • all pods are considered identical
  • no ordering requirements
  • no concern about which node a pod runs on
  • scales up and down freely
Stateful
  • all of the above factors matter
  • each pod is independent; startup order and identity are preserved
  • unique network identifier and persistent storage
  • ordered, e.g. a mysql primary/replica setup

Deploying a stateful application

  • headless service
    • ClusterIP: None
    • pods are accessed via domain names
StatefulSet for stateful applications

sts.yaml

The difference between deployment and statefulset: statefulset pods have a unique identity
  • a domain name generated from the host name according to a fixed rule

Each pod has a unique host name.

<host-name>.<service-name>.<namespace>.svc.cluster.local
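
A minimal sketch of what the sts.yaml mentioned above could contain — a headless Service plus a StatefulSet (names are illustrative):

cat > sts.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None               # headless: DNS records only, no virtual IP
  selector:
    app: nginx-sts
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-sts
spec:
  serviceName: nginx-headless
  replicas: 2
  selector:
    matchLabels:
      app: nginx-sts
  template:
    metadata:
      labels:
        app: nginx-sts
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl apply -f sts.yaml
# Pods start in order as web-sts-0, web-sts-1 and get DNS names like
# web-sts-0.nginx-headless.default.svc.cluster.local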

Running daemons with DaemonSet

ds.yaml

job (one-off tasks)

When the run finishes, the job's status changes to Completed.

cron (scheduled tasks)

Each run creates a new pod.
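
Sketches of both task kinds (standard upstream-style examples; before v1.21 CronJob lives in batch/v1beta1, from v1.21 batch/v1 also works):

cat > job-demo.yaml << EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never      # a Job may not use Always
  backoffLimit: 4
EOF

cat > cronjob-demo.yaml << EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"       # every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure
EOF
kubectl apply -f job-demo.yaml -f cronjob-demo.yaml
kubectl get jobs,cronjobs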

Configuration management: Secret

Purpose: store sensitive data, base64-encoded, in etcd and let pods consume it via mounts.
Use case: credentials.

echo -n 'admin' | base64   # -n drops the trailing newline, which would otherwise end up in the value

  1. Create the secret holding the encoded data (see the sketch below)
  2. Mount it into pods as environment variables
  3. Mount it into pods as a Volume
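
A sketch of the three steps (the values are the base64 of admin and of a sample password):

echo -n 'admin' | base64          # YWRtaW4=
echo -n '1f2d1e2e67df' | base64   # MWYyZDFlMmU2N2Rm

cat > my-secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
EOF
kubectl apply -f my-secret.yaml

# Consumed in a pod spec either as env vars:
#   env:
#   - name: SECRET_USERNAME
#     valueFrom:
#       secretKeyRef:
#         name: mysecret
#         key: username
# or as a Volume:
#   volumes:
#   - name: secret-vol
#     secret:
#       secretName: mysecret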

Configuration management: ConfigMap

Purpose: store non-sensitive data in etcd and expose it to pods as environment variables or a Volume.
Use case: configuration files.

  1. Create the configuration file
  2. Create the ConfigMap (see the sketch below)
  3. Mount it into pods as environment variables
  4. Mount it into pods as a Volume
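
A sketch of the four steps using a hypothetical redis.properties file:

# Step 1: the configuration file
cat > redis.properties << EOF
redis.host=127.0.0.1
redis.port=6379
EOF

# Step 2: create the ConfigMap from it (or apply a my-configmap.yml)
kubectl create configmap redis-config --from-file=redis.properties
kubectl get configmap redis-config -o yaml

# Steps 3/4, inside a pod spec:
#   envFrom:                      # as environment variables
#   - configMapRef:
#       name: redis-config
#   volumes:                      # or as a Volume
#   - name: config-vol
#     configMap:
#       name: redis-config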

Cluster security

Omitted (to be filled in later).

Ingress

  • NodePort opens a port on every node; access is via ip:port
  • NodePort drawbacks
    • a port can be used only once; one port maps to one service
Pods and Ingress
  • the two are linked through a service
  • the ingress is the unified entry point; the service selects a group of pods
Ingress workflow
Using Ingress
  • deploy an ingress controller
  • create ingress rules
Exposing an application with Ingress
  • create the nginx application and expose a port (NodePort)
  • deploy the ingress controller
  • create the ingress rule, as sketched below
  • add a hosts entry on Windows and browse to the domain
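
A sketch of an ingress rule that routes a host name to the nginx service created earlier (networking.k8s.io/v1 as of v1.19; older clusters use extensions/v1beta1 with a slightly different backend syntax):

cat > nginx-ingress.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: simple.sample.com       # the name put into the Windows hosts file
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx           # an existing Service
            port:
              number: 80
EOF
kubectl apply -f nginx-ingress.yaml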

helm

Persistent storage

PV and PVC

PersistentVolume,PersistentVolumeClaim

PV: persistent volume; abstracts a storage resource and offers it up for use (the producer)
PVC: claims storage without caring about the implementation details (the consumer)

Pods use PVCs

A PVC binds to a PV, as sketched below.
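
A sketch of the producer/consumer pair (the NFS server address and path are placeholders):

cat > pv-pvc-demo.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:                            # placeholder backend; any PV type works
    path: /data/nfs
    server: 192.168.0.110
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF
kubectl apply -f pv-pvc-demo.yaml
kubectl get pv,pvc                # the PVC should report Bound

# A pod consumes the claim via:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: my-pvc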


kind:

  • Service
  • Deployment

version

Kubernetes v1.18.3
Docker 19.03.5
minikube v1.11.0

install via minikube & start

  • install docker
  • install kubernetes-client(kubectl)
  • install minikube
  • start
sudo yum install -y wget
wget https://dl.k8s.io/v1.20.2/kubernetes-client-linux-amd64.tar.gz
tar zxvf kubernetes-client-linux-amd64.tar.gz
sudo cp kubernetes/client/bin/kubectl /usr/local/bin/

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo chmod +x minikube
sudo mv minikube /usr/local/bin/

sudo yum install -y conntrack
minikube start --registry-mirror=https://registry.docker-cn.com --driver=none --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers

install via kind (k8s in docker) & start

  • prerequisites
    • docker
    • kubectl (recommended; kind itself does not need kubectl, but with it you can manage the cluster directly from this machine)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin/
  • kind create cluster
  • kind create cluster --name my-cluster

Create a cluster from a config file: kind create cluster --config my-kind.yml
The my-kind.yml file lives at ./files/k8s/kind/my-kind.yml

Switching between clusters
(switch to the "kind" cluster) kubectl cluster-info --context kind-kind

install via k3d

https://k3d.io
https://github.com/rancher/k3d/releases/tag/v4.4.8

wget https://github.com/rancher/k3d/releases/download/v4.4.8/k3d-linux-amd64
chmod +x k3d-linux-amd64
mv k3d-linux-amd64 k3d
mv k3d /usr/local/bin/
k3d version
# k3d version v4.4.8
# k3s version v1.21.3-k3s1 (default)
  • prerequisites
    • docker 18.06.1-ce
    • kubectl (recommended; k3d itself does not need kubectl, but with it you can manage the cluster directly from this machine)
wget https://dl.k8s.io/v1.21.3/kubernetes-client-linux-amd64.tar.gz
tar zxvf kubernetes-client-linux-amd64.tar.gz
sudo cp kubernetes/client/bin/kubectl /usr/local/bin/
sudo ln -s /usr/local/bin/kubectl /usr/local/bin/k

usage

  • kubectl get nodes

  • kubectl cluster-info
    kubectl run nginx-deployment --image=nginx:1.7.9
    kubectl get pods
    kubectl describe pod nginx-deployment
    kubectl exec -it nginx-deployment -n default -- /bin/bash

  • kubectl get pods --all-namespaces

  • kubectl get pods -n NAMESPACE

  • kubectl logs service-abc-0 -n NAMESPACE

  • kubectl delete deployment --all

  • kubectl delete svc nginx

  • kubectl exec -it POD_NAME -- bash

  • kubectl get configmap/cm

  • Adjust the replica count: kubectl patch -n NAMESPACE sts APP_NAME -p '{"spec": {"replicas":3}}'

  • Delete a pod: kubectl delete pod ?? -n NAMESPACE

  • kubectl label nodes NODE KEY=VALUE
    kubectl get node --show-labels
    kubectl label nodes NODE KEY-
    kubectl label node 192.168.1.205 mem-
    In mem- above, mem is the label key; the trailing minus removes that label

kubectl get endpoints -n test

alias

# Append the aliases only if they are not already present
if ! grep -q "alias k='kubectl'" ~/.bashrc; then
cat >> ~/.bashrc <<EOF
# kubectl alias begin
alias k='kubectl'
alias kgn='k get nodes'
alias ka='kubectl apply -f '
alias kreplace='kubectl replace --force -f '
alias kga='kubectl get all'
alias kdel='kubectl delete -f '
alias ktail='kubectl logs -f --tail 10 '
alias klogs='kubectl logs -f --tail 10 '
alias klogf='kubectl logs -f --tail 10 '
alias klog='kubectl logs -f --tail 10 '
alias kd='kubectl describe'
alias ke='kubectl explain'
alias kgp='kubectl get pod'
alias kgpo='kubectl get pod -owide'
alias kgpa='kubectl get pods --all-namespaces'
alias kgs='kubectl get service'
alias kgps='kubectl get pod,service'
alias kgsa='kubectl get services --all-namespaces'
alias kds='kubectl describe services'
alias kdp='kubectl describe pod'
# kubectl alias end
EOF
fi

. ~/.bashrc

namespace

  • create a file named 'my-namespace.yml', see ./files/k8s/my-namespace.yml

  • k apply -f my-namespace.yml

  • k get pod -o wide -n my-namespace

  • k get namespaces

  • k get namespaces --show-labels

  • k get deployment -n my-namespace

  • k delete -f my-namespace.yml

  • k describe pod springboot-simple-app-7d499f6986-n7lp7 -n my-namespace

  • k exec springboot-simple-app-7d499f6986-n7lp7 -n my-namespace -ti -- bash

  • Tail the logs of several pods at once: k logs -f --tail=1111 -l app=springboot-simple-app-svc-statefulset --all-containers -n test

  • telnet kube-dns.kube-system.svc.cluster.local 53

configmap

create a file named 'my-configmap.yml', see ./files/k8s/my-configmap.yml

  • k apply -f my-configmap.yml

deployment

  • create a file named 'my-deployment.yml', see ./files/k8s/my-deployment.yml

k apply -f my-deployment.yml

Exec into the container:

k exec springboot-simple-app-7d595f9cfb-w5vmh -n my-namespace -ti -- bash

You should see the init and package directories under /workspace.

service

create a file named 'my-service.yml', see ./files/k8s/my-service.yml

ingress-controller

create a file named 'ingress-controller.yml', see ./files/k8s/ingress-controller.yml

k apply -f ingress-controller.yml

ingress

create a file named 'my-ingress.yml', see ./files/k8s/my-ingress.yml

k apply -f my-ingress.yml

k get pods -n ingress-nginx -o wide

NAME                                      IP
nginx-ingress-controller-766fb9f77-nqmrq  192.168.0.111

Add to the Windows hosts file:
192.168.0.111 simple.sample.com

Questions:

  • /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
    → echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
  • "The connection to the server localhost:8080 was refused - did you specify the right host or port?" / unable to recognize "kube-flannel.yml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
    → running as root avoids the error; otherwise run: mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && sudo chown $(id -u):$(id -g) $HOME/.kube/config

refer

other

  • k3s
  • k3d
  • microK8s
  • kind