

Deploying a Kubernetes Cluster on CentOS 8 and Installing/Configuring Harbor



I have been following Kubernetes for a while. Over the May Day holiday, instead of travelling, I used VMware Workstation to set up a Kubernetes cluster and a Harbor registry in a test environment. I hope these notes are useful to others; writing them up has also deepened my own understanding of Kubernetes.


1. Preparation

Harbor: a separate host for the image registry

k8s-master, k8s-node01, k8s-node02: cluster nodes, deployed with kubeadm


2. Cluster IP assignment

192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02


3. Set the hostnames and configure mutual resolution in /etc/hosts

hostnamectl set-hostname --static k8s-master
hostnamectl set-hostname --static k8s-node01
hostnamectl set-hostname --static k8s-node02
vim /etc/hosts
192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02
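
A quick optional check, not part of the original notes: after editing /etc/hosts on each machine, confirm that all three names resolve.

```
# run on any node: each hostname should answer from the address in /etc/hosts
for h in k8s-master k8s-node01 k8s-node02; do
    ping -c 1 -W 1 "$h" > /dev/null && echo "$h resolves OK"
done
```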

Install dependencies

yum install -y vim wget net-tools git
yum install lrzsz  --nogpgcheck

Switch the firewall to iptables and start with empty rules

systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save   # (for RHEL/CentOS 7)

Disable the swap partition and SELinux (executed successfully)

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i  's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Tune kernel parameters for Kubernetes

cat > kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1  # not executed
net.bridge.bridge-nf-call-ip6tables=1 # not executed
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0 # not executed
vm.swappiness=0 # forbid swapping; swap is only used when the system hits OOM
vm.overcommit_memory=1 # do not check whether enough physical memory is available
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp  kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p  /etc/sysctl.d/kubernetes.conf

Adjust the system time zone (executed successfully)

#Set the system time zone to Asia/Shanghai
```
timedatectl set-timezone Asia/Shanghai
```

#Keep the hardware clock in UTC (write the current UTC time to it)
```
timedatectl set-local-rtc 0
```

#Restart services that depend on the system time
```
systemctl restart rsyslog
systemctl restart crond
```

Stop services that are not needed

systemctl stop postfix && systemctl disable postfix # not executed

Configure rsyslogd and systemd-journald (did not complete successfully)

mkdir /var/log/journal   # directory for persistent journal storage
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitBurst=1000
# Maximum disk usage: 10G
SystemMaxUse=10G
# Maximum size of a single journal file: 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald

Upgrade the kernel (not executed)

rpm -Uvh https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el8/x86_64/RPMS/elrepo-release-8.1-1.el8.elrepo.noarch.rpm
# After installation, check that the kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 entry; if it does not, install again.
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Make the new kernel the default at boot (use the menuentry name actually present in grub.cfg)
grub2-set-default "CentOS Linux (4.4.182-1.el7.elrepo.x86_64) 7 (Core)"


Prerequisites for enabling IPVS in kube-proxy (executed successfully)

Install the ipset package on all nodes; to make it easier to inspect IPVS rules, also install ipvsadm (optional):
yum install ipset -y
yum install ipvsadm -y

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install Docker (executed successfully)

# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the Docker repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: install containerd.io first (the version required by docker-ce is not in the CentOS 8 repos, so the el7 RPM is installed directly)
yum install -y containerd.io
dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
# Step 4: install Docker CE
sudo yum -y install docker-ce
# Step 5: start and enable the Docker service
sudo systemctl start docker && systemctl enable docker

# Note:
# The official repository only enables the stable packages by default; other channels can be enabled by editing the repo file. For example, the test channel is disabled by default and can be enabled as follows (the same applies to the other channels):
# vim /etc/yum.repos.d/docker-ce.repo
#   under [docker-ce-test], change enabled=0 to enabled=1
#
# Installing a specific Docker CE version:
# Step 1: list the available Docker CE versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: install the chosen version (VERSION is, e.g., 17.03.0.ce.1-1.el7.centos from the list above):
# sudo yum -y install docker-ce-[VERSION]
docker version

Create the /etc/docker directory

mkdir /etc/docker

cat  >/etc/docker/daemon.json << EOF
{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "log-driver": "json-file",
 "log-opts": {
  "max-size": "100m"
 }
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

Restart the Docker service

systemctl daemon-reload && systemctl restart docker && systemctl enable docker
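
After the restart it is worth confirming that Docker really picked up the systemd cgroup driver from daemon.json; this is exactly what the kubeadm pre-flight warning later checks for:

```
# should print "Cgroup Driver: systemd"
docker info 2>/dev/null | grep -i "cgroup driver"
```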

Install kubeadm (on the master and all worker nodes)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Note: because the upstream repository cannot be fully mirrored, the GPG check on the package index may fail; in that case install with: yum install -y --nogpgcheck kubelet kubeadm kubectl
systemctl enable kubelet.service
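
The command above installs whatever kubelet/kubeadm/kubectl version is newest in the repository. If you want to pin the packages to the v1.18.2 release used in the rest of these notes (assuming that version is still available in the mirror), yum accepts an explicit version:

```
# optional: pin to the version used later in this walkthrough
yum install -y --nogpgcheck kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
systemctl enable kubelet && systemctl start kubelet
```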

Initialize the master node:

List the image versions required by kubeadm:
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Pull the images:

Pull the images from k8s.gcr.io (requires unrestricted internet access, e.g. through a proxy):
docker pull k8s.gcr.io/kube-apiserver:v1.18.2
docker pull k8s.gcr.io/kube-controller-manager:v1.18.2
docker pull k8s.gcr.io/kube-scheduler:v1.18.2
docker pull k8s.gcr.io/kube-proxy:v1.18.2
docker pull k8s.gcr.io/pause:3.2
docker pull k8s.gcr.io/etcd:3.4.3-0
docker pull k8s.gcr.io/coredns:1.6.7
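
If k8s.gcr.io is not reachable, a common workaround is to pull the same images from a domestic mirror and retag them. The sketch below assumes the registry.aliyuncs.com/google_containers mirror carries the v1.18.2 images (verify before relying on it):

```
#!/bin/bash
# pull the kubeadm images from a mirror and retag them as k8s.gcr.io/<name>
MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.18.2 kube-controller-manager:v1.18.2 \
           kube-scheduler:v1.18.2 kube-proxy:v1.18.2 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
    docker pull "$MIRROR/$img"
    docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"
    docker rmi  "$MIRROR/$img"
done
```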

Save the images (so they can be copied to the worker nodes):

docker save k8s.gcr.io/kube-proxy -o kube-proxy.tar
docker save k8s.gcr.io/kube-apiserver -o kube-apiserver.tar
docker save k8s.gcr.io/kube-scheduler -o kube-scheduler.tar
docker save k8s.gcr.io/kube-controller-manager -o kube-controller-manager.tar
docker save k8s.gcr.io/pause -o pause.tar
docker save k8s.gcr.io/etcd -o etcd.tar
docker save k8s.gcr.io/coredns -o coredns.tar

docker save quay.io/coreos/flannel  -o flannel.tar
vim load-images.sh
#!/bin/bash
# Load every image tarball found in /root/kubeadm-basic.images (copy the .tar files saved above into that directory first)
ls /root/kubeadm-basic.images > /tmp/images_list.txt
cd /root/kubeadm-basic.images
for i in $( cat /tmp/images_list.txt )
do
  docker load -i $i
done
rm -rf /tmp/images_list.txt
chmod a+x load-images.sh
bash load-images.sh

Initialize the master node:

Generate the default init configuration file:
kubeadm config print init-defaults > kubeadm-config.yaml

Edit the configuration file:
vim kubeadm-config.yaml
localAPIEndpoint:
  advertiseAddress: 192.168.253.167   # change advertiseAddress to the master's address

kubernetesVersion: v1.18.2
networking:
  podSubnet: "10.244.0.0/16"          # add the pod subnet used by flannel

---
# append this extra document to switch kube-proxy to ipvs mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Run the initialization (Docker must be running):
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

Error 1:
Warning: Docker is not enabled at boot; just run the command suggested by the message.
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
systemctl enable docker.service

Error 2: the --experimental-upload-certs flag has been renamed; use --upload-certs instead.
# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Error: unknown flag: --experimental-upload-certs

Warning 3: Docker's cgroup driver is cgroupfs; the recommended driver is systemd.

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Add the following configuration:
#vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://wv1h618x.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
Restart Docker:
#systemctl daemon-reload && systemctl restart docker && systemctl enable docker


# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0502 20:39:04.447334   29833 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.253.167]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.253.167 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.253.167 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0502 20:39:18.493122   29833 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0502 20:39:18.502456   29833 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.032749 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
e04dd9c52332d23bab221c18d59016fb3789d7f39c8529681b7cc476707c1380
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.253.167:6443 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:289495fa530177884fc6606727728625102df14b8dd042586cb8de61051da0e8
   
The control plane started successfully. Note that the output above includes the follow-up commands to run, as well as the join command for the worker nodes.

Set up kubectl on the master and join the remaining worker nodes:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If some of the generated configuration gets into a bad state, you can start over with:
kubeadm reset

Inspect the generated certificates:
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# ls
apiserver.crt              apiserver-etcd-client.key  apiserver-kubelet-client.crt  ca.crt  etcd                front-proxy-ca.key      front-proxy-client.key  sa.pub
apiserver-etcd-client.crt  apiserver.key              apiserver-kubelet-client.key  ca.key  front-proxy-ca.crt  front-proxy-client.crt  sa.key

Check the nodes:
[root@k8s-master pki]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   6m33s   v1.18.2
Note: the master shows NotReady. Inspecting the pods reveals a network problem, i.e. flannel is not installed yet, so deploy the flannel network plugin next.

Deploy the flannel network:

Download the latest kube-flannel.yml manifest for the flannel network plugin:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml

[root@k8s-master flannel]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

[root@k8s-master flannel]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   36m   v1.18.2
[root@k8s-master flannel]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-4pzqn             1/1     Running   0          35m
coredns-66bff467f8-bw2b4             1/1     Running   0          35m
etcd-k8s-master                      1/1     Running   0          36m
kube-apiserver-k8s-master            1/1     Running   0          36m
kube-controller-manager-k8s-master   1/1     Running   0          36m
kube-flannel-ds-amd64-kf7j7          1/1     Running   0          18m
kube-proxy-g2hlg                     1/1     Running   0          35m
kube-scheduler-k8s-master            1/1     Running   0          36m

Inspect the pod's details:
kubectl describe pod kube-flannel-ds-amd64-k92bk -n kube-system

Join the other nodes:

The preparation steps on the worker nodes are the same (Docker, kubeadm, kubelet and the loaded images); then simply run the join command that kubeadm init generated:

kubeadm join 192.168.253.167:6443 --token abcdef.0123456789abcdef \
   --discovery-token-ca-cert-hash sha256:289495fa530177884fc6606727728625102df14b8dd042586cb8de61051da0e8
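
The bootstrap token printed by kubeadm init is only valid for 24 hours. If it has expired by the time a node joins, generate a fresh join command on the master:

```
# run on k8s-master: prints a new "kubeadm join ..." line with a fresh token
kubeadm token create --print-join-command
```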

# kubectl get pod -n kube-system -o wide
# kubectl get node
# kubectl get pod -n kube-system
# kubectl get pod -n kube-system -w
Initial error: the node does not have the flannel image yet
kube-flannel-ds-amd64-k92bk          0/1     Init:0/1                0          4m3s
kube-flannel-ds-amd64-k92bk          0/1     Init:ImagePullBackOff   0          5m15s
kube-flannel-ds-amd64-k92bk          0/1     Init:ErrImagePull       0          6m39s
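
The Init:ImagePullBackOff / Init:ErrImagePull states simply mean the node cannot pull quay.io/coreos/flannel. One way around this, assuming the image was already pulled on the master, is to export it there and load it on each node (a sketch; the file name is arbitrary):

```
# on k8s-master: export the flannel image and copy it to the nodes
docker save quay.io/coreos/flannel -o flannel.tar
scp flannel.tar root@k8s-node01:/root/
scp flannel.tar root@k8s-node02:/root/

# on each node: load the image so kubelet no longer needs to pull it
docker load -i /root/flannel.tar
```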

2. Harbor installation

Harbor installation and registry configuration: Harbor is an open-source image registry. Official site: https://goharbor.io/

Initial setup:
hostnamectl set-hostname --static k8s-habor
yum install -y vim wget net-tools git
yum install lrzsz --nogpgcheck
systemctl stop firewalld && systemctl disable firewalld
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Edit the Docker configuration:
vim /etc/docker/daemon.json

On the master:
cat  >/etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://y7guluho.mirror.aliyuncs.com"],
"insecure-registries": ["https://hub.51geeks.com"]
}
EOF  

On the nodes:
cat  >/etc/docker/daemon.json << EOF
{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "log-driver": "json-file",
 "log-opts": {
  "max-size": "100m"
 },
 "insecure-registries": ["https://hub.51geeks.com"]
}
EOF
The insecure-registries entry must be added on all three Kubernetes servers (and Docker restarted afterwards).

Prerequisites:

Environment (software, version, download location):
- OS: CentOS 8.1
- Docker: 19.03.8
- docker-compose: 1.25.5, https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)
- Harbor: 1.10.2, https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz

Install Docker

$ yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine            
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum-config-manager --enable docker-ce-edge
$ yum install -y docker-ce
$ systemctl start docker
$ systemctl enable docker

Install docker-compose

Installation guide: https://docs.docker.com/compose/install/

https://github.com/docker/compose/

sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version

Download and unpack Harbor

wget -c  https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz
tar zxvf harbor-offline-installer-v1.10.2.tgz
cd harbor

Configure harbor.yml

$ vim harbor.yml
hostname: hub.51geeks.com
http:
  port: 80
https:
  port: 443
  certificate: /data/cert/server.crt
  private_key: /data/cert/server.key
harbor_admin_password: Harbor12345   # password for the admin user on the web UI
database:
  password: root123
data_volume: /data

Create the certificate:

Create the private key:
openssl genrsa -des3 -out server.key 2048
Create the certificate signing request:
openssl req -new -key server.key -out server.csr
Back up the private key:
cp server.key server.key.org
Remove the passphrase from the key:
openssl rsa -in server.key.org -out server.key
Sign the certificate:
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
chmod a+x *
mkdir /data/cert
chmod -R 777 /data/cert
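
Since harbor.yml points at /data/cert/server.crt and /data/cert/server.key, the files need to end up in that directory. The same self-signed certificate can also be produced in one non-interactive step; a sketch, assuming the CN should match the registry hostname hub.51geeks.com:

```
# one-shot self-signed certificate for hub.51geeks.com, valid for one year
mkdir -p /data/cert && cd /data/cert
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt \
    -subj "/CN=hub.51geeks.com"
```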

Append name resolution entries on every machine (1 master, 2 nodes, 1 harbor host):
echo "192.168.253.170 hub.51geeks.com" >> /etc/hosts
192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02
192.168.253.170 hub.51geeks.com

Install Harbor

$ ./install.sh

[root@k8s-habor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...

Note: docker version: 19.03.8

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.25.5

[Step 2]: loading Harbor images ...
ad1dca7cdecb: Loading layer [==================================================>]   34.5MB/34.5MB
fe0efe3b32dc: Loading layer [==================================================>]  63.56MB/63.56MB
5504ea8a1c89: Loading layer [==================================================>]  58.39MB/58.39MB
e5fe51919fa7: Loading layer [==================================================>]  5.632kB/5.632kB
5591c247d2e6: Loading layer [==================================================>]  2.048kB/2.048kB
db6a70d4a66e: Loading layer [==================================================>]   2.56kB/2.56kB
a898589079d4: Loading layer [==================================================>]   2.56kB/2.56kB
a45af9651ff3: Loading layer [==================================================>]   2.56kB/2.56kB
be9c1b049bcc: Loading layer [==================================================>]  10.24kB/10.24kB
Loaded image: goharbor/harbor-db:v1.10.2
346fb2bd57a4: Loading layer [==================================================>]  8.435MB/8.435MB
2e3e5d2fc1dd: Loading layer [==================================================>]  6.239MB/6.239MB
ef4f6d3760d4: Loading layer [==================================================>]  16.04MB/16.04MB
c72e6e471644: Loading layer [==================================================>]  28.25MB/28.25MB
8ef2ab5918ad: Loading layer [==================================================>]  22.02kB/22.02kB
8c6f27a03a6c: Loading layer [==================================================>]  50.52MB/50.52MB
Loaded image: goharbor/notary-server-photon:v1.10.2
6d0fd267be6a: Loading layer [==================================================>]  115.2MB/115.2MB
cc6a0cb3722a: Loading layer [==================================================>]  12.14MB/12.14MB
2df571d6ea95: Loading layer [==================================================>]  3.072kB/3.072kB
9971e5655191: Loading layer [==================================================>]  49.15kB/49.15kB
10c405f9f0e2: Loading layer [==================================================>]  3.584kB/3.584kB
6861c00be6c7: Loading layer [==================================================>]  13.02MB/13.02MB
Loaded image: goharbor/clair-photon:v1.10.2
1826656409e9: Loading layer [==================================================>]  10.28MB/10.28MB
8cdf4e864764: Loading layer [==================================================>]  7.697MB/7.697MB
15824ca72188: Loading layer [==================================================>]  223.2kB/223.2kB
16130654d1d1: Loading layer [==================================================>]  195.1kB/195.1kB
f3ed25db3f03: Loading layer [==================================================>]  15.36kB/15.36kB
3580b56fee01: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: goharbor/harbor-portal:v1.10.2
a6d6e26561c2: Loading layer [==================================================>]  12.21MB/12.21MB
86ec36cec073: Loading layer [==================================================>]   42.5MB/42.5MB
a834e5c5df07: Loading layer [==================================================>]  5.632kB/5.632kB
d74d9eba8546: Loading layer [==================================================>]  40.45kB/40.45kB
6d5eed6f3419: Loading layer [==================================================>]   42.5MB/42.5MB
484994b6bc3f: Loading layer [==================================================>]   2.56kB/2.56kB
Loaded image: goharbor/harbor-core:v1.10.2
8b67d91d471e: Loading layer [==================================================>]  12.21MB/12.21MB
2584449c95d0: Loading layer [==================================================>]  49.37MB/49.37MB
Loaded image: goharbor/harbor-jobservice:v1.10.2
b23fa00ea843: Loading layer [==================================================>]  8.441MB/8.441MB
b2c0f9d70915: Loading layer [==================================================>]  3.584kB/3.584kB
b503c86a04d4: Loading layer [==================================================>]  21.76MB/21.76MB
b360fa5431c1: Loading layer [==================================================>]  3.072kB/3.072kB
eb575ebe03ac: Loading layer [==================================================>]  8.662MB/8.662MB
80fb2b0f0315: Loading layer [==================================================>]  31.24MB/31.24MB
Loaded image: goharbor/harbor-registryctl:v1.10.2
1358663a68ec: Loading layer [==================================================>]  82.23MB/82.23MB
711a7d4ecee3: Loading layer [==================================================>]  3.072kB/3.072kB
5bb647da1c5e: Loading layer [==================================================>]   59.9kB/59.9kB
57ea330779ba: Loading layer [==================================================>]  61.95kB/61.95kB
Loaded image: goharbor/redis-photon:v1.10.2
dd582a00d0e4: Loading layer [==================================================>]  10.28MB/10.28MB
Loaded image: goharbor/nginx-photon:v1.10.2
f4ce9d4c5979: Loading layer [==================================================>]   8.44MB/8.44MB
4df17639d73c: Loading layer [==================================================>]   42.3MB/42.3MB
06a92309fcf7: Loading layer [==================================================>]  3.072kB/3.072kB
6961179c06b3: Loading layer [==================================================>]  3.584kB/3.584kB
24058aa4795e: Loading layer [==================================================>]  43.12MB/43.12MB
Loaded image: goharbor/chartmuseum-photon:v1.10.2
28bdd74b7611: Loading layer [==================================================>]  49.82MB/49.82MB
312844c67ef0: Loading layer [==================================================>]  3.584kB/3.584kB
97ff7939d09c: Loading layer [==================================================>]  3.072kB/3.072kB
fe1ca6ca62b1: Loading layer [==================================================>]   2.56kB/2.56kB
807185e8884e: Loading layer [==================================================>]  3.072kB/3.072kB
7014ac08f821: Loading layer [==================================================>]  3.584kB/3.584kB
b9a09e8231aa: Loading layer [==================================================>]  12.29kB/12.29kB
Loaded image: goharbor/harbor-log:v1.10.2
5fc142634b19: Loading layer [==================================================>]  8.441MB/8.441MB
6d25b55ca036: Loading layer [==================================================>]  3.584kB/3.584kB
470e0bc7c886: Loading layer [==================================================>]  3.072kB/3.072kB
6deec48d670d: Loading layer [==================================================>]  21.76MB/21.76MB
4b0f50c1f9a2: Loading layer [==================================================>]  22.59MB/22.59MB
Loaded image: goharbor/registry-photon:v1.10.2
7c0c9681bb5c: Loading layer [==================================================>]  14.61MB/14.61MB
f8f5185485f0: Loading layer [==================================================>]  28.25MB/28.25MB
7aa4e440ddd4: Loading layer [==================================================>]  22.02kB/22.02kB
1bf5d3e32ab4: Loading layer [==================================================>]  49.09MB/49.09MB
Loaded image: goharbor/notary-signer-photon:v1.10.2
e5f331e45d1c: Loading layer [==================================================>]  337.3MB/337.3MB
e0d97714dc5d: Loading layer [==================================================>]  135.2kB/135.2kB
Loaded image: goharbor/harbor-migrator:v1.10.2
6b5627387d23: Loading layer [==================================================>]  77.91MB/77.91MB
6d898f9318cc: Loading layer [==================================================>]  48.28MB/48.28MB
3e9ed699ea3e: Loading layer [==================================================>]   2.56kB/2.56kB
3bc549d11dcc: Loading layer [==================================================>]  1.536kB/1.536kB
74fd1d3f8fa2: Loading layer [==================================================>]  157.2kB/157.2kB
547fd9c0c9c5: Loading layer [==================================================>]   2.81MB/2.81MB
Loaded image: goharbor/prepare:v1.10.2
9d7087c5277a: Loading layer [==================================================>]  8.441MB/8.441MB
c0f8862cab3f: Loading layer [==================================================>]   9.71MB/9.71MB
a9e3fbb9bcfc: Loading layer [==================================================>]   9.71MB/9.71MB
Loaded image: goharbor/clair-adapter-photon:v1.10.2
[Step 3]: preparing environment ...
[Step 4]: preparing harbor configs ...
prepare base dir is set to /usr/local/src/harbor
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /secret/keys/secretkey
Generated certificate, key file: /secret/core/private_key.pem, cert file: /secret/registry/root.crt
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... done
Creating registry      ... done
Creating harbor-db     ... done
Creating registryctl   ... done
Creating redis         ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done
✔ ----Harbor has been installed and started successfully.----

Once startup completes, the nginx, database and other containers have been created automatically:

$ docker-compose ps         
[root@k8s-habor harbor]# docker-compose ps    
     Name                     Command                  State                          Ports                  
---------------------------------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up (healthy)                                              
harbor-db           /docker-entrypoint.sh            Up (healthy)   5432/tcp                                  
harbor-jobservice   /harbor/harbor_jobservice  ...   Up (healthy)                                              
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp                  
harbor-portal       nginx -g daemon off;             Up (healthy)   8080/tcp                                  
nginx               nginx -g daemon off;             Up (healthy)   0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp
redis               redis-server /etc/redis.conf     Up (healthy)   6379/tcp                                  
registry            /home/harbor/entrypoint.sh       Up (healthy)   5000/tcp                                  
registryctl         /home/harbor/start.sh            Up (healthy)            

Login page

(screenshot: Harbor web login page)

Username: admin, password: Harbor12345

Log in to Harbor from k8s-node01:

[root@k8s-node01 ~]# docker login https://hub.51geeks.com 
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

[root@k8s-master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
nginx                                latest              602e111c06b6        9 days ago          127MB
httpd                                latest              b2c2ab6dcf2e        10 days ago         166MB

Delete an image:
docker rmi -f hub.51geeks.com/library/nginx:latest

Docker commands for pushing an image:
docker tag nginx hub.51geeks.com/library/nginx
docker push hub.51geeks.com/library/nginx

Tag an image for this project:
docker tag SOURCE_IMAGE[:TAG] hub.51geeks.com/library/IMAGE[:TAG]

Push the image to the current project:
docker push hub.51geeks.com/library/IMAGE[:TAG]

Using the Kubernetes cluster:
kubectl run --help
kubectl apply -f nginx-deployment.yaml   # a sample nginx-deployment.yaml is sketched below
View the pods and related resources:
kubectl get pods
kubectl get deployment
kubectl get rs
kubectl get nodes   # check the nodes in the cluster
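
The nginx-deployment.yaml referenced above is not included in the original notes. A minimal sketch, assuming the nginx image has already been pushed to hub.51geeks.com/library as shown earlier, could look like this:

```
cat > nginx-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hub.51geeks.com/library/nginx:latest
        ports:
        - containerPort: 80
EOF
kubectl apply -f nginx-deployment.yaml
```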

Port mapping exposes a service to the outside world. In Kubernetes, Pods have their own lifecycle; when a node fails, the ReplicationController or ReplicaSet recreates its Pods on other nodes to preserve the desired state.
#kubectl expose deployment nginx-deployment --port=80 --type=LoadBalancer
Check the service status (i.e. which port the service was mapped to): kubectl get services
[root@k8s-master ~]# kubectl get service
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        19h
nginx-deployment   LoadBalancer   10.102.49.213   <pending>     80:32682/TCP   89m

After the deployment is created, the containers are running, but by default they can only reach each other inside the cluster. To expose them externally there are several options:

ClusterIP: the default; the service is reachable through a cluster IP, and only from inside the cluster.
NodePort: uses NAT to expose the service on a fixed port of every node; external clients access it via <NodeIP>:<NodePort> (see the example after this list).
LoadBalancer: relies on an external load-balancing facility to reach the service.
ExternalName: a feature provided by kube-dns since version 1.7.
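
Because this bare-metal cluster has no external load balancer, the LoadBalancer service above stays in <pending>. A NodePort service is the simpler way to reach it from outside; a small example (the service name nginx-nodeport is just an illustration):

```
# expose the same deployment on a node port instead of a (pending) LoadBalancer
kubectl expose deployment nginx-deployment --port=80 --type=NodePort --name=nginx-nodeport
kubectl get service nginx-nodeport
# the PORT(S) column shows something like 80:3xxxx/TCP; the service is then
# reachable at http://<any-node-ip>:<that-node-port>
```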

[root@k8s-master ~]# kubectl get  pod  -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-7789b77975-fcbmg   1/1     Running   0          2m31s   10.244.2.6   k8s-node02   <none>           <none>
nginx-deployment-7789b77975-m85sx   1/1     Running   0          2m31s   10.244.2.7   k8s-node02   <none>           <none>

[root@k8s-node02 ~]#  docker ps -a |grep nginx

Delete a pod:
kubectl delete pod nginx-deployment-7789b77975-fcbmg
[root@k8s-master ~]# kubectl get  svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        17h
nginx-deployment   LoadBalancer   10.102.49.213   <pending>     80:32682/TCP   10m

ipvsadm  -Ln
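
ipvsadm -Ln lists the IPVS virtual servers that kube-proxy created for the service IPs. To confirm kube-proxy is really running in ipvs mode (i.e. that the kubeadm-config change took effect), the kube-proxy ConfigMap can be checked:

```
# "mode: ipvs" should appear if the KubeProxyConfiguration was applied
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"
ipvsadm -Ln | head -n 20
```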


Using Kubernetes:

In Kubernetes 1.18, pods and deployments are created from YAML files.

vim tomcat.yaml
apiVersion: v1
kind: Pod
metadata:                            # metadata
  name: tomcat-c                     # the name shown by kubectl get pods and inside the container
  labels:                            # labels, usable as query filters: kubectl get pods -l
    app: tomcat
    node: devops-103
spec:                                # specification
  containers:                        # containers
  - name: tomcat                     # container name
    image: docker.io/tomcat          # image to use
    ports:
    - containerPort: 8080
    env:                             # environment variables; log in to the container to check that GREETING is "hello from the environment"
    - name: GREETING
      value: "hello from the environment"
Create the pod:
kubectl create -f tomcat.yaml
kubectl get pods
kubectl get nodes
kubectl scale deployments/tomcat --replicas=3
kubectl get deployments
kubectl get pods
kubectl describe pod tomcat-858b8c476d-cfrtt
kubectl scale deployments/tomcat --replicas=2
kubectl describe deployment
kubectl get pods -l app=tomcat
kubectl get services -l app=tomcat
kubectl label --overwrite  pod tomcat-858b8c476d-vnm98 node=devops-102
# --overwrite is used here because the label was set incorrectly earlier
kubectl describe pods tomcat-858b8c476d-vnm98
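
Note that tomcat.yaml above defines a bare Pod, while kubectl scale deployments/tomcat only works against a Deployment. If the scale/describe commands above are the goal, a Deployment sketch (an assumption, not part of the original notes) would look like this:

```
cat > tomcat-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: docker.io/tomcat
        ports:
        - containerPort: 8080
EOF
kubectl apply -f tomcat-deployment.yaml
```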


[root@k8s-master ~]# kubectl describe pods nginx
Name:         nginx-deployment-7789b77975-m85sx
Namespace:    default
Priority:     0
Node:         k8s-node02/192.168.253.169
Start Time:   Sun, 03 May 2020 14:22:25 +0800
Labels:       app=nginx            # kubectl automatically added this label when the deployment was created
             pod-template-hash=7789b77975
Annotations:  <none>
Status:       Running
IP:           10.244.2.7
IPs:
 IP:           10.244.2.7
Controlled By:  ReplicaSet/nginx-deployment-7789b77975
Containers:
 nginx:
   Container ID:   docker://be642684912e5662a8bdd3b10e5e1be28936c045ea1df07421dac106b07cbef1
   Image:          hub.51geeks.com/library/nginx:latest
   Image ID:       docker-pullable://hub.51geeks.com/library/nginx@sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422
   Port:           80/TCP
   Host Port:      0/TCP
   State:          Running
     Started:      Sun, 03 May 2020 14:22:40 +0800
   Ready:          True
   Restart Count:  0
   Environment:    <none>
   Mounts:
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9jpf (ro)
Conditions:
 Type              Status
 Initialized       True
 Ready             True
 ContainersReady   True
 PodScheduled      True
Volumes:
 default-token-q9jpf:
   Type:        Secret (a volume populated by a Secret)
   SecretName:
