
K8S Summary

The cluster is built with kubeadm.

iTerm2: send commands to multiple windows at once

K8S Xmind mind map

Cluster environment preparation

Environment planning

Built on Tencent Cloud ECS machines

IP            System                                  Role    Specs
172.16.16.9   CentOS 7.6 64-bit (Tencent Cloud ECS)   Master  2 cores, 4 GB
192.168.0.4   CentOS 7.6 64-bit (Tencent Cloud ECS)   Node    2 cores, 4 GB
10.168.0.10   CentOS 7.6 64-bit (Tencent Cloud ECS)   Node    2 cores, 8 GB

Configure the yum repositories on all nodes

curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Hostname resolution

172.16.16.9   k8s-master
192.168.0.4 k8s-node1
10.168.0.10 k8s-node2
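
These entries belong in /etc/hosts; a minimal way to append them on each node (a sketch):

cat >> /etc/hosts <<'EOF'
172.16.16.9 k8s-master
192.168.0.4 k8s-node1
10.168.0.10 k8s-node2
EOF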

Set the hostnames

Run the corresponding one of the following commands on each of the three ECS machines:

hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

Then reboot:

reboot

Time synchronization

systemctl start chronyd
systemctl enable chronyd
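
Optionally verify that the time sources are reachable:

chronyc sources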

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

The main purpose of SELinux is to minimize the resources that service processes can access (the principle of least privilege).

setenforce 0

If this prints setenforce: SELinux is disabled, it is already off; otherwise, disable it permanently first:

sed -ri '/^SELINUX=/c SELINUX=disabled' /etc/sysconfig/selinux
sed -ri '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config

Disable the swap partition on every node

vim /etc/fstab

Comment out every line that mentions swap; skip this if there is none:

#/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@k8s-master ~]# free -h
total used free shared buff/cache available
Mem: 3.7G 1.7G 169M 1.2M 1.8G 1.7G
Swap: 0B 0B 0B
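
To disable swap immediately rather than waiting for the reboot later in this guide, it can also be turned off at runtime (a minimal sketch; the sed form comments out the fstab swap lines in place):

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab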

Adjust Linux kernel parameters

Enable bridge filtering and IP forwarding

vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Load the configuration (a bare sysctl -p reads only /etc/sysctl.conf, so point it at the new file):

sysctl -p /etc/sysctl.d/kubernetes.conf

Load the bridge netfilter module:

modprobe br_netfilter

Check that it loaded:

[root@k8s-master ~]# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter

Configure ipvs support

In kubernetes, a service has two proxy modes: one based on iptables and one based on ipvs.
Since ipvs performs better than iptables, we use ipvs; doing so requires loading the ipvs kernel modules manually.

Install ipset and ipvsadm:

yum -y install ipset ipvsadm

Create a script that loads the modules:

vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

Make it executable:

chmod a+x /etc/sysconfig/modules/ipvs.modules

Run the script:

sh /etc/sysconfig/modules/ipvs.modules

Check that the modules loaded:

[root@k8s-master ~]# lsmod | grep -E 'ip_vs|nf_conntrack'
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_netlink 36354 0
nfnetlink 14519 2 nf_conntrack_netlink
nf_conntrack_ipv4 15053 2
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 139224 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c 12644 3 ip_vs,nf_nat,nf_conntrack

Reboot the servers

reboot

Deploying the kubernetes cluster

Install docker on all nodes

Prepare the repository:

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Check the available docker versions:

docker version
# If docker is already installed (docker-ce is the usual default), there is no need to reinstall
# If it is not installed, the latest version can be installed directly
yum list docker-ce --showduplicates

Install the docker-ce version that matches your environment (pass --setopt=obsoletes=0, otherwise yum automatically installs a newer version):

yum -y install --setopt=obsoletes=0 docker-ce-20.10.7-3.el7

Create the docker configuration file. By default docker uses cgroupfs as its cgroup driver, while k8s recommends systemd instead; also configure a registry mirror:

mkdir /etc/docker
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://zggyaen3.mirror.aliyuncs.com"]
}
systemctl restart docker
systemctl enable docker
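
Confirm that the cgroup driver switch took effect:

docker info | grep -i cgroup
# expected to include: Cgroup Driver: systemd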

Install the kubernetes components on all nodes

Prepare the k8s repository:

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
gpgcheck=0
enabled=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Query the available kubeadm versions and pick one that matches your Docker version:

yum list kubeadm --showduplicates

Install kubeadm, kubelet and kubectl:

yum -y install --setopt=obsoletes=0 kubeadm-1.20.8-0 kubelet-1.20.8-0 kubectl-1.20.8-0 --downloaddir=/root/k8s

Configure the kubelet cgroup driver and ipvs forwarding:

vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
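
Note that kube-proxy runs as a DaemonSet inside the cluster and does not read /etc/sysconfig/kubelet, so the KUBE_PROXY_MODE line above is informational at best. A common way to actually switch kube-proxy to ipvs (a sketch; run it after the cluster has been initialized below) is to edit its ConfigMap and recreate its pods:

kubectl edit cm kube-proxy -n kube-system
# in the editor, set: mode: "ipvs"
kubectl delete pod -n kube-system -l k8s-app=kube-proxy
# the DaemonSet recreates the pods with the new mode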

Enable kubelet at boot:

# Do not start kubelet right after installing it; it starts automatically during
# cluster initialization, and starting it manually now would fail
# Enable the service at boot
systemctl enable kubelet

Prepare the cluster component images on all nodes

Before installing the kubernetes cluster, the images it needs must be prepared in advance.

List the required images

kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.8
k8s.gcr.io/kube-controller-manager:v1.20.8
k8s.gcr.io/kube-scheduler:v1.20.8
k8s.gcr.io/kube-proxy:v1.20.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Download the images

images=(
kube-apiserver:v1.20.8
kube-controller-manager:v1.20.8
kube-scheduler:v1.20.8
kube-proxy:v1.20.8
pause:3.2
etcd:3.4.13-0
coredns:1.7.0
)

for imageName in ${images[@]}
do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
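
The retagged images should now be visible locally:

docker images | grep k8s.gcr.io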

Initialize the master node

After initialization succeeds, the kubelet starts automatically, and kubectl get node shows the master already in the cluster.

kubeadm init --kubernetes-version=v1.20.8 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=172.16.16.9
# Take note of the token printed in the output
# If it has expired, regenerate one first:
# kubeadm token create
# List tokens:
# kubeadm token list | awk -F" " '{print $1}' | tail -n 1
# Get the hash of the CA public key:
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# kubeadm join 172.16.16.9:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
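
Alternatively, kubeadm can print a complete, ready-to-run join command:

kubeadm token create --print-join-command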

Create the files kubectl needs:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u).$(id -g) $HOME/.kube/config

Check that the master has joined the cluster:

[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 56m v1.20.8

Initialize the node machines

Join the nodes to the master

kubeadm join 172.16.16.9:6443 --token 1juqzg.d242y66inoo9t98u --discovery-token-ca-cert-hash sha256:03c94995123d1483e6739419acf34ee9e0756f395b2b7c0d4bd1e168ec0d6f78

View the cluster nodes from the master

[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 19h v1.20.8
k8s-node1 NotReady <none> 4m39s v1.20.8
k8s-node2 NotReady <none> 8s v1.20.8

Configure the flannel network for the cluster

Create the directory:

mkdir -p /root/k8s/yaml

Upload kube-flannel.yaml:

# It can be copied directly from https://github.com/flannel-io/flannel/edit/master/Documentation/kube-flannel.yml
sshpass -p XXX scp -r ~/Downloads/kube-flannel.yaml root@106.55.145.49:~/k8s/yaml

Replace the quay.io registry in the file with quay-mirror.qiniu.com:

sed -ri 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yaml

Start the flannel network:

[root@k8s-master yaml]# kubectl apply -f kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Pulling the flannel image sometimes fails; it can be fetched from here instead:

# Link: https://pan.baidu.com/s/1AqubJhTMIshf66ofOCkubQ
# Extraction code: ypbh
docker load -i flanneld-v0.12.0-amd64.docker
docker tag quay.io/coreos/flannel:v0.12.0-amd64 quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64

Check the component pod status:

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-6nbjl 0/1 Pending 0 19h
coredns-74ff55c5b-6sd8b 0/1 Pending 0 19h
etcd-k8s-master 1/1 Running 1 19h
kube-apiserver-k8s-master 1/1 Running 1 19h
kube-controller-manager-k8s-master 1/1 Running 2 19h
kube-flannel-ds-amd64-997w8 0/1 Init:ImagePullBackOff 0 26m
kube-flannel-ds-amd64-gv45v 0/1 Init:ImagePullBackOff 0 26m
kube-flannel-ds-amd64-hg5s8 0/1 Init:ImagePullBackOff 0 26m
kube-proxy-hlvwm 1/1 Running 1 19h
kube-proxy-rvfms 1/1 Running 0 45m
kube-proxy-tnc7c 1/1 Running 0 41m
kube-scheduler-k8s-master 1/1 Running 2 19h

Deploy an nginx container to verify the cluster works

Create an nginx deployment resource:

[root@k8s-master yaml]# kubectl create deployment kiwi-ui --image=nginx
deployment.apps/kiwi-ui created

Expose a port:

[root@k8s-master yaml]# kubectl expose deployment kiwi-ui --port=80 --type=NodePort
service/kiwi-ui exposed

View the pod:

[root@k8s-master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kiwi-ui-556656cb8c-d28hm 1/1 Running 0 3m39s 10.244.1.4 k8s-node1 <none> <none>

View all resources:

[root@k8s-master yaml]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/kiwi-ui-556656cb8c-d28hm 1/1 Running 0 4m9s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kiwi-ui NodePort 10.103.116.177 <none> 80:30685/TCP 5m23s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kiwi-ui 1/1 1 1 8m39s

NAME DESIRED CURRENT READY AGE
replicaset.apps/kiwi-ui-556656cb8c 1 1 1 8m39s

Access port 30685 on the master node:
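
For example (the NodePort value comes from the service output above; any node IP works for a NodePort):

curl http://172.16.16.9:30685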

Enable tab completion for k8s commands

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Deploy rancher to manage k8s

# Pull the image
docker pull rancher/rancher:latest
# Prepare the mount points
mkdir -p /docker_volume/rancher_home/rancher
mkdir -p /docker_volume/rancher_home/auditlog
# Run the rancher container
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 \
-v /docker_volume/rancher_home/rancher:/var/lib/rancher \
-v /docker_volume/rancher_home/auditlog:/var/log/auditlog \
--name rancher rancher/rancher:latest

Resource management

In kubernetes, everything is abstracted as a resource, and users manage k8s by operating on those resources.
Kubernetes is essentially a cluster system in which users deploy services; "deploying" means running containers in the k8s cluster and running your programs inside them.
The smallest unit kubernetes manages is the pod, not the container, so containers can only be placed inside pods; and k8s generally does not manage pods directly either, but does so through pod controllers.
For a pod to serve traffic, there must be a way to access the services inside it; kubernetes provides the service resource for that.
When pod data needs to be persisted, k8s also offers many kinds of storage systems.

Resource operation workflow (the focus of our study is how to operate on the various resources in the cluster):

YAML

Resource configuration files in kubernetes are written in YAML.

Syntax

  • Case sensitive
  • Indentation expresses hierarchy (tabs are not allowed in older versions; newer versions accept them)
  • A colon must be followed by at least one space
  • # marks a comment
  • Multiple YAML documents in one file must be separated by a line of --- (see the example below)
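
A minimal illustration of these rules (hypothetical resource names):

# Two YAML documents in one file, separated by ---
apiVersion: v1
kind: Namespace
metadata:
  name: demo        # two-space indentation; a space after every colon
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: demo
spec:
  containers:
  - name: web       # keys are case sensitive: "name" and "Name" differ
    image: nginx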

YAML language tutorial

yaml-to-json conversion tool

Resource management approaches

  • Imperative command management: operate on resources directly with commands
    kubectl run nginx-pod --image=nginx --port=80
  • Imperative object configuration: operate on resources via commands that point at configuration files
    kubectl create -f nginx-pod.yaml
  • Declarative object configuration: operate on resources via the apply command plus configuration files; effectively an upsert (update the Pod if it exists, create it otherwise)
    # This mode has only the single command apply
    kubectl apply -f nginx-pod.yaml

Command Demo:

Create or update resources: kubectl apply -f xxx.yaml / kubectl create -f xxx.yaml
Delete resources: kubectl delete -f xxx.yaml
Inspect resources: kubectl get|describe <resource name>

Imperative command management

Create a namespace called kiwi:

[root@k8s-master ~]# kubectl create ns kiwi
namespace/kiwi created

List all namespaces:

[root@k8s-master ~]# kubectl get ns
NAME STATUS AGE
default Active 24h
kiwi Active 71s
kube-node-lease Active 24h
kube-public Active 24h
kube-system Active 24h

Run an nginx pod:

[root@k8s-master ~]# kubectl run pod --image=nginx -n kiwi
pod/pod created
[root@k8s-master ~]# kubectl get pod -n kiwi
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 21s

Delete a pod:

[root@k8s-master ~]# kubectl delete pod pod -n kiwi
pod "pod" deleted

Delete a namespace:

[root@k8s-master ~]# kubectl delete ns kiwi
namespace "kiwi" deleted

Declarative object configuration

Create kiwi-ui.yaml in your own kiwi project:

apiVersion: v1
kind: Namespace
metadata:
  name: kiwi

---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: kiwi
spec:
  containers:
  - name: kiwi-ui
    image: nginx

Create it with kubectl create:

[root@k8s-master yaml]# kubectl create -f kiwi-ui.yaml
namespace/kiwi created
pod/nginx-pod created

Update the image version in the yaml file, changing nginx to nginx:1.15:

[root@k8s-master yaml]# kubectl apply -f kiwi-ui.yaml
Warning: resource namespaces/kiwi is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/kiwi configured
Warning: resource pods/nginx-pod is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/nginx-pod configured

Watch the update happen:

[root@k8s-master yaml]# kubectl describe pod nginx-pod -n kiwi
Name: nginx-pod
Namespace: kiwi
Priority: 0
Node: k8s-node1/192.168.0.4
Start Time: Sun, 20 Jun 2021 16:51:32 +0800
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.8
IPs:
IP: 10.244.1.8
Containers:
kiwi-ui:
Container ID: docker://be02457c14715ef891c6d6468b7525f735a97c77a23891f6be8ef3ca4b2a5eed
Image: nginx:1.15
Image ID: docker-pullable://nginx@sha256:6d75c99af15565a301e48297fa2d121e15d80ad526f8369c526324f0f7ccb750
Port: <none>
Host Port: <none>
State: Running
Started: Sun, 20 Jun 2021 16:51:51 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jthzv (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-jthzv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jthzv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 101s default-scheduler Successfully assigned kiwi/nginx-pod to k8s-node1
Normal Pulling 100s kubelet Pulling image "nginx"
Normal Pulled 82s kubelet Successfully pulled image "nginx" in 18.06408301s
Normal Created 82s kubelet Created container kiwi-ui
Normal Started 82s kubelet Started container kiwi-ui
Normal Killing 23s kubelet Container kiwi-ui definition changed, will be restarted
Normal Pulling 23s kubelet Pulling image "nginx:1.15"

kubectl commands

kubectl overview

Kubernetes kubectl command table

Generally it is not recommended to enable kubectl on the node machines as well.

YAML for the various KiwiDict resources

Create namespaces to partition resources

apiVersion: v1
kind: Namespace
metadata:
  name: kiwi-be-infrastructure

---

apiVersion: v1
kind: Namespace
metadata:
  name: kiwi-be-business

---

apiVersion: v1
kind: Namespace
metadata:
  name: kiwi-fe

Install nginx

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: kiwi-fe
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/local/nginx/html/"
          name: nginx-vol
      volumes:
      - name: nginx-vol
        hostPath:
          path: ~/docker/ui/dist/  # note: hostPath.path must be an absolute path; ~ is not expanded

Install rabbitmq

kubectl run rabbitmq --image=rabbitmq:management --expose --port=15672 --restart=Never --dry-run=client -o yaml > rabbitmq.yml
kubectl create -f rabbitmq.yml
kubectl expose po rabbitmq --port 15672

Install single-node ES

Reference

Create k8s-es.yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: elasticsearch
  namespace: kiwi-be-infrastructure
spec:
  type: NodePort
  ports:
  - name: db
    nodePort: 30921
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    nodePort: 30922
    port: 9300
    protocol: TCP
    targetPort: 9300
  selector:
    run: elasticsearch
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: elasticsearch
  name: elasticsearch
  namespace: kiwi-be-infrastructure
spec:
  replicas: 1
  selector:
    matchLabels:
      run: elasticsearch
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: elasticsearch
    spec:
      containers:
      - env:
        - name: discovery.type
          value: single-node
        image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
        name: elasticsearch
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
status: {}

Test that ES is up:
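
For example, against the NodePort defined above (any node IP works):

curl http://172.16.16.9:30921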

Migrate from docker-compose to K8S with Kompose

Kompose installation and usage

Convert the original docker-compose configuration files:

kompose --file docker-compose-base.yml convert
kompose --file docker-compose-service.yml convert
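
The conversion writes the generated deployment/service yaml files into the current directory; they can then be applied as usual:

kubectl apply -f .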

Or skip the conversion and deploy directly:

kompose --file docker-compose-base.yml up
kompose --file docker-compose-service.yml up

AWS Cloud Training

C2

Reasons the aws_setup.bat script errors out, or never reaches step 9:

  • When applying for permissions in Service Now, remember to pick the Account whose Staff ID is not suffixed with -AWS
  • The only Python downloadable from Service Now is 3.9, which seems to error; raise a ticket and have someone remotely swap in Python 3.6
  • Make sure %JAVA_HOME% is configured correctly
  • Make sure Python is on the Path environment variable (both its parent directory and the Scripts directory)
  • Make sure the AWS account permissions have come through and the console is reachable
  • Make sure XXXXXX_password.bat was auto-generated under the .aws directory; if not, delete the whole .aws directory and start over
  • Make sure the bat script is executed from the Temp directory

AWS Cloud Training Spring Boot project

In pom.xml, the spring-boot-starter-parent version must be changed to 2.1.3.RELEASE, otherwise the build errors.

Common Spring Boot annotations

Spring Boot Xmind notes