Deployment methods
ks pluggable components
ks-installer -> set servicemesh to true
Reading the ks-installer source
ks-installer is essentially a shell-operator
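A sketch of flipping that switch from the CLI, assuming the standard ks-installer ClusterConfiguration in kubesphere-system:
kubectl -n kubesphere-system patch clusterconfiguration ks-installer --type merge -p '{"spec":{"servicemesh":{"enabled":true}}}'
# follow the shell-operator's reconcile logs until the servicemesh component comes up
kubectl -n kubesphere-system logs -f deploy/ks-installer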
Knative access problem after the upgrade
Cause: when infermng created a VirtualService through the Knative client, it failed with a not-found error: Internal error occurred: failed calling webhook "validation.istio.io": Post "https://istiod.istio-system.svc:443/validate?timeout=30s": service "istiod" not found. 奕龙 added an istiod Service as a stopgap.
Fix:
- Inject the already-installed Istio at a pinned revision, e.g. 1-9-8. There are several ways to inject; here we add the istio.io/rev=1-9-8 label to the namespace:
k label namespace infer-system istio-injection- istio.io/rev=1-9-8
- Verification then showed MySQL connection refused errors. Based on a similar gateway error 又彬 hit earlier, we suspected the istio-proxy startup-order problem.
- Istio 1.7+ can fix this with a pod annotation (a Deployment patch is sketched after this list):
proxy.istio.io/config: |
  holdApplicationUntilProxyStarts: true
- Change the Service name in the ValidatingWebhookConfiguration. This is a known bug; see the upstream issue.
(base) [deepwisdom@private-xm-dev-app1 ~]$ sudo kubectl get ValidatingWebhookConfiguration istiod-istio-system -o yaml
webhooks:
- admissionReviewVersions:
  - v1beta1
  ...
  service:
    name: istiod-1-9-8   # point this at the istiod of the matching revision
    namespace: istio-system
    path: /validate
    port: 443
  failurePolicy: Ignore
  matchPolicy: Exact
  name: validation.istio.io
  namespaceSelector: {}
  objectSelector: {}
  ...
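As a concrete example of the holdApplicationUntilProxyStarts annotation from the list above, a merge-patch sketch against a hypothetical Deployment (the infermng name and namespace are assumptions):
kubectl -n infer-system patch deployment infermng --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"proxy.istio.io/config":"holdApplicationUntilProxyStarts: true\n"}}}}}'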
istioctl
Version: istio-1.9.5
Control plane installation
# install
<!-- ./istioctl install --set hub=istio --set values.global.proxy.autoInject=disabled --set meshConfig.defaultConfig.tracing.zipkin.address="jaeger-collector.istio-system.svc:9411" --set values.sidecarInjectorWebhook.enableNamespacesByDefault=true --set values.global.imagePullPolicy=IfNotPresent --set revision=1-9-5  Note: this approach was problematic; injection via the namespace label did not work -->
./istioctl install --set revision=1-9-5
# list the coexisting istiod versions
[root@master bin]# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-668bbd55b7-m7t7l 1/1 Running 0 80s # the gateway was also installed at 1-9-5; the others are still the old versions
istiod-1-11-2-5466d7cfbf-vx9mw 1/1 Running 1 5h20m
istiod-1-6-10-665f5fff9f-nn4mg 1/1 Running 1 5h13m
istiod-1-9-5-7994494c44-76jzg 1/1 Running 0 103s
# sidecar injection configuration
[root@master bin]# kubectl get mutatingwebhookconfigurations
NAME WEBHOOKS AGE
istio-sidecar-injector-1-11-2 2 5d20h
istio-sidecar-injector-1-6-10 1 5d21h
istio-sidecar-injector-1-9-5 1 4m16s
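To check which namespaces a given revision's injector will match, inspecting its namespaceSelector is a quick sanity check (a sketch):
kubectl get mutatingwebhookconfiguration istio-sidecar-injector-1-9-5 -o jsonpath='{.webhooks[0].namespaceSelector}{"\n"}'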
# ValidatingWebhookConfiguration list
[root@master bin]# kubectl get ValidatingWebhookConfiguration
NAME WEBHOOKS AGE
istio-validator-1-11-2-istio-system 2 5d20h
istiod-istio-system 1 6m10s # the service name here is still istiod, which can trigger: failed calling webhook "validation.istio.io": Post "https://istiod.istio-system.svc:443/validate?timeout=30s": service "istiod" not found
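A sketch of repointing that webhook at the revisioned istiod without editing the whole YAML (webhook index 0 is an assumption):
kubectl patch validatingwebhookconfiguration istiod-istio-system --type json \
  -p '[{"op":"replace","path":"/webhooks/0/clientConfig/service/name","value":"istiod-1-9-5"}]'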
Data plane installation
# upgrade sdk/appmng
kubectl label namespace test istio-injection- istio.io/rev=1-9-5 # drop the istio-injection label here, since it takes precedence over istio.io/rev
# remove the injection label from the istio 1-9-5 pods
[root@master bin]# kubectl get pods -n istio-system --show-labels|grep 1-9-5
istio-ingressgateway-668bbd55b7-m7t7l 1/1 Running 0 42m app=istio-ingressgateway,chart=gateways,heritage=Tiller,install.operator.istio.io/owning-resource=unknown,istio.io/rev=1-9-5,istio=ingressgateway,operator.istio.io/component=IngressGateways,pod-template-hash=668bbd55b7,release=istio,service.istio.io/canonical-name=istio-ingressgateway,service.istio.io/canonical-revision=1-9-5,sidecar.istio.io/inject=false
istiod-1-9-5-7994494c44-76jzg 1/1 Running 0 43m app=istiod,install.operator.istio.io/owning-resource=unknown,istio.io/rev=1-9-5,istio=istiod,operator.istio.io/component=Pilot,pod-template-hash=7994494c44,sidecar.istio.io/inject=false
[root@master bin]# kubectl label pod istiod-1-9-5-7994494c44-76jzg -n istio-system sidecar.istio.io/inject-
pod/istiod-1-9-5-7994494c44-76jzg labeled
# bounce appmng so the change takes effect
kubectl -n test scale deployment appmng-v1 --replicas=0 && kubectl -n test scale deployment appmng-v1 --replicas=1
[root@master bin]# kubectl describe pods appmng-v1-78fb786d65-cdxnn -n test
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m21s default-scheduler Successfully assigned test/appmng-v1-78fb786d65-cdxnn to worker2
Normal Pulling 2m20s kubelet Pulling image "docker.io/istio/proxyv2:1.9.5"
Normal Pulled 2m17s kubelet Successfully pulled image "docker.io/istio/proxyv2:1.9.5" in 3.016337207s
Normal Created 2m17s kubelet Created container istio-init
Normal Started 2m17s kubelet Started container istio-init
Normal Pulling 2m17s kubelet Pulling image "docker.io/istio/proxyv2:1.9.5"
Normal Pulled 2m11s (x2 over 2m17s) kubelet Container image "ccr.deepwisdomai.com/sdk/appmng:master255d7708" already present on machine
Normal Created 2m11s (x2 over 2m17s) kubelet Created container container-eeh4an
Normal Started 2m11s (x2 over 2m17s) kubelet Started container container-eeh4an
Normal Pulled 2m11s kubelet Successfully pulled image "docker.io/istio/proxyv2:1.9.5" in 5.335666682s
Normal Created 2m11s kubelet Created container istio-proxy
Normal Started 2m11s kubelet Started container istio-proxy
# verified: access works
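As an aside, kubectl rollout restart (kubectl >= 1.15) reschedules without dropping to zero replicas and achieves the same re-injection:
kubectl -n test rollout restart deployment appmng-v1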
Upgrade
There are two approaches: upgrading via ks, and upgrading via istioctl.
ks
- The Istio version is tied to the ks version; ks v3.2.1 currently ships istiod-1-11-2
- When ks was upgraded from 3.1 to 3.2, it kept the old istiod-1-6-10 around
- Existing Pods keep running the old Istio sidecar; they only move to the new version after a rolling restart (see the version-listing sketch below)
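A sketch for listing the proxy image each Pod actually runs, to spot Pods still on the old sidecar:
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[?(@.name=="istio-proxy")].image}{"\n"}{end}' | grep proxyv2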
To verify this, we change the namespace label to switch the Istio revision used by that namespace, then run traffic tests.
Changing the Istio revision
# set the istio revision for the test namespace
k label namespace test istio-injection- istio.io/rev=1-6-10
# reschedule appauth-rpc (a sketch follows)
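# a sketch reusing the scale-to-zero pattern above; the deployment name appauth-rpc-v1 is assumed
kubectl -n test scale deployment appauth-rpc-v1 --replicas=0 && kubectl -n test scale deployment appauth-rpc-v1 --replicas=1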
# check the istio-proxy version on appauth-rpc
...
Containers:
  container-p7zluk:
    Container ID:   docker://4306e7be9eb3d022f6b6956560a0ce7731fc9b6e9b5d995d946c36d55104fb6c
    Image:          ccr.deepwisdomai.com/sdk/appauth-rpc:master075104a5
    Image ID:       docker-pullable://ccr.deepwisdomai.com/sdk/appauth-rpc@sha256:8537615401d6e0ad2c9f97b2e0d12b372ffe8f4f7cd31b070b126e980b2c136c
    Port:           8090/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 17 Jan 2022 11:33:36 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      ENV:  test
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pjlbb (ro)
  istio-proxy:
    Container ID:  docker://8d83ef623109b7c2184f452c61ec31aa622422f0991941697acc6035456aaf96
    Image:         istio/proxyv2:1.6.10
    Image ID:      docker-pullable://istio/proxyv2@sha256:3af8ac17829834469b5ec7011736a8789860e27ce6711c6d01ed85db54dbb1b2
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --serviceCluster
      appauth-rpc.$(POD_NAMESPACE)
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --trust-domain=cluster.local
      --concurrency
      2
    State:          Running
      Started:      Mon, 17 Jan 2022 11:33:38 +0800
...
Verifying istio-ingressgateway
The ks gateway goes through a Kubernetes Ingress (kubectl get Ingress -n test -o yaml); here we mainly verify the istio gateway (istio-proxy version 1.11).
# create a Gateway using istio's default ingressgateway
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sdk-gateway
  namespace: istio-system  # gotcha 1: the namespace must be set explicitly
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sdk
  namespace: test  # the namespace must be set explicitly
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/sdk-gateway  # gotcha 2: qualify the new gateway with its namespace
  http:
  - match:
    - uri:
        prefix: /token
    route:
    - destination:
        host: appmng
        port:
          number: 8080
EOF
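A quick smoke test of the /token route (a sketch; assumes a NodePort ingressgateway and a reachable node InternalIP):
INGRESS_PORT=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl -i "http://$NODE_IP:$INGRESS_PORT/token"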
When the cluster is large you can also deploy multiple istio gateways; see the single-cluster multi-Gateway setup.
istioctl
Upgrade/downgrade works the same way as installation.
Canary upgrade
Keep two Istio revisions installed side by side, change only the label on the namespace being upgraded (e.g. istio.io/rev=1-9-8), then reschedule the Pods in that namespace; a full sequence is sketched below.
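A minimal end-to-end sequence, as a sketch (revision names assumed; istioctl x uninstall was still experimental in this Istio line):
./istioctl install --set revision=1-9-8        # install the new control plane alongside the old one
kubectl label namespace test istio-injection- istio.io/rev=1-9-8 --overwrite
kubectl -n test rollout restart deployment     # reschedule every Deployment so Pods pick up the new sidecar
kubectl -n test get pods -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | grep proxyv2   # verify sidecar versions
./istioctl x uninstall --revision 1-6-10       # remove the old control plane once traffic is verified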
Common commands
# show labels on resources
kubectl get ns --show-labels
kubectl get pods --show-labels -n istio-system
# add / remove labels
k label namespace test istio.io/rev=1-9-8 istio-injection-
# remove the docker proxy config and reload
rm /etc/systemd/system/docker.service.d/http-proxy.conf -rf
systemctl daemon-reload
systemctl restart docker
# bounce istiod-1-9-5
kubectl -n istio-system scale deployment istiod-1-9-5 --replicas=0 && kubectl -n istio-system scale deployment istiod-1-9-5 --replicas=1
Open questions
- When a new app is created with service governance enabled, its istio-proxy version is the Istio version bound to the current ks
- Presumably ks injects the sidecar directly when creating the Pod; to be confirmed by reading the source (one check is sketched below)
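One way to check which injector produced a sidecar is the sidecar.istio.io/status annotation that injection stamps on the Pod (a sketch, using the appmng pod from earlier; the exact fields in the annotation vary by Istio version):
kubectl -n test get pod appmng-v1-78fb786d65-cdxnn -o jsonpath='{.metadata.annotations.sidecar\.istio\.io/status}{"\n"}'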
refer
- https://blog.frognew.com/2021/07/learning-istio-1.10-07.html
- https://jimmysong.io/istio-handbook/concepts/traffic-management-basic.html
- https://www.servicemesher.com/istio-handbook/practice/upgrade.html
- https://www.cuoshuo.com/blog/215148.html