517.【kubernetes】Creating a Multi-Node MinIO Cluster

Before You Start

  1. The cluster needs at least 4 nodes; 3 nodes will not work.
  2. For installing the MinIO Operator, see 506.【kubernetes】Deploying the Minio Operator and Minio Plugin on a k8s Cluster. (Quick prerequisite checks are sketched below.)
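
Both prerequisites can be confirmed from the command line before creating the tenant. A minimal sketch, assuming the Operator was installed into the minio-operator namespace as in post 506:

# Check that the cluster has at least 4 Ready nodes
kubectl get nodes

# Check that the MinIO Operator pods are running
# (assumes the Operator lives in the minio-operator namespace)
kubectl get pods -n minio-operator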

I. Access the MinIO Operator Management Console

  1. Open the proxy port (an alternative without the kubectl minio plugin is sketched after the output below)
[root@k8s0 kubernetes]# kubectl minio proxy -n minio-operator 
Starting port forward of the Console UI.

To connect open a browser and go to http://localhost:9090

Current JWT to login: eyJhbGciOiJSUzI1NiIsImtpZCI6Inl6cFRBM25HUTJrNUNsck1CR2FpNmVlaHlNaWVHV1lVdEkwOGtrZjBMckkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaW5pby1vcGVyYXRvciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjb25zb2xlLXNhLXNlY3JldCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjb25zb2xlLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZmQ5ZTZjYjMtMDBiOS00NTA5LTgzMzctMzBlNjUyZWNlMDU2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1pbmlvLW9wZXJhdG9yOmNvbnNvbGUtc2EifQ.djbnoBplYXwaV2dfUC0BqSZ1rCRFP85SxZYHy3Z1l0rzvKpcUn_B4O6jg6gt4noJh2c8712t_vsHTnlgUKu4ptif2vVFfcCzX21G6ILcZ-X0b6PzQ58rFeFG6RzB-0E1TZXEcaXWrXfXoplwywtVscdyvDNXSXCvwSquyrHs4o4oP2nsxhL0OLblLFT2RiA-6jpwvnaWUEBaO3paZzyvCiUcBoPgQd2HHHRih2MXRWG72w3lCjICa8z60ihKq3NTqbTB5HUJouRoDluI5ykYcajMZ6sXMDFEDG30Wa5q5NE_pXH9XtMOBFXlFiuJZhfq72aD3-snlaGoDQajMuZ_ng

Forwarding from 0.0.0.0:9090 -> 9090
Handling connection for 9090
Handling connection for 9090
Handling connection for 9090
Handling connection for 9090
Handling connection for 9090
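If the kubectl minio plugin is not installed, a plain port-forward of the Operator console Service should also work. This is a sketch, assuming the default Service name console on port 9090 in the minio-operator namespace:

# Forward local port 9090 to the Operator console Service
# ("console" and port 9090 are the Operator defaults; adjust if yours differ)
kubectl port-forward svc/console -n minio-operator 9090:9090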
  2. Open the management console in a browser and click Create

  3. Configure the Setup page

  • Name and Namespace: anything you like
  • Storage Class: set it according to your environment (mine is direct-csi-min-io)
  4. Configure the Audit Log page

  • Log Search Storage Class: do not keep the default (mine is direct-csi-min-io); choosing the default leads to the problem described at the end of this post
  5. Configure the Monitoring page

  • Storage Class: again, do not keep the default. (A way to double-check the resulting storage classes is sketched below.)
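
After the tenant has been created, one way to confirm that none of its volumes landed on the default Storage Class is to list the PVCs in the tenant namespace. A minimal sketch, assuming the namespace my-minio-tenant used later in this post:

# Show every PVC created for the tenant together with its requested storage class
kubectl get pvc -n my-minio-tenant \
  -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,STATUS:.status.phase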

Verify the Installation

  1. Verify from the command line
[root@k8s0 ~]# kubectl get all -n my-minio-tenant
NAME                                                  READY   STATUS    RESTARTS        AGE
pod/my-minio-tenant-log-0                             1/1     Running   0               5m4s
pod/my-minio-tenant-log-search-api-86448fc7c9-9kgwg   1/1     Running   4 (4m19s ago)   5m3s
pod/my-minio-tenant-pool-0-0                          1/1     Running   0               5m4s
pod/my-minio-tenant-pool-0-1                          1/1     Running   0               5m4s
pod/my-minio-tenant-pool-0-2                          1/1     Running   0               3m4s
pod/my-minio-tenant-pool-0-3                          1/1     Running   0               3m4s

NAME                                     TYPE           CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
service/minio                            LoadBalancer   169.169.34.170    <pending>     443:17303/TCP    6m5s
service/my-minio-tenant-console          LoadBalancer   169.169.104.218   <pending>     9443:64620/TCP   6m5s
service/my-minio-tenant-hl               ClusterIP      None              <none>        9000/TCP         6m5s
service/my-minio-tenant-log-hl-svc       ClusterIP      None              <none>        5432/TCP         5m4s
service/my-minio-tenant-log-search-api   ClusterIP      169.169.228.134   <none>        8080/TCP         5m3s

NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-minio-tenant-log-search-api   1/1     1            1           5m3s

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/my-minio-tenant-log-search-api-86448fc7c9   1         1         1       5m3s

NAME                                      READY   AGE
statefulset.apps/my-minio-tenant-log      1/1     5m4s
statefulset.apps/my-minio-tenant-pool-0   4/4     5m5s
  • Everything is up and running (the two <pending> EXTERNAL-IP entries do not affect usage; a Tenant-resource check is sketched below)
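Besides kubectl get all, the Tenant custom resource created by the Operator also reports the tenant's state. A minimal sketch, assuming the tenant is named my-minio-tenant like its namespace:

# Show the Tenant resource and its reported state
kubectl get tenants -n my-minio-tenant
# Inspect details and events of the tenant
kubectl describe tenant my-minio-tenant -n my-minio-tenant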
  2. Verify in the k8s cluster dashboard

  • All container statuses are normal
  3. Verify in the MinIO Tenant console
    Open https://ip:port in a browser (one way to find the port is sketched after this list).

    After logging in you can see the buckets

  4. OK, done
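
For the https://ip:port address above, one option is to read the NodePort assigned to the tenant console Service (9443:64620/TCP in the output above) and browse to any node's IP on that port. A minimal sketch using the Service shown earlier:

# Look up the port mapping of the tenant console Service
kubectl get svc my-minio-tenant-console -n my-minio-tenant
# Then open https://<any-node-ip>:<node-port> in a browser and log in with the tenant credentials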

[Appendix] Problem that appears when the default Storage Class is selected

0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.

Solution: delete the tenant and create it again, and this time do not select the default Storage Class.
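
Before recreating, the diagnosis can be confirmed: the scheduler message means the tenant's PVCs stayed unbound because the selected (default) Storage Class could not provision volumes on these nodes. A quick check, assuming the same tenant namespace:

# See which PVCs are stuck in Pending and which storage class they requested
kubectl get pvc -n my-minio-tenant
# Read the events of a stuck PVC for the underlying provisioning error
kubectl describe pvc <pvc-name> -n my-minio-tenant
# List the storage classes; the one marked (default) is the one that failed here
kubectl get storageclass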

