3. Deploying a ZooKeeper Cluster with Helm

3.1 Helm preparation

# Helm client installation documentation
https://helm.sh/docs/intro/install/

# Add the bitnami and official helm repositories:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add stable https://charts.helm.sh/stable

# Update the repositories
helm repo update
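
A quick sanity check that the client works and both repositories are registered (output omitted here; it will vary by environment):

# Verify the helm client and the configured repositories
helm version --short
helm repo list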

3.2 Deploying the ZooKeeper and Kafka clusters

# sc
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: infra-nfs-zk
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"   # "false": data is not retained when the PVC is deleted; "true": data is kept
  
# pvc
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-zk
  namespace: infra
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: infra-nfs-zk
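
Assuming the two manifests above are saved locally (the file names below are placeholders), they can be applied and verified before installing the chart:

# Create the namespace if it does not exist yet, then apply the StorageClass and PVC
kubectl create namespace infra
kubectl apply -f sc-zk.yaml -f pvc-zk.yaml

# The StorageClass should be listed and the PVC should be provisioned by the NFS provisioner
kubectl get sc infra-nfs-zk
kubectl get pvc pvc-zk -n infra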
  • Installation method 1: download the chart first, then install it
# Check the available chart versions
[root@k8s-master01 helm]# helm search repo zookeeper
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/zookeeper               10.2.3          3.8.0           Apache ZooKeeper provides a reliable, centraliz...
bitnami/dataplatform-bp1        12.0.2          1.0.1           DEPRECATED This Helm chart can be used for the ...
bitnami/dataplatform-bp2        12.0.5          1.0.1           DEPRECATED This Helm chart can be used for the ...
bitnami/kafka                   19.0.0          3.3.1           Apache Kafka is a distributed streaming platfor...
bitnami/schema-registry         6.0.0           7.2.2           Confluent Schema Registry provides a RESTful in...
bitnami/solr                    6.2.2           9.0.0           Apache Solr is an extremely powerful, open sour...

# List all historical versions of the zookeeper chart
helm search repo zookeeper -l

# Pull the chart
helm pull bitnami/zookeeper
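
If a particular chart version is required (for example to match the 10.2.3 archive unpacked below), it can be pinned when pulling:

# Pull a pinned chart version instead of the latest one
helm pull bitnami/zookeeper --version 10.2.3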

# Unpack the chart
[root@k8s-master01 helm]# tar -xf zookeeper-10.2.3.tgz && cd zookeeper/

# Edit the chart configuration (values.yaml)
# StorageClass name
persistence.storageClass: "infra-nfs-zk"
dataLogDir.existingClaim: "pvc-zk"
replicaCount: 3
# tls.client.enabled: false  (disabled by default)
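
The dotted keys above correspond to nested fields in values.yaml. A minimal sketch of the fragment to edit is shown below; the key layout is assumed from the bitnami/zookeeper 10.2.3 chart, so verify it against the values.yaml you actually pulled:

# values.yaml (excerpt) - sketch only, all other fields keep their chart defaults
replicaCount: 3

persistence:
  enabled: true
  storageClass: "infra-nfs-zk"
  # existing claim for the dedicated transaction-log directory referenced above
  dataLogDir:
    existingClaim: "pvc-zk"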

# Install the chart after adjusting the relevant values.yaml settings: replica count, auth, persistence
[root@k8s-master01 zookeeper]# helm install -n infra zookeeper .
NAME: zookeeper
LAST DEPLOYED: Wed Oct 19 23:01:23 2022
NAMESPACE: infra
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 10.2.3
APP VERSION: 3.8.0

** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zookeeper.infra.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace infra -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    kubectl port-forward --namespace infra svc/zookeeper 2181:2181 &
    zkCli.sh 127.0.0.1:2181
    
# Check the deployment result
[root@k8s-master01 helm]# kubectl get po -n infra 
NAME                                      READY   STATUS    RESTARTS   AGE
zookeeper-0                               1/1     Running   0          3m49s
zookeeper-1                               1/1     Running   0          3m53s
zookeeper-2                               1/1     Running   0          3m49s
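
To confirm the ensemble actually formed (one leader, two followers), each pod can be asked for its mode; this assumes the default bitnami image, which ships the standard ZooKeeper scripts on the PATH:

# Every pod should report Mode: leader or Mode: follower
for i in 0 1 2; do kubectl exec -n infra zookeeper-$i -- zkServer.sh status; done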
  • Installation method 2: install Kafka directly from the repository
[root@k8s-master01 helm]# helm install kafka1 bitnami/kafka --set zookeeper.enabled=false --set replicaCount=3 --set externalZookeeper.servers=zookeeper --set persistence.enabled=false -n infra
NAME: kafka1
LAST DEPLOYED: Wed Oct 19 23:34:33 2022
NAMESPACE: infra
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 19.0.0
APP VERSION: 3.3.1

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka1.infra.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka1-0.kafka1-headless.infra.svc.cluster.local:9092
    kafka1-1.kafka1-headless.infra.svc.cluster.local:9092
    kafka1-2.kafka1-headless.infra.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka1-client --restart='Never' --image docker.io/bitnami/kafka:3.3.1-debian-11-r1 --namespace infra --command -- sleep infinity
    kubectl exec --tty -i kafka1-client --namespace infra -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka1-0.kafka1-headless.infra.svc.cluster.local:9092,kafka1-1.kafka1-headless.infra.svc.cluster.local:9092,kafka1-2.kafka1-headless.infra.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka1.infra.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
            
# Check the deployment result
[root@k8s-master01 ~]# kubectl get po -n infra 
NAME                                      READY   STATUS              RESTARTS   AGE
kafka1-0                                  1/1     Running             0          14m
kafka1-1                                  0/1     Running             0          14m
kafka1-2                                  1/1     Running             0          14m
# Kafka verification (using the kafka1 release deployed above in the infra namespace)
kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.3.1-debian-11-r1 --namespace infra --command -- sleep infinity
kubectl exec --tty -i kafka-client --namespace infra -- bash

# Producer:
kafka-console-producer.sh \
--broker-list kafka1-0.kafka1-headless.infra.svc.cluster.local:9092,kafka1-1.kafka1-headless.infra.svc.cluster.local:9092,kafka1-2.kafka1-headless.infra.svc.cluster.local:9092 \
--topic test

# Consumer:
kafka-console-consumer.sh \
--bootstrap-server kafka1.infra.svc.cluster.local:9092 \
--topic test \
--from-beginning
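
As a final check, the partition placement of the test topic can be inspected from inside the same client pod; this sketch assumes the topic was auto-created by the producer above:

# Show which broker holds each partition/replica of the test topic
kafka-topics.sh \
--bootstrap-server kafka1.infra.svc.cluster.local:9092 \
--describe \
--topic test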

Original article: http://www.cnblogs.com/hsyw/p/16808282.html
