A Hands-on Guide to Backing Up and Restoring Kafka Data on Kubernetes with Velero


Introduction

Kafka is a multi-partition, multi-replica distributed messaging system originally developed at LinkedIn in Scala. It can run on Kubernetes with ZooKeeper handling coordination, and offers high throughput, persistence, horizontal scalability, and stream-processing support, which is why it is widely used in internet big-data and financial scenarios. A growing number of open-source distributed processing systems, such as Cloudera, Storm, Spark, and Flink, also integrate with Kafka.

Kafka's message persistence and multi-replica mechanism greatly reduce the risk of data loss, so Kafka can even serve as a long-term data store. But how do you insure the data of a Kafka deployment running on Kubernetes, strengthening protection so the data can be recovered after incidents such as hacking or malicious tampering? This article walks through the answer using the open-source tool Velero.

Test Environment

Linux version:

cat /proc/version
Linux version 3.10.0-1127.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Tue Mar 31 23:36:51 UTC 2020

Kubernetes version:

kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
remote-master   Ready    master   84d   v1.18.9
worker-2        Ready    <none>   84d   v1.18.9

Install Velero 1.7.0

Download the Velero v1.7.0 release package and extract it.
For the remaining steps, refer to the environment-preparation section of
https://velero.cn/d/7-velero-cephcsi
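
The linked article sets Velero up with the Ceph CSI plugin. If you do not have that environment, a common alternative is an S3-compatible object store such as MinIO; the following is only a sketch under that assumption (the bucket name, credentials file, and MinIO address are placeholders), with restic enabled so the file-copy backups used later in this article work:

# Install Velero 1.7.0 with restic enabled, pointing at an S3-compatible store.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.3.0 \
  --bucket velero \
  --secret-file ./credentials-velero \
  --use-restic \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.example.com:9000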

Deploy ZooKeeper

In this article we use StatefulSets to create a 2-replica ZooKeeper ensemble and a 2-replica Kafka cluster in the kafka-test namespace.

Step 1: create the kafka-test namespace with kubectl.

kubectl create namespace kafka-test

Step 2: deploy ZooKeeper in kafka-test.

kubectl -n kafka-test apply -f ./zookeeper-deployment.yaml

For the content of zookeeper-deployment.yaml, refer to
https://github.com/jibutech/docs/blob/main/examples/workload/kafka/zookeeper-deployment.yaml

Adjust parameters such as namespace, replicas, and storageClassName as needed.
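
Before applying the manifest, it is worth checking that the StorageClass referenced in volumeClaimTemplates exists in your cluster (managed-nfs-storage in this example); otherwise the PVCs will stay Pending:

# Should list the class used by the datadir volumeClaimTemplate.
kubectl get storageclass managed-nfs-storage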

---
apiVersion: v1
kind: Service
metadata:
  name: zk-svc
  labels:
    app: zk-svc
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-cm
data:
  jvm.heap: "1G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "0"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-svc
  replicas: 2
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8szk:v3
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name : ZK_REPLICAS
          value: "2"
        - name : ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
                name: zk-cm
                key: jvm.heap
        - name : ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
                name: zk-cm
                key: tick
        - name : ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
                name: zk-cm
                key: init
        - name : ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
                name: zk-cm
                key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
                name: zk-cm
                key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
                name: zk-cm
                key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 30
          timeoutSeconds: 10
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 30
          timeoutSeconds: 10
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: managed-nfs-storage

Step 3: verify the ZooKeeper cluster and its DNS names.

for i in {0..1}; do kubectl -n kafka-test exec zk-$i -- hostname -f; done
zk-0.zk-svc.kafka-test.svc.cluster.local
zk-1.zk-svc.kafka-test.svc.cluster.local
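
Optionally, check which node is the leader and which the follower, using the same stat four-letter command that appears again later in this article:

# Each line should print Mode: leader or Mode: follower.
for i in 0 1; do
  kubectl -n kafka-test exec zk-$i -- bash -c 'echo stat | nc localhost 2181 | grep Mode'
done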

Step 4: expose ZooKeeper as external services.

for i in {0..1}; do kubectl -n kafka-test label pod zk-$i zkInst=$i; done

for i in {0..1}; do kubectl -n kafka-test expose po zk-$i --port=2181 --target-port=2181 --name=zk-$i --selector=zkInst=$i --type=NodePort; done
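
Each instance now has its own NodePort service in addition to the headless zk-svc; a quick sanity check (the assigned NodePort values will differ in your cluster):

# zk-0 and zk-1 should appear as NodePort services; zk-svc remains headless.
kubectl -n kafka-test get svc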

Step 5: once both zk pods are Ready, exec into a pod and check whether ZooKeeper is serving normally.

kubectl -n kafka-test get pod
NAME      READY   STATUS    RESTARTS   AGE
zk-0      1/1     Running   0          18m
zk-1      1/1     Running   0          18m

kubectl -n kafka-test exec -it zk-0 bash

kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
zookeeper@zk-0:/$ echo stat|nc 127.0.0.1 2181
Zookeeper version: 3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
Clients:
 /10.100.199.209:47010[1](queued=0,recved=33475,sent=33481)
 /127.0.0.1:41870[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/64
Received: 46663
Sent: 46668
Connections: 2
Outstanding: 0
Zxid: 0x400000096
Mode: follower
Node count: 135
zookeeper@zk-0:/$

Deploy Kafka

Once the ZooKeeper service is healthy, deploy Kafka.

Step 1: deploy Kafka in kafka-test.

kubectl -n kafka-test apply -f ./kafka-deployment.yaml

For the content of kafka-deployment.yaml, refer to
https://github.com/jibutech/docs/blob/main/examples/workload/kafka/kafka-deployment.yaml

Adjust parameters such as namespace, replicas, and storageClassName as needed.

---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-svc
  replicas: 2
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - kafka
              topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
             - weight: 1
               podAffinityTerm:
                 labelSelector:
                    matchExpressions:
                      - key: "app"
                        operator: In
                        values:
                        - zk
                 topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8skafka:v1
        resources:
          requests:
            memory: "1Gi"
            cpu: 500m
        ports:
        - containerPort: 9093
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-0.zk-svc.kafka-test.svc.cluster.local:2181,zk-1.zk-svc.kafka-test.svc.cluster.local:2181 \
          --override log.dirs=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=10 \
          --override log.flush.interval.ms=100 \
          --override log.flush.offset.checkpoint.interval.ms=6000 \
          --override log.flush.scheduler.interval.ms=600 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
        env:
        - name: KAFKA_HEAP_OPTS
          value : "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
           command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
        livenessProbe:
          initialDelaySeconds: 10
          timeoutSeconds: 5
          exec:
           command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
      storageClassName: managed-nfs-storage

Step 2: expose Kafka as external services.

for i in {0..1}; do kubectl -n kafka-test label pod kafka-$i kafkaInst=$i; done

for i in {0..1}; do kubectl -n kafka-test expose po kafka-$i --port=9093 --target-port=9093 --name=kafka-$i --selector=kafkaInst=$i --type=NodePort; done

Step 3: once both kafka pods are Ready, exec into each pod in turn and verify that messages can be produced and consumed.
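
In the session below, the consumer on kafka-0 already reads eleven messages (aaa through kkk); they are assumed to have been produced beforehand with a producer session along these lines (a sketch; the topic test is created automatically because auto.create.topics.enable=true):

kubectl -n kafka-test exec -it kafka-0 -- bash
# Type the test messages one per line, then press Ctrl-C to exit the producer.
kafka-console-producer.sh --topic test --broker-list localhost:9093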

kubectl -n kafka-test get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
kafka-0   1/1     Running   0          18s   10.100.199.224   remote-master   <none>           <none>
kafka-1   1/1     Running   0          10s   10.100.133.219   worker-2        <none>           <none>
zk-0      1/1     Running   0          27m   10.100.199.230   remote-master   <none>           <none>
zk-1      1/1     Running   0          27m   10.100.133.254   worker-2        <none>           <none>

kubectl -n kafka-test exec -it kafka-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
kafka@kafka-0:/$ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093 --from-beginning
aaa
bbb
ccc
ddd
eee
fff
ggg
hhh
iii
jjj
kkk
^CProcessed a total of 11 messages
kafka@kafka-0:/$ exit

kubectl -n kafka-test exec -it kafka-1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
kafka@kafka-1:/$ kafka-console-producer.sh --topic test --broker-list localhost:9093
lll
mmm
nnn
^Ckafka@kafka-1:/$ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093 --from-beginning
aaa
bbb
ccc
ddd
eee
fff
ggg
hhh
iii
jjj
kkk
lll
mmm
nnn
^CProcessed a total of 14 messages

Back Up and Restore Kafka

  1. Create a backup
# velero backup create kafka-backup1 --include-namespaces=kafka-test --default-volumes-to-restic --volume-snapshot-locations default
Backup request "kafka-backup1" submitted successfully.
Run `velero backup describe kafka-backup1` or `velero backup logs kafka-backup1` for more details.

The backup command here only specifies the namespace and does not filter resources in any way.
In addition, --default-volumes-to-restic is a flag introduced in Velero v1.5; it makes Velero back up the volumes with restic file copy. For a detailed comparison of the backup methods, see https://velero.cn/d/8-velero
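
The backup can also be narrowed down with resource or label filters if you only want to protect part of the namespace. A minimal sketch (the backup name kafka-backup2 and the label app=kafka are illustrative, not taken from the steps above):

# Back up only objects labelled app=kafka in kafka-test, still copying
# the volume data with restic.
velero backup create kafka-backup2 \
  --include-namespaces=kafka-test \
  --selector app=kafka \
  --default-volumes-to-restic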

  2. Check the backup status and details
velero backup describe kafka-backup1 --details
Name:         kafka-backup1
Namespace:    qiming-backend
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.18.9
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=18

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  kafka-test
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2021-12-13 16:06:27 +0800 CST
Completed:  2021-12-13 16:06:58 +0800 CST

Expiration:  2022-01-12 16:06:27 +0800 CST

Total items to be backed up:  40
Items backed up:              40

Resource List:
  apps/v1/ControllerRevision:
    - kafka-test/kafka-6d44c778b6
    - kafka-test/zk-d8566c99
  apps/v1/StatefulSet:
    - kafka-test/kafka
    - kafka-test/zk
  discovery.k8s.io/v1beta1/EndpointSlice:
    - kafka-test/kafka-0-8w26n
    - kafka-test/kafka-1-8nbk7
    - kafka-test/kafka-svc-p8z6c
    - kafka-test/zk-0-7zp8f
    - kafka-test/zk-1-lb7nw
    - kafka-test/zk-svc-wwhlt
  policy/v1beta1/PodDisruptionBudget:
    - kafka-test/kafka-pdb
    - kafka-test/zk-pdb
  v1/ConfigMap:
    - kafka-test/zk-cm
  v1/Endpoints:
    - kafka-test/kafka-0
    - kafka-test/kafka-1
    - kafka-test/kafka-svc
    - kafka-test/zk-0
    - kafka-test/zk-1
    - kafka-test/zk-svc
  v1/Namespace:
    - kafka-test
  v1/PersistentVolume:
    - pvc-823ffe85-f4d3-40f3-8c17-6675cb42a92c
    - pvc-86059764-1ed6-4b15-9c6b-01106b2e144f
    - pvc-a377d3de-cdbc-44c3-aa8c-ba39f62caa95
    - pvc-ad56d5c0-8987-4e84-8ed0-12c43e7aa322
  v1/PersistentVolumeClaim:
    - kafka-test/datadir-kafka-0
    - kafka-test/datadir-kafka-1
    - kafka-test/datadir-zk-0
    - kafka-test/datadir-zk-1
  v1/Pod:
    - kafka-test/kafka-0
    - kafka-test/kafka-1
    - kafka-test/zk-0
    - kafka-test/zk-1
  v1/Secret:
    - kafka-test/default-token-87b2l
  v1/Service:
    - kafka-test/kafka-0
    - kafka-test/kafka-1
    - kafka-test/kafka-svc
    - kafka-test/zk-0
    - kafka-test/zk-1
    - kafka-test/zk-svc
  v1/ServiceAccount:
    - kafka-test/default

Velero-Native Snapshots: <none included>

Restic Backups:
  Completed:
    kafka-test/kafka-0: datadir
    kafka-test/kafka-1: datadir
    kafka-test/zk-0: datadir
    kafka-test/zk-1: datadir

Wait until Phase: Completed, which indicates that the backup has finished.
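
While waiting, you can also poll the backup list instead of running the full describe command; something like the following is enough to see the phase:

# STATUS switches from InProgress to Completed when the backup is done.
velero backup get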

  3. After the backup has succeeded, delete the kafka-test namespace from the cluster.

    kubectl delete ns kafka-test
  4. Create a restore and check its details

    # velero create restore --from-backup kafka-backup1
    Restore request "kafka-backup1-20211213163910" submitted successfully.
    Run `velero restore describe kafka-backup1-20211213163910` or `velero restore logs kafka-backup1-20211213163910` for more details.
    
    # velero restore describe kafka-backup1-20211213163910
    Name:         kafka-backup1-20211213163910
    Namespace:    qiming-backend
    Labels:       <none>
    Annotations:  <none>
    
    Phase:                       Completed
    Total items to be restored:  40
    Items restored:              40
    
    Started:    2021-12-13 16:39:10 +0800 CST
    Completed:  2021-12-13 16:39:50 +0800 CST
    
    Backup:  kafka-backup1
    
    Namespaces:
      Included:  all namespaces found in the backup
      Excluded:  <none>
    
    Resources:
      Included:        *
      Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
      Cluster-scoped:  auto
    
    Namespace mappings:  <none>
    
    Label selector:  <none>
    
    Restore PVs:  auto
    
    Restic Restores (specify --details for more information):
      Completed:  4
    
    Preserve Service NodePorts:  auto

    Wait until Phase: Completed, which indicates that the restore has finished.

  5. After the restore has succeeded, check the pods in kafka-test and exec into a kafka pod to verify the data.

    # kubectl -n kafka-test get pod -o wide
    NAME      READY   STATUS    RESTARTS   AGE     IP               NODE            NOMINATED NODE   READINESS GATES
    kafka-0   1/1     Running   4          4m58s   10.100.133.233   worker-2        <none>           <none>
    kafka-1   1/1     Running   4          4m58s   10.100.199.255   remote-master   <none>           <none>
    zk-0      1/1     Running   0          4m58s   10.100.199.246   remote-master   <none>           <none>
    zk-1      1/1     Running   0          4m58s   10.100.133.254   worker-2        <none>           <none>
    
    # kubectl -n kafka-test exec -it kafka-1 bash
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
    kafka@kafka-1:/$ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093 --from-beginning
    aaa
    bbb
    ccc
    ddd
    eee
    fff
    ggg
    hhh
    iii
    jjj
    kkk
    lll
    mmm
    nnn
    ^CProcessed a total of 14 messages
    kafka@kafka-1:/$ 

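Finally, because the volume data is restored through restic, the PVCs are re-created and bound to freshly provisioned PVs, so the PV names will differ from those listed in the backup. A quick way to confirm that all four data volumes came back:

# datadir-kafka-0/1 and datadir-zk-0/1 should all be in Bound status.
kubectl -n kafka-test get pvc
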
Summary

Using the open-source tool Velero, this article demonstrated backing up and restoring a multi-replica Kafka and ZooKeeper deployment on Kubernetes, and successfully recovered the data on Kafka's persistent volumes.
