
A Free Container Storage and Disaster-Recovery Backup/Restore Solution on Kubernetes

2021-12-09 22:16 · 奇妙的Linux世界 · 李大仁


Why do we still need local storage in the cloud-native era?

In the cloud-native era, the usual approach to containerizing stateful applications is to separate compute from storage: the compute layer scales elastically via containers, and the storage layer must support dynamic attachment to match. Network-based storage systems may be the most natural way to get dynamic attachment. Yet for several reasons, such as the poor disk I/O performance of network storage, and the fact that middleware with built-in high availability does not need dynamic attachment at the storage layer, the industry still favors local storage. Middleware such as RabbitMQ and Kafka therefore prefer local disks, with Kubernetes layered on top to automate operations, replacing manual disk management with dynamic allocation, expansion, and isolation.

Is there a backup and restore solution better suited to Kubernetes?

Traditional data backup comes in two flavors: either the storage server takes periodic snapshots, or a dedicated backup agent is deployed on every target server, pointed at the data directories, and periodically copies the data to external storage. Both approaches suffer from rigid backup mechanics and slow data recovery, and cannot adapt to the elastic, pooled deployment model of containers. We need backup and restore capabilities that fit the Kubernetes container scenario: one-click backup, fast restore.

Overall plan

  1. Prepare a Kubernetes cluster whose master node runs no workloads, ideally with 2 worker nodes;
  2. Deploy carina, a cloud-native local storage solution for containers, and test its automated local-disk management;
  3. Deploy velero, a cloud-native backup solution, and test its data backup and restore capabilities.

Kubernetes environment

  • Version: v1.19.14
  • Cluster size: 1 master, 2 workers
  • Disks: apart from one dedicated disk for the root filesystem, no other disks are attached initially (a quick check is shown below)
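To confirm this starting state, a quick lsblk on each node helps before attaching new disks; the output here is illustrative of a node with only a root disk:

    # list block devices; initially only the root disk should appear
    > lsblk --output NAME,SIZE,TYPE,MOUNTPOINT
    NAME    SIZE TYPE MOUNTPOINT
    sda      30G disk
    └─sda1   30G part /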

Deploying carina

1. Deployment script

Follow the official documentation[1]. The deployment method differs for clusters before and after Kubernetes 1.22, most likely because many APIs changed in 1.22; a sketch of the install follows.
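As a minimal sketch (the manifest path inside the repository is hypothetical; check the carina docs for the manifest matching your cluster version):

    # hypothetical paths; pick the deploy manifest matching your Kubernetes
    # version (the pre- and post-1.22 manifests differ)
    > git clone https://github.com/carina-io/carina.git
    > kubectl apply -f carina/deploy/kubernetes/
    # the components land in kube-system; wait for them to come up
    > kubectl get pods -n kube-system | grep carina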

2. Preparing local raw disks

Attach one raw disk to each worker, ideally 20G or larger: carina reserves 10G of disk space by default for its storage-management metadata. Since I run on Google Cloud, the disks can be attached as shown below; the first is an SSD, the second an HDD.

[Figure: gcloud apply ssd]

[Figure: gcloud apply hdd]
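The screenshots show the Google Cloud console flow; roughly the same can be done with the gcloud CLI. Disk names and zone below are illustrative; the sizes match the 20G SSD and 200G HDD seen later:

    # create and attach an SSD persistent disk to worker u20-w1
    > gcloud compute disks create carina-ssd-w1 --size=20GB --type=pd-ssd --zone=asia-east1-b
    > gcloud compute instances attach-disk u20-w1 --disk=carina-ssd-w1 --zone=asia-east1-b
    # create and attach an HDD (pd-standard) persistent disk
    > gcloud compute disks create carina-hdd-w1 --size=200GB --type=pd-standard --zone=asia-east1-b
    > gcloud compute instances attach-disk u20-w1 --disk=carina-hdd-w1 --zone=asia-east1-b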

3. Make sure the carina components can discover and read the newly attached raw disks. This requires changing carina's default disk-scan policy:

    # view the configmap that holds the disk-scan policy
    # docs: https://github.com/carina-io/carina/blob/main/docs/manual/disk-manager.md
    > kubectl describe cm carina-csi-config -n kube-system
    Name:         carina-csi-config
    Namespace:    kube-system
    Labels:       class=carina
    Annotations:  <none>
    Data
    ====
    config.json:
    ----
    {
      # changed to match Google Cloud's disk naming: disks whose names start with "sd"
      "diskSelector": ["sd+"],
      # scan every 180 seconds
      "diskScanInterval": "180",
      "diskGroupPolicy": "type",
      "schedulerStrategy": "spradout"
    }
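To change the policy, edit the configmap in place; carina picks the new policy up on the next scan cycle (within 180s given the interval above):

    # adjust config.json, e.g. the diskSelector pattern, then save and exit
    > kubectl edit cm carina-csi-config -n kube-system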

4. Confirm that the local disks have been recognized by checking the node status maintained by carina:

    > kubectl get node u20-w1 -o template --template={{.status.capacity}}
    map[carina.storage.io/carina-vg-hdd:200 carina.storage.io/carina-vg-ssd:20 cpu:2 ephemeral-storage:30308240Ki hugepages-1Gi:0 hugepages-2Mi:0 memory:4022776Ki pods:110]
    > kubectl get node u20-w1 -o template --template={{.status.allocatable}}
    map[carina.storage.io/carina-vg-hdd:189 carina.storage.io/carina-vg-ssd:1 cpu:2 ephemeral-storage:27932073938 hugepages-1Gi:0 hugepages-2Mi:0 memory:3920376Ki pods:110]
    # HDD capacity now reads 200 and SSD capacity 20. How does carina tell SSD from HDD?
    # More on that below.
    # Also note the reserved 10G that cannot be used: the brand-new 200G disk shows only
    # 189G allocatable (allowing for rounding from unit conversion).
    # This information matters: when a PV is created, the node's current disk capacity is
    # read from here and the PV is placed according to the scheduling strategy.
    # There is also a single place to view disk usage across all nodes:
    > kubectl get configmap carina-node-storage -n kube-system -o json | jq .data.node
    [
      {
        "allocatable.carina.storage.io/carina-vg-hdd": "189",
        "allocatable.carina.storage.io/carina-vg-ssd": "1",
        "capacity.carina.storage.io/carina-vg-hdd": "200",
        "capacity.carina.storage.io/carina-vg-ssd": "20",
        "nodeName": "u20-w1"
      },
      {
        "allocatable.carina.storage.io/carina-vg-hdd": "189",
        "allocatable.carina.storage.io/carina-vg-ssd": "0",
        "capacity.carina.storage.io/carina-vg-hdd": "200",
        "capacity.carina.storage.io/carina-vg-ssd": "0",
        "nodeName": "u20-w2"
      }
    ]

5. How are HDDs and SSDs recognized automatically?

    # When carina-node starts, it groups the node's disks into SSD and HDD sets and
    # builds LVM volume groups (VGs) from them.
    # Check a disk's type with `lsblk --output NAME,ROTA`: ROTA=1 means HDD, ROTA=0 means SSD.
    # Both filesystem and raw block volumes are supported; filesystem volumes can be xfs or ext4.
    # Below is the storageclass declared beforehand, used to create PVs automatically:
    > k get sc csi-carina-sc -o json | jq .metadata.annotations
    {
      "kubectl.kubernetes.io/last-applied-configuration": {
        "allowVolumeExpansion": true,
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {
          "annotations": {},
          "name": "csi-carina-sc"
        },
        "mountOptions": [
          "rw"
        ],
        "parameters": {
          "csi.storage.k8s.io/fstype": "ext4"
        },
        "provisioner": "carina.storage.io",
        "reclaimPolicy": "Delete",
        "volumeBindingMode": "WaitForFirstConsumer"
      }
    }
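For example, on a worker with an HDD and an SSD attached, the ROTA check would look roughly like this (device names and output are illustrative):

    # ROTA (rotational) = 1 → HDD, 0 → SSD; carina groups disks into VGs by this flag
    > lsblk --output NAME,ROTA
    NAME ROTA
    sda     1
    sdb     1
    sdc     0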

Testing carina's automatic PV provisioning

1. For carina to provision PVs automatically, a storageclass must first be declared and created.

    # storageclass yaml
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-carina-sc
    provisioner: carina.storage.io # name of this CSI driver; must not be changed
    parameters:
      # xfs and ext4 are both supported; defaults to ext4 if omitted
      csi.storage.k8s.io/fstype: ext4
      # disk-group selection; carina groups SSD and HDD disks automatically
      # SSD: ssd  HDD: hdd
      # if omitted, a disk type is picked at random
      #carina.storage.io/disk-type: hdd
    reclaimPolicy: Delete
    allowVolumeExpansion: true # true enables volume expansion
    # WaitForFirstConsumer: the PV is created only after a pod binds the claim
    volumeBindingMode: WaitForFirstConsumer
    # mount options can be set; read-write here
    mountOptions:
      - rw

After kubectl apply, confirm it with:

    > kubectl get sc csi-carina-sc -o json | jq .metadata.annotations
    {
      "kubectl.kubernetes.io/last-applied-configuration": {
        "allowVolumeExpansion": true,
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {
          "annotations": {},
          "name": "csi-carina-sc"
        },
        "mountOptions": [
          "rw"
        ],
        "parameters": {
          "csi.storage.k8s.io/fstype": "ext4"
        },
        "provisioner": "carina.storage.io",
        "reclaimPolicy": "Delete",
        "volumeBindingMode": "WaitForFirstConsumer"
      }
    }

2. Deploying a test application with storage

The test scenario is simple: a plain nginx service with a data volume mounted to hold a custom HTML page.

    # pvc for nginx html
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: csi-carina-pvc-big
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      # name of the carina storageclass
      storageClassName: csi-carina-sc
      volumeMode: Filesystem

    # nginx deployment yaml
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: carina-deployment-big
      namespace: default
      labels:
        app: web-server-big
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: web-server-big
      template:
        metadata:
          labels:
            app: web-server-big
        spec:
          containers:
            - name: web-server
              image: nginx:latest
              imagePullPolicy: "IfNotPresent"
              volumeMounts:
                - name: mypvc-big
                  mountPath: /usr/share/nginx/html # nginx serves pages from this directory by default
          volumes:
            - name: mypvc-big
              persistentVolumeClaim:
                claimName: csi-carina-pvc-big
                readOnly: false

Check that the test application is running:

    # pvc
    > kubectl get pvc
    NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    csi-carina-pvc-big   Bound    pvc-74e683f9-d2a4-40a0-95db-85d1504fd961   10Gi       RWO            csi-carina-sc   109s
    # pv
    > kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS    REASON   AGE
    pvc-74e683f9-d2a4-40a0-95db-85d1504fd961   10Gi       RWO            Delete           Bound    default/csi-carina-pvc-big   csi-carina-sc            109s
    # nginx pod
    > kubectl get po -l app=web-server-big -o wide
    NAME                                    READY   STATUS    RESTARTS   AGE     IP          NODE     NOMINATED NODE   READINESS GATES
    carina-deployment-big-6b78fb9fd-mwf8g   1/1     Running   0          3m48s   10.0.2.69   u20-w2   <none>           <none>
    # check the node's disk usage: allocatable has shrunk by 10
    > kubectl get node u20-w2 -o template --template={{.status.allocatable}}
    map[carina.storage.io/carina-vg-hdd:179 carina.storage.io/carina-vg-ssd:0 cpu:2 ephemeral-storage:27932073938 hugepages-1Gi:0 hugepages-2Mi:0 memory:3925000Ki pods:110]

Log in to the node running the test pod and check how the disk is mounted:

    # disk allocation: there are two LVM pieces, a metadata volume and a pool volume
    > lsblk
    ...
    sdb                           8:16   0  200G  0 disk
    ├─carina--vg--hdd-thin--pvc--74e683f9--d2a4--40a0--95db--85d1504fd961_tmeta
    │                           253:0    0   12M  0 lvm
    │ └─carina--vg--hdd-thin--pvc--74e683f9--d2a4--40a0--95db--85d1504fd961-tpool
    │                           253:2    0   10G  0 lvm
    │   ├─carina--vg--hdd-thin--pvc--74e683f9--d2a4--40a0--95db--85d1504fd961
    │   │                       253:3    0   10G  1 lvm
    │   └─carina--vg--hdd-volume--pvc--74e683f9--d2a4--40a0--95db--85d1504fd961
    │                           253:4    0   10G  0 lvm  /var/lib/kubelet/pods/57ded9fb-4c82-4668-b77b-7dc02ba05fc2/volumes/kubernetes.io~
    └─carina--vg--hdd-thin--pvc--74e683f9--d2a4--40a0--95db--85d1504fd961_tdata
                                253:1    0   10G  0 lvm
      └─carina--vg--hdd-thin--pvc--74e683f9--d2a4--40a0--95db--85d1504fd961-tpool
                                253:2    0   10G  0 lvm
        ├─carina--vg--hdd-thin--pvc--74e683f9--d2a4--40a0--95db--85d1504fd961
        │                       253:3    0   10G  1 lvm
        └─carina--vg--hdd-volume--pvc--74e683f9--d2a4--40a0--95db--85d1504fd961
                                253:4    0   10G  0 lvm  /var/lib/kubelet/pods/57ded9fb-4c82-4668-b77b-7dc02ba05fc2/volumes/kubernetes.io~
    # vgs
    > vgs
      VG            #PV #LV #SN Attr   VSize    VFree
      carina-vg-hdd   1   2   0 wz--n- <200.00g 189.97g
    # lvs
    > lvs
      LV                                              VG            Attr       LSize  Pool                                          Origin Data%  Meta%
      thin-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961   carina-vg-hdd twi-aotz-- 10.00g                                                      2.86   11.85
      volume-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961 carina-vg-hdd Vwi-aotz-- 10.00g thin-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961        2.86
    # inspect the mounted disk with the LVM CLI tools
    > pvs
      PV         VG            Fmt  Attr PSize    PFree
      /dev/sdb   carina-vg-hdd lvm2 a--  <200.00g 189.97g
    > pvdisplay
      --- Physical volume ---
      PV Name               /dev/sdb
      VG Name               carina-vg-hdd
      PV Size               <200.00 GiB / not usable 3.00 MiB
      Allocatable           yes
      PE Size               4.00 MiB
      Total PE              51199
      Free PE               48633
      Allocated PE          2566
      PV UUID               Wl6ula-kD54-Mj5H-ZiBc-aHPB-6RHI-mXs9R9
    > lvs
      LV                                              VG            Attr       LSize  Pool                                          Origin Data%  Meta%
      thin-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961   carina-vg-hdd twi-aotz-- 10.00g                                                      2.86   11.85
      volume-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961 carina-vg-hdd Vwi-aotz-- 10.00g thin-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961        2.86
    > lvdisplay
      --- Logical volume ---
      LV Name                thin-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961
      VG Name                carina-vg-hdd
      LV UUID                kB7DFm-dl3y-lmop-p7os-3EW6-4Toy-slX7qn
      LV Write Access        read/write (activated read only)
      LV Creation host, time u20-w2, 2021-11-09 05:31:18 +0000
      LV Pool metadata       thin-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961_tmeta
      LV Pool data           thin-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961_tdata
      LV Status              available
      # open                 2
      LV Size                10.00 GiB
      Allocated pool data    2.86%
      Allocated metadata     11.85%
      Current LE             2560
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:2

      --- Logical volume ---
      LV Path                /dev/carina-vg-hdd/volume-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961
      LV Name                volume-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961
      VG Name                carina-vg-hdd
      LV UUID                vhDYe9-KzPc-qqJk-2o1f-TlCv-0TDL-643b8r
      LV Write Access        read/write
      LV Creation host, time u20-w2, 2021-11-09 05:31:19 +0000
      LV Pool name           thin-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961
      LV Status              available
      # open                 1
      LV Size                10.00 GiB
      Mapped size            2.86%
      Current LE             2560
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:4

Exec into the pod, add some custom content, and see where it lands on disk:

    # inside the container, create a custom page
    > kubectl exec -ti carina-deployment-big-6b78fb9fd-mwf8g -- /bin/bash
    /# cd /usr/share/nginx/html/
    /usr/share/nginx/html# ls
    lost+found
    /usr/share/nginx/html# echo "hello carina" > index.html
    /usr/share/nginx/html# curl localhost
    hello carina
    /usr/share/nginx/html# echo "test carina" > test.html
    /usr/share/nginx/html# curl localhost/test.html
    test carina
    # on the node, go to the mount point and find the files just created
    > df -h
    ...
    /dev/carina/volume-pvc-74e683f9-d2a4-40a0-95db-85d1504fd961 9.8G 37M 9.7G 1% /var/lib/kubelet/pods/57ded9fb-4c82-4668-b77b-7dc02ba05fc2/volumes/kubernetes.io~csi/pvc-74e683f9-d2a4-40a0-95db-85d1504fd961/mount
    > cd /var/lib/kubelet/pods/57ded9fb-4c82-4668-b77b-7dc02ba05fc2/volumes/kubernetes.io~csi/pvc-74e683f9-d2a4-40a0-95db-85d1504fd961/mount
    > ll
    total 32
    drwxrwsrwx 3 root root  4096 Nov  9 05:54 ./
    drwxr-x--- 3 root root  4096 Nov  9 05:31 ../
    -rw-r--r-- 1 root root    13 Nov  9 05:54 index.html
    drwx------ 2 root root 16384 Nov  9 05:31 lost+found/
    -rw-r--r-- 1 root root    12 Nov  9 05:54 test.html

Deploying velero

1. Download the velero CLI; version 1.7.0 is used here.

    > wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.0/velero-v1.7.0-linux-amd64.tar.gz
    > tar -xzvf velero-v1.7.0-linux-amd64.tar.gz
    > cd velero-v1.7.0-linux-amd64 && cp velero /usr/local/bin/
    > velero
    Velero is a tool for managing disaster recovery, specifically for Kubernetes
    cluster resources. It provides a simple, configurable, and operationally robust
    way to back up your application state and associated data.

    If you're familiar with kubectl, Velero supports a similar model, allowing you to
    execute commands such as 'velero get backup' and 'velero create schedule'. The same
    operations can also be performed as 'velero backup get' and 'velero schedule create'.

    Usage:
      velero [command]

    Available Commands:
      backup            Work with backups
      backup-location   Work with backup storage locations
      bug               Report a Velero bug
      client            Velero client related commands
      completion        Generate completion script
      create            Create velero resources
      debug             Generate debug bundle
      delete            Delete velero resources
      describe          Describe velero resources
      get               Get velero resources
      help              Help about any command
      install           Install Velero
      plugin            Work with plugins
      restic            Work with restic
      restore           Work with restores
      schedule          Work with schedules
      snapshot-location Work with snapshot locations
      uninstall         Uninstall Velero
      version           Print the velero version and associated image

2. Deploy minio object storage as velero's backend

Before installing the velero server, a backend store must be ready. velero supports many backends; see https://velero.io/docs/v1.7/supported-providers/[2] for the list. Anything that speaks the AWS S3 API will work, so here we use minio, an S3-compatible service. It is deployed as follows, itself using the carina storageclass for its disk:

    # Reference: https://velero.io/docs/v1.7/contributions/minio/
    # everything is deployed in the minio namespace
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: minio

    # claim 8G of disk for minio's backend storage, provisioned by the carina storageclass
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: minio-storage-pvc
      namespace: minio
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi
      storageClassName: csi-carina-sc # the carina storageclass
      volumeMode: Filesystem

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: minio
      name: minio
      labels:
        component: minio
    spec:
      strategy:
        type: Recreate
      selector:
        matchLabels:
          component: minio
      template:
        metadata:
          labels:
            component: minio
        spec:
          volumes:
            - name: storage
              persistentVolumeClaim:
                claimName: minio-storage-pvc
                readOnly: false
            - name: config
              emptyDir: {}
          containers:
            - name: minio
              image: minio/minio:latest
              imagePullPolicy: IfNotPresent
              args:
                - server
                - /storage
                - --config-dir=/config
                - --console-address ":9001" # port for the web console
              env:
                - name: MINIO_ACCESS_KEY
                  value: "minio"
                - name: MINIO_SECRET_KEY
                  value: "minio123"
              ports:
                - containerPort: 9000
                - containerPort: 9001
              volumeMounts:
                - name: storage
                  mountPath: "/storage"
                - name: config
                  mountPath: "/config"

    # a NodePort service exposes both the console and the API externally
    ---
    apiVersion: v1
    kind: Service
    metadata:
      namespace: minio
      name: minio
      labels:
        component: minio
    spec:
      # ClusterIP is recommended for production environments.
      # Change to NodePort if needed per documentation,
      # but only if you run Minio in a test/trial environment, for example with Minikube.
      type: NodePort
      ports:
        - name: console
          port: 9001
          targetPort: 9001
        - name: api
          port: 9000
          targetPort: 9000
          protocol: TCP
      selector:
        component: minio

    # Job that creates velero's bucket at startup
    ---
    apiVersion: batch/v1
    kind: Job
    metadata:
      namespace: minio
      name: minio-setup
      labels:
        component: minio
    spec:
      template:
        metadata:
          name: minio-setup
        spec:
          restartPolicy: OnFailure
          volumes:
            - name: config
              emptyDir: {}
          containers:
            - name: mc
              image: minio/mc:latest
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - "mc --config-dir=/config config host add velero http://minio:9000 minio minio123 && mc --config-dir=/config mb -p velero/velero"
              volumeMounts:
                - name: config
                  mountPath: "/config"

Once deployed, minio looks like this:

    > kubectl get all -n minio
    NAME                         READY   STATUS    RESTARTS   AGE
    pod/minio-686755b769-k6625   1/1     Running   0          6d15h

    NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                          AGE
    service/minio   NodePort   10.98.252.130   <none>        36369:31943/TCP,9000:30436/TCP   6d15h

    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/minio   1/1     1            1           6d15h

    NAME                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/minio-686755b769   1         1         1       6d15h
    replicaset.apps/minio-c9c844f67    0         0         0       6d15h

    NAME                    COMPLETIONS   DURATION   AGE
    job.batch/minio-setup   1/1           114s       6d15h

Open the minio console and log in with the account defined at deployment time:

[Figure: minio login]

[Figure: minio dashboard]
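If the NodePort is not reachable from your browser, forwarding the console port is an alternative way in:

    # forward the console locally, then open http://localhost:9001
    > kubectl port-forward -n minio svc/minio 9001:9001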

3. Installing the velero server

    # note --use-restic, which enables PV data backup
    # note the last line: the in-cluster service DNS name http://minio.minio.svc:9000 is used
    velero install \
        --provider aws \
        --plugins velero/velero-plugin-for-aws:v1.2.1 \
        --bucket velero \
        --secret-file ./minio-cred \
        --namespace velero \
        --use-restic \
        --use-volume-snapshots=false \
        --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.minio.svc:9000
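The ./minio-cred file referenced above holds the minio credentials in AWS-credentials format; with the access/secret keys defined in the minio deployment it would contain:

    [default]
    aws_access_key_id = minio
    aws_secret_access_key = minio123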

After installation it looks like this:

    # Besides velero itself there is restic, the core component that backs up PV data.
    # Every node's PVs must be reachable for backup, so restic runs as a daemonset.
    # Details: https://velero.io/docs/v1.7/restic/
    > kubectl get all -n velero
    NAME                          READY   STATUS    RESTARTS   AGE
    pod/restic-g5q5k              1/1     Running   0          6d15h
    pod/restic-jdk7h              1/1     Running   0          6d15h
    pod/restic-jr8f7              1/1     Running   0          5d22h
    pod/velero-6979cbd56b-s7v99   1/1     Running   0          5d21h

    NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/restic   3         3         3       3            3           <none>          6d15h

    NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/velero   1/1     1            1           6d15h

    NAME                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/velero-6979cbd56b   1         1         1       6d15h

Backing up an application and its data with velero

1. Back up the PV-backed test application with the velero backup command

    # --selector picks the application by label
    # --default-volumes-to-restic explicitly backs up PV data with restic
    > velero backup create nginx-pv-backup --selector app=web-server-big --default-volumes-to-restic
    I1109 09:14:31.380431 1527737 request.go:655] Throttling request took 1.158837643s, request: GET:https://10.140.0.8:6443/apis/networking.istio.io/v1beta1?timeout=32s
    Backup request "nginx-pv-backup" submitted successfully.
    Run `velero backup describe nginx-pv-backup` or `velero backup logs nginx-pv-backup` for more details.
    # fetching the backup logs fails, complaining the minio endpoint is unreachable; see the note below
    > velero backup logs nginx-pv-backup
    I1109 09:15:04.840199 1527872 request.go:655] Throttling request took 1.146201139s, request: GET:https://10.140.0.8:6443/apis/networking.k8s.io/v1beta1?timeout=32s
    An error occurred: Get "http://minio.minio.svc:9000/velero/backups/nginx-pv-backup/nginx-pv-backup-logs.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minio%2F20211109%2Fminio%2Fs3%2Faws4_request&X-Amz-Date=20211109T091506Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=b48c1988101b544329effb67aee6a7f83844c4630eb1d19db30f052e3603b9b2": dial tcp: lookup minio.minio.svc on 127.0.0.53:53: no such host
    # describe shows the full backup details;
    # note that backups expire, with a default TTL of one month
    > velero backup describe nginx-pv-backup
    I1109 09:15:25.834349 1527945 request.go:655] Throttling request took 1.147122392s, request: GET:https://10.140.0.8:6443/apis/pkg.crossplane.io/v1beta1?timeout=32s
    Name:         nginx-pv-backup
    Namespace:    velero
    Labels:       velero.io/storage-location=default
    Annotations:  velero.io/source-cluster-k8s-gitversion=v1.19.14
                  velero.io/source-cluster-k8s-major-version=1
                  velero.io/source-cluster-k8s-minor-version=19

    Phase:  Completed

    Errors:    0
    Warnings:  1

    Namespaces:
      Included:  *
      Excluded:  <none>

    Resources:
      Included:        *
      Excluded:        <none>
      Cluster-scoped:  auto

    Label selector:  app=web-server-big

    Storage Location:  default

    Velero-Native Snapshot PVs:  auto

    TTL:  720h0m0s

    Hooks:  <none>

    Backup Format Version:  1.1.0

    Started:    2021-11-09 09:14:32 +0000 UTC
    Completed:  2021-11-09 09:15:05 +0000 UTC

    Expiration:  2021-12-09 09:14:32 +0000 UTC

    Total items to be backed up:  9
    Items backed up:              9

    Velero-Native Snapshots: <none included>

    Restic Backups (specify --details for more information):
      Completed:  1
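The `velero backup logs` failure above is less mysterious than it looks: the velero CLI runs on the host, where the in-cluster DNS name minio.minio.svc does not resolve. The velero minio guide suggests adding a publicUrl that is reachable from outside the cluster to the backup storage location; a sketch, with the node address and port left as placeholders:

    # edit the default backup storage location and, under spec.config, add a
    # publicUrl the host can reach, e.g. the NodePort published by the minio service
    > kubectl -n velero edit backupstoragelocation default
    #   config:
    #     publicUrl: http://<node-ip>:<nodeport>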

2. Open the minio console and inspect the backup data

Under the backups folder there is an nginx-pv-backup folder holding a number of compressed files; worth dissecting when there is time.


The restic folder holds a pile of data; from what I've found, it is stored encrypted, so the backed-up PV data cannot be inspected directly. Next we test restore, which will prove the data was really backed up.


Restoring an application and its data with velero

1. Delete the test application and its data

    # delete nginx deployment
    > kubectl delete deploy carina-deployment-big
    # delete nginx pvc, then check that the pv has been released
    > kubectl delete pvc csi-carina-pvc-big

2. Restore the test application and its data with velero restore

    # restore
    > velero restore create --from-backup nginx-pv-backup
    Restore request "nginx-pv-backup-20211109094323" submitted successfully.
    Run `velero restore describe nginx-pv-backup-20211109094323` or `velero restore logs nginx-pv-backup-20211109094323` for more details.
    > velero restore describe nginx-pv-backup-20211109094323
    I1109 09:43:50.028122 1534182 request.go:655] Throttling request took 1.161022334s, request: GET:https://10.140.0.8:6443/apis/networking.k8s.io/v1?timeout=32s
    Name:         nginx-pv-backup-20211109094323
    Namespace:    velero
    Labels:       <none>
    Annotations:  <none>

    Phase:  InProgress

    Estimated total items to be restored:  9
    Items restored so far:                 9

    Started:    2021-11-09 09:43:23 +0000 UTC
    Completed:  <n/a>

    Backup:  nginx-pv-backup

    Namespaces:
      Included:  all namespaces found in the backup
      Excluded:  <none>

    Resources:
      Included:        *
      Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
      Cluster-scoped:  auto

    Namespace mappings:  <none>

    Label selector:  <none>

    Restore PVs:  auto

    Restic Restores (specify --details for more information):
      New:  1

    Preserve Service NodePorts:  auto

    > velero restore get
    NAME                             BACKUP            STATUS      STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
    nginx-pv-backup-20211109094323   nginx-pv-backup   Completed   2021-11-09 09:43:23 +0000 UTC   2021-11-09 09:44:14 +0000 UTC   0        2          2021-11-09 09:43:23 +0000 UTC   <none>
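As the describe output notes, pass --details to see the per-volume restic restore information:

    # show per-volume restic restore details for this restore
    > velero restore describe nginx-pv-backup-20211109094323 --details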

3. Verify that the application and its data are back

    # check that the pod, pvc, and pv were recreated automatically
    > kubectl get po
    NAME                                    READY   STATUS    RESTARTS   AGE
    carina-deployment-big-6b78fb9fd-mwf8g   1/1     Running   0          93s
    kubewatch-5ffdb99f79-87qbx              2/2     Running   0          19d
    static-pod-u20-w1                       1/1     Running   15         235d
    > kubectl get pvc
    NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
    csi-carina-pvc-big   Bound    pvc-e81017c5-0845-4bb1-8483-a31666ad3435   10Gi       RWO            csi-carina-sc   100s
    > kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS    REASON   AGE
    pvc-a07cac5e-c38b-454d-a004-61bf76be6516   8Gi        RWO            Delete           Bound    minio/minio-storage-pvc      csi-carina-sc            6d17h
    pvc-e81017c5-0845-4bb1-8483-a31666ad3435   10Gi       RWO            Delete           Bound    default/csi-carina-pvc-big   csi-carina-sc            103s
    > kubectl get deploy
    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    carina-deployment-big   1/1     1            1           2m20s
    kubewatch               1/1     1            1           327d
    # exec into the container and verify the custom pages survived
    > kubectl exec -ti carina-deployment-big-6b78fb9fd-mwf8g -- /bin/bash
    /# cd /usr/share/nginx/html/
    /usr/share/nginx/html# ls -l
    total 24
    -rw-r--r-- 1 root root    13 Nov  9 05:54 index.html
    drwx------ 2 root root 16384 Nov  9 05:31 lost+found
    -rw-r--r-- 1 root root    12 Nov  9 05:54 test.html
    /usr/share/nginx/html# curl localhost
    hello carina
    /usr/share/nginx/html# curl localhost/test.html
    test carina

4. A closer look at the restore process

Below are the details of the restored test pod. Note the new init container, named restic-wait, which itself mounts the restored volume.

    > kubectl describe po carina-deployment-big-6b78fb9fd-mwf8g
    Name:         carina-deployment-big-6b78fb9fd-mwf8g
    Namespace:    default
    Priority:     0
    Node:         u20-w2/10.140.0.13
    Start Time:   Tue, 09 Nov 2021 09:43:47 +0000
    Labels:       app=web-server-big
                  pod-template-hash=6b78fb9fd
                  velero.io/backup-name=nginx-pv-backup
                  velero.io/restore-name=nginx-pv-backup-20211109094323
    Annotations:  <none>
    Status:       Running
    IP:           10.0.2.227
    IPs:
      IP:  10.0.2.227
    Controlled By:  ReplicaSet/carina-deployment-big-6b78fb9fd
    Init Containers:
      restic-wait:
        Container ID:  containerd://ec0ecdf409cc60790fe160d4fc3ba0639bbb1962840622dc20dcc6ccb10e9b5a
        Image:         velero/velero-restic-restore-helper:v1.7.0
        Image ID:      docker.io/velero/velero-restic-restore-helper@sha256:6fce885ce23cf15b595b5d3b034d02a6180085524361a15d3486cfda8022fa03
        Port:          <none>
        Host Port:     <none>
        Command:
          /velero-restic-restore-helper
        Args:
          f4ddbfca-e3b4-4104-b36e-626d29e99334
        State:          Terminated
          Reason:       Completed
          Exit Code:    0
          Started:      Tue, 09 Nov 2021 09:44:12 +0000
          Finished:     Tue, 09 Nov 2021 09:44:14 +0000
        Ready:          True
        Restart Count:  0
        Limits:
          cpu:     100m
          memory:  128Mi
        Requests:
          cpu:     100m
          memory:  128Mi
        Environment:
          POD_NAMESPACE:  default (v1:metadata.namespace)
          POD_NAME:       carina-deployment-big-6b78fb9fd-mwf8g (v1:metadata.name)
        Mounts:
          /restores/mypvc-big from mypvc-big (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-kw7rf (ro)
    Containers:
      web-server:
        Container ID:   containerd://f3f49079dcd97ac8f65a92f1c42edf38967a61762b665a5961b4cb6e60d13a24
        Image:          nginx:latest
        Image ID:       docker.io/library/nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
        Port:           <none>
        Host Port:      <none>
        State:          Running
          Started:      Tue, 09 Nov 2021 09:44:14 +0000
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /usr/share/nginx/html from mypvc-big (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-kw7rf (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             True
      ContainersReady   True
      PodScheduled      True
    Volumes:
      mypvc-big:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  csi-carina-pvc-big
        ReadOnly:   false
      default-token-kw7rf:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-kw7rf
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                     node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
      Type     Reason                  Age    From                     Message
      ----     ------                  ----   ----                     -------
      Warning  FailedScheduling        7m20s  carina-scheduler         pod 6b8cf52c-7440-45be-a117-8f29d1a37f2c is in the cache, so can't be assumed
      Warning  FailedScheduling        7m20s  carina-scheduler         pod 6b8cf52c-7440-45be-a117-8f29d1a37f2c is in the cache, so can't be assumed
      Normal   Scheduled               7m18s  carina-scheduler         Successfully assigned default/carina-deployment-big-6b78fb9fd-mwf8g to u20-w2
      Normal   SuccessfulAttachVolume  7m18s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-e81017c5-0845-4bb1-8483-a31666ad3435"
      Normal   Pulling                 6m59s  kubelet                  Pulling image "velero/velero-restic-restore-helper:v1.7.0"
      Normal   Pulled                  6m54s  kubelet                  Successfully pulled image "velero/velero-restic-restore-helper:v1.7.0" in 5.580228382s
      Normal   Created                 6m54s  kubelet                  Created container restic-wait
      Normal   Started                 6m53s  kubelet                  Started container restic-wait
      Normal   Pulled                  6m51s  kubelet                  Container image "nginx:latest" already present on machine
      Normal   Created                 6m51s  kubelet                  Created container web-server
      Normal   Started                 6m51s  kubelet                  Started container web-server
If you'd like to dig into the full restore flow, see the official docs at https://velero.io/docs/v1.7/restic/#customize-restore-helper-container[3]; the implementation lives at https://github.com/vmware-tanzu/velero/blob/main/pkg/restore/restic_restore_action.go[4].

The walkthrough above showed, step by step, how carina and velero together provide automated container storage management plus fast data backup and restore. Only part of the feature set was demonstrated (advanced capabilities such as disk read/write throttling and selective backup and restore are left for you to try), but the result stands: stateful applications can be deployed, backed up, and restored quickly, giving Kubernetes clusters a simple option for off-site disaster recovery. I plan to back up my own WordPress blog this way too.
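For a recurring backup like that, a velero schedule does the job; a sketch, where the app=wordpress label, cron expression, and TTL are illustrative:

    # nightly backup of everything labeled app=wordpress, kept for 30 days
    > velero schedule create wordpress-daily \
        --schedule "0 2 * * *" \
        --selector app=wordpress \
        --default-volumes-to-restic \
        --ttl 720h0m0s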

Original article: https://mp.weixin.qq.com/s/8J-o6YBNQY6J_E_6rmSUxw
