[TOC]
System: CentOS 7, with /data mounted on a non-system partition.
Two nodes: 192.168.105.97 and 192.168.105.98.
Install with yum:
```sh
yum install centos-release-gluster
yum -y install glusterfs glusterfs-fuse glusterfs-server
```
The centos-release-gluster package adds the CentOS-Gluster-4.1.repo repository, from which the GlusterFS packages above are installed.
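Optionally, confirm the repository is visible to yum before installing the GlusterFS packages:

```sh
# The CentOS Gluster repo should appear among the enabled repositories
yum repolist enabled | grep -i gluster
```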
Start the service and enable it at boot:
```sh
systemctl start glusterd
systemctl enable glusterd
```
GlusterFS nodes communicate with each other over port 24007, so the firewall must allow this port.
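A minimal firewalld sketch for opening the required ports; the brick-port range below is an assumption sized at 100 ports, since each brick listens on its own port starting at 49152:

```sh
# Run on every GlusterFS node (assuming firewalld is in use)
firewall-cmd --permanent --add-port=24007/tcp        # management daemon
firewall-cmd --permanent --add-port=49152-49251/tcp  # brick ports (one per brick)
firewall-cmd --reload
```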
Configure /etc/hosts on all nodes:
```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# k8s
192.168.105.92 lab1 # master1
192.168.105.93 lab2 # master2
192.168.105.94 lab3 # master3
192.168.105.95 lab4 # node4
192.168.105.96 lab5 # node5
# glusterfs
192.168.105.98 glu1 # glusterfs1
192.168.105.97 harbor1 # harbor1
```
Run the following on host glu1:

```sh
# Add the node to the cluster; the machine running the command does not need to probe itself
gluster peer probe harbor1
```
Check the cluster status (the nodes should see each other's information):
```sh
gluster peer status
```
```
Number of Peers: 1

Hostname: harbor1
Uuid: ebedc57b-7c71-4ecb-b92e-a7529b2fee31
State: Peer in Cluster (Connected)
```
GlusterFS volume types (the upstream documentation illustrates each with diagrams):

- Distributed volume (the default; files are spread across bricks):

```sh
gluster volume create test-volume server1:/exp1 server2:/exp2
```

- Replicated volume (each file is mirrored on every brick):

```sh
gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
```

- Arbiter volume (a 3-way replica whose third brick stores only metadata):

```sh
gluster volume create test-volume replica 3 arbiter 1 transport tcp server1:/exp1 server2:/exp2 server3:/exp3
```

- Distributed replicated volume (files are distributed across replica sets):

```sh
gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
```

- Dispersed volume (erasure-coded across bricks):

```sh
gluster volume create test-volume disperse 3 server{1..3}:/bricks/test-volume
```

- Distributed dispersed volume (files are distributed across dispersed subvolumes):

```sh
gluster volume create <volname> disperse 3 server1:/brick{1..6}
```
Create and start the volume used in this article:

```sh
gluster volume create k8s_volume 192.168.105.98:/data/glusterfs/dev/k8s_volume
gluster volume start k8s_volume
gluster volume status
gluster volume info
```
Some GlusterFS tuning options:
```sh
# Enable quota on the specified volume
gluster volume quota k8s-volume enable
# Limit the quota of the specified volume
gluster volume quota k8s-volume limit-usage / 1TB
# Set the cache size (default 32MB)
gluster volume set k8s-volume performance.cache-size 4GB
# Set the number of IO threads; too many can crash the process
gluster volume set k8s-volume performance.io-thread-count 16
# Set the network ping timeout (default 42s)
gluster volume set k8s-volume network.ping-timeout 10
# Set the write-behind window size (default 1MB)
gluster volume set k8s-volume performance.write-behind-window-size 1024MB
```
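To verify what actually took effect, a quick sketch (assuming GlusterFS 3.7 or later, where `gluster volume get` is available; note the tuning examples above use a volume named k8s-volume, distinct from the k8s_volume created earlier):

```sh
# Show the effective values of the options set above
gluster volume get k8s-volume all | grep -E 'cache-size|io-thread-count|ping-timeout|write-behind-window'
# Show the quota limits configured on the volume
gluster volume quota k8s-volume list
```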
### 3.1 Using the GlusterFS volume on a physical machine
```sh
yum install -y centos-release-gluster
yum install -y glusterfs glusterfs-fuse fuse fuse-libs openib libibverbs
mkdir -p /tmp/test
mount -t glusterfs 192.168.105.98:k8s_volume /tmp/test  # usage is similar to an NFS mount
```
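To make the mount persist across reboots, a sketch of an /etc/fstab entry (the `_netdev` option is an assumption worth keeping so mounting waits until the network is up):

```sh
# Append a GlusterFS entry to /etc/fstab and mount everything listed there
echo '192.168.105.98:k8s_volume /tmp/test glusterfs defaults,_netdev 0 0' >> /etc/fstab
mount -a
```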
### 3.2 Using the GlusterFS volume in Kubernetes

Perform the following operations on a Kubernetes master node.
```sh
vim /etc/kubernetes/glusterfs/glusterfs-endpoints.json
```
{ "kind": "Endpoints", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "subsets": [ { "addresses": [ { "ip": "192.168.105.98" } ], "ports": [ { "port": 1 } ] }, { "addresses": [ { "ip": "192.168.105.97" } ], "ports": [ { "port": 1 } ] } ]}
Note: the subsets field should be populated with the addresses of the GlusterFS cluster nodes. Any valid value (1 to 65535) may be used in the port field.
```sh
kubectl apply -f /etc/kubernetes/glusterfs/glusterfs-endpoints.json
kubectl get endpoints
```
```
NAME                ENDPOINTS                           AGE
glusterfs-cluster   192.168.105.97:1,192.168.105.98:1
```
We also need to create a Service for these endpoints so that they persist. The Service is added without a selector, which tells Kubernetes that its endpoints are managed manually.
```sh
vim glusterfs-service.json
```
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "spec": { "ports": [ {"port": 1} ] }}
```sh
kubectl apply -f glusterfs-service.json
```
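As a quick check, the selector-less Service should now front the hand-created endpoints:

```sh
# The Endpoints line should list both GlusterFS nodes on port 1
kubectl describe svc glusterfs-cluster | grep -i endpoints
```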
Create the glusterfs-pv.yaml file, specifying the storage capacity and access mode:
```sh
vim glusterfs-pv.yaml
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s_volume"
    readOnly: false
```
```sh
kubectl apply -f glusterfs-pv.yaml
kubectl get pv
```
```
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   10Gi       RWX            Retain           Available                                   21s
```
Create the glusterfs-pvc.yaml file, specifying the requested resource size:

```sh
vim glusterfs-pvc.yaml
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```
```sh
kubectl apply -f glusterfs-pvc.yaml
kubectl get pvc
```
```
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc001   Bound    pv001    10Gi       RWX                           44s
```
As an example, create an nginx Deployment that mounts the PVC at /usr/share/nginx/html inside the containers:

```sh
vim glusterfs-nginx-deployment.yaml
```
```yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-dm
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: storage001
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: storage001
          persistentVolumeClaim:
            claimName: pvc001
```
```sh
kubectl create -f glusterfs-nginx-deployment.yaml
# Check whether the deployment succeeded
kubectl get pod | grep nginx-dm
```
```
nginx-dm-c8c895d96-hfdsz   1/1     Running   0          36s
nginx-dm-c8c895d96-jrfbx   1/1     Running   0          36s
```
Verify the result:
```sh
# Check the mounts
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-5h649 -- df -h | grep nginx
192.168.105.97:k8s_volume  1000G   11G  990G   2% /usr/share/nginx/html
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-zf6ch -- df -h | grep nginx
192.168.105.97:k8s_volume  1000G   11G  990G   2% /usr/share/nginx/html
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-5h649 -- touch /usr/share/nginx/html/ygqygq2
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-5h649 -- ls -lt /usr/share/nginx/html/
total 1
-rw-r--r--. 1 root root 4 Aug 13 09:43 ygqygq2
-rw-r--r--. 1 root root 5 Aug 13 09:34 ygqygq2.txt
[root@lab1 glusterfs]# kubectl exec -it nginx-dm-c8c895d96-zf6ch -- ls -lt /usr/share/nginx/html/
total 1
-rw-r--r--. 1 root root 4 Aug 13 09:43 ygqygq2
-rw-r--r--. 1 root root 5 Aug 13 09:34 ygqygq2.txt
```
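Since the volume's only brick lives at /data/glusterfs/dev/k8s_volume on glu1, a quick server-side check (a sketch based on the brick path used when creating k8s_volume above) shows the same files:

```sh
# On glu1 (192.168.105.98), list the brick directory directly
ls -l /data/glusterfs/dev/k8s_volume/
```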
The deployment is now complete.
In this article GlusterFS is installed on the physical system rather than inside Kubernetes, so it has to be maintained by hand; a follow-up article will cover installing and using GlusterFS inside Kubernetes. Choose the GlusterFS volume type that fits your workload. Note that with a distributed volume, a file under a pod's mount path may live on any node of the volume, not necessarily the node shown by `df -h`.
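To see exactly which brick holds a given file, one option (a sketch; run it against a file on a GlusterFS FUSE mount, such as the /tmp/test mount from section 3.1) is the pathinfo virtual xattr exposed by the client:

```sh
# Prints the backend brick (host:/path) that stores this file
getfattr -n trusted.glusterfs.pathinfo -e text /tmp/test/ygqygq2
```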
Reposted from: https://blog.51cto.com/ygqygq2/2160958