Configuring a StorageClass with CephFS on k8s


Generate the ceph-secret

$ ceph auth get-key client.admin | base64
QVFBYktSeGVCRzZNRWhBQUJLRVM0ZjhVcXpFRENkUEU2V2c9PQ==
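One pitfall worth checking before pasting the key into a Secret: the value must be the base64 of the bare key string, with no trailing newline encoded into it. A quick sanity check of the round trip (the key below is a placeholder, not a real cluster key):

```shell
# Placeholder key for illustration -- substitute the output of
# `ceph auth get-key client.admin` from your own cluster.
key='AQBexampleplaceholderkey=='
# printf (unlike a plain echo) emits no trailing newline.
encoded=$(printf '%s' "$key" | base64)
# Decoding must round-trip back to the original key string.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```

If you encode with `echo` instead of `printf '%s'`, a newline ends up inside the Secret and Ceph authentication fails with a hard-to-diagnose error.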

Create the ceph-secret in k8s

$ kubectl create ns cephfs
$ vim ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: cephfs
data:
  key: QVFBYktSeGVCRzZNRWhBQUJLRVM0ZjhVcXpFRENkUEU2V2c9PQ==
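The Secret has to be applied before the provisioner can use it; as a sketch of the missing step:

```shell
kubectl apply -f ceph-secret.yaml
# The new secret should appear in the cephfs namespace.
kubectl -n cephfs get secret
```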

Create the cephfs-provisioner

$ vim cephfs-provisioner.yml

---
kind: Namespace
apiVersion: v1
metadata:
  name: cephfs

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        - name: PROVISIONER_SECRET_NAMESPACE
          value: cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccountName: cephfs-provisioner

  • The image bundles a Ceph client, so the client version inside the image must match the Ceph cluster you are connecting to. The latest official image only goes up to 1.13, which is already quite old. I have published an image I built myself with a 1.15 Ceph client; replace the image in the Deployment above with registry.cn-hangzhou.aliyuncs.com/liweiqiang/ceph:provisioner-1.15.2
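To verify the client/server match the note describes, you can compare the two versions directly (assuming the Deployment above is running and you have access to a Ceph admin node):

```shell
# Ceph client version baked into the provisioner image
kubectl -n cephfs exec deploy/cephfs-provisioner -- ceph --version

# Versions of the daemons running in the cluster
# (run on a node with an admin keyring)
ceph versions
```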

Apply

$ kubectl apply -f cephfs-provisioner.yml
namespace/cephfs created
clusterrole.rbac.authorization.k8s.io/cephfs-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
role.rbac.authorization.k8s.io/cephfs-provisioner created
rolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
serviceaccount/cephfs-provisioner created
deployment.apps/cephfs-provisioner created

Check that the cephfs-provisioner is running

$ kubectl get pods -l app=cephfs-provisioner -n cephfs
NAME                                  READY   STATUS    RESTARTS   AGE
cephfs-provisioner-7b77478ceev-7nnxs   1/1     Running   0          84s
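If the pod is not Running, or PVCs later stay Pending, the provisioner's logs are the first place to look:

```shell
# Tail the provisioner logs for auth or mount errors.
kubectl -n cephfs logs deploy/cephfs-provisioner --tail=20
```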

Create the CephFS StorageClass

$ vim cephfs-sc.yml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 10.10.10.11:6789,10.10.10.12:6789,10.10.10.13:6789
    adminId: admin
    adminSecretName: ceph-admin-secret
    adminSecretNamespace: cephfs
    claimRoot: /pvc-volumes  # directory under the CephFS root

Apply

$ kubectl apply -f cephfs-sc.yml 
storageclass.storage.k8s.io/cephfs created

查看SC

$ kubectl get sc
NAME       PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cephfs     ceph.com/cephfs   Delete          Immediate           false                  2m23s
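Optionally, you can make this the cluster's default StorageClass, so PVCs that omit storageClassName are also served by CephFS. This uses the standard default-class annotation:

```shell
kubectl patch storageclass cephfs \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```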

Create a test PVC

$ vim cephfs-claim.yml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi

$ kubectl apply -f cephfs-claim.yml
persistentvolumeclaim/cephfs-claim1 created

$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-claim1     Bound    pvc-1bfa81b6-2c0b-47fa-9656-92dc52f69c52   1Gi        RWO            cephfs         87s
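To confirm the volume actually mounts and is writable, a minimal test pod can reference the claim (pod name and image here are illustrative):

```yaml
# Hypothetical test pod mounting cephfs-claim1
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: cephfs-claim1
```

After applying it, `kubectl exec cephfs-test-pod -- cat /data/test.txt` should print the file written to the CephFS-backed volume.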