EFS persistent storage in Amazon EKS

Time: 2022-01-23

Author: SRE Operation and Maintenance Blog
Blog address: https://www.cnsre.cn/
Article address: https://www.cnsre.cn/posts/220110850573/
Related topics: https://www.cnsre.cn/tags/eks/


Learning objectives

  • Deploy the Amazon EFS CSI driver in EKS
  • Verify that the EFS CSI driver is working properly
  • Create EFS-based static and dynamic storage

Prerequisites

Create an IAM policy

Create an IAM policy and attach it to an IAM role. This policy will allow the Amazon EFS CSI driver to interact with the file system.

1. View the IAM policy document below, or refer to the upstream policy document.

{{< notice warning "Warning" >}}
It is recommended to refer to the upstream policy document so that you get the latest version of the policy.
{{< /notice >}}

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:DescribeAccessPoints",
        "elasticfilesystem:DescribeFileSystems"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:CreateAccessPoint"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:RequestTag/efs.csi.aws.com/cluster": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "elasticfilesystem:DeleteAccessPoint",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/efs.csi.aws.com/cluster": "true"
        }
      }
    }
  ]
}

2. Create the policy in IAM

In Identity and Access Management (IAM), click Policies, then click Create policy.

Click the JSON tab, paste the IAM policy above, and click Next: Tags.
On the next tab, fill in tags according to your own situation, then click Next: Review.
Fill in the name AmazonEKS_EFS_CSI_Driver_Policy.
{{< notice warning "Warning" >}}
You can change AmazonEKS_EFS_CSI_Driver_Policy to a different name, but if you do, make sure to use the new name in all subsequent steps.
{{< /notice >}}
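
If you prefer the AWS CLI to the console, the same policy can be created with a single command. This is a sketch; it assumes the JSON above was saved locally as iam-policy.json.

aws iam create-policy \
    --policy-name AmazonEKS_EFS_CSI_Driver_Policy \
    --policy-document file://iam-policy.json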
EFS persistent storage in Amazon eks

Attach the EFS policy to the EKS node role

Attach the EFS policy we just created, AmazonEKS_EFS_CSI_Driver_Policy, to the EKS node role so that the EKS nodes have permission to use EFS.

{{< notice warning "Warning" >}}
If you created the EKS cluster with eksctl, there will be a role named eksctl-<eks-name>-nodegroup-NodeInstanceRole-xxxxxxxxx among your roles.
{{< /notice >}}

In Roles, search for node, then click eksctl-<eks-name>-nodegroup-NodeInstanceRole-xxxxxxxxx.
In the role, click Attach policies.
Search for the EFS policy created earlier, AmazonEKS_EFS_CSI_Driver_Policy, select it, and click Attach policy at the bottom.
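
The same attachment can be done from the CLI. This is a sketch; replace the role name suffix and the account ID with your own values.

aws iam attach-role-policy \
    --role-name eksctl-<eks-name>-nodegroup-NodeInstanceRole-xxxxxxxxx \
    --policy-arn arn:aws:iam::<account-id>:policy/AmazonEKS_EFS_CSI_Driver_Policy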

Install the Amazon EFS CSI driver

Install the Amazon EFS CSI driver using Helm or a YAML manifest.
The Helm deployment method is not described in detail here; this article mainly covers deployment from the YAML manifest.
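
For reference, a Helm install looks roughly like the following. This is a sketch; the image.repository override is only an example for a China region, and chart values may differ between chart versions.

helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
    --namespace kube-system \
    --set image.repository=918309763551.dkr.ecr.cn-north-1.amazonaws.com.cn/eks/aws-efs-csi-driver
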
{{< notice warning "Warning" >}}
Be sure to change the image address to the Amazon EKS add-on container image address for your region.
{{< /notice >}}

YAML manifest deployment

{{< notice info "Tip" >}}
Because of GitHub network issues, if the deployment hangs when the command is executed, terminate it and retry the deployment a few more times.
{{< /notice >}}

kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.3" 

The output is as follows:

serviceaccount/efs-csi-controller-sa created
serviceaccount/efs-csi-node-sa created
clusterrole.rbac.authorization.k8s.io/efs-csi-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/efs-csi-provisioner-binding created
deployment.apps/efs-csi-controller created
daemonset.apps/efs-csi-node created
csidriver.storage.k8s.io/efs.csi.aws.com created

Check whether the driver is running properly

kubectl get pods -A | grep efs
kube-system                    efs-csi-controller-56f6dc4c76-2lvqf               3/3     Running     0          3m32s
kube-system                    efs-csi-controller-56f6dc4c76-dxkwl               3/3     Running     0          3m32s
kube-system                    efs-csi-node-9ttxp                                3/3     Running     0          3m32s
kube-system                    efs-csi-node-hsn94                                3/3     Running     0          3m32s

{{< notice warning "Warning" >}}
Although the driver shows as running normally here, you still need to modify the image address. Otherwise, after creating the PV and PVC, an error will occur when they are mounted in a pod. (This error will be recorded separately later.)
{{< /notice >}}

Modify the EFS CSI node driver

kubectl edit daemonsets.apps -n kube-system efs-csi-node

Find the location of the aws-efs-csi-driver image.

Then change the image to 918309763551.dkr.ecr.cn-north-1.amazonaws.com.cn/eks/aws-efs-csi-driver:v1.3.3.

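Equivalently, the image can be changed without opening an editor. This is a sketch; the container name efs-plugin is assumed to match the manifest deployed above.

kubectl set image daemonset/efs-csi-node -n kube-system \
    efs-plugin=918309763551.dkr.ecr.cn-north-1.amazonaws.com.cn/eks/aws-efs-csi-driver:v1.3.3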

Create an Amazon EFS file system

Create an Amazon EFS file system for the Amazon EKS cluster.

Search for efs in the console and enter the EFS console.
In the console, click Create file system.
Name: fill in according to your own situation.
VPC: be sure to create the file system in the same VPC as the EKS cluster.
Availability and durability: choose according to your own needs.
If you have more requirements, you can click Customize to set more options, such as throughput, encryption, and backup policies.
Finally, click Create.
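
The file system can also be created from the CLI. This is a sketch; the Name tag, subnet ID, and security group ID are placeholders, and one mount target is needed per subnet used by the EKS nodes.

aws efs create-file-system \
    --encrypted \
    --tags Key=Name,Value=eks-efs \
    --query "FileSystemId" --output text

aws efs create-mount-target \
    --file-system-id fs-<582a03f3> \
    --subnet-id subnet-xxxxxxxx \
    --security-groups sg-xxxxxxxx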

Create inbound rule

Allow inbound NFS traffic from the CIDR of the EKS cluster VPC.
In the EFS file system just created, select Network -> Security group, then copy the security group ID, e.g. sg-152XXX.
In EC2, find Network & Security, choose Security Groups, search for sg-152XXX in the search box, select the security group, and then select Inbound rules.
In the inbound rules, allow the EKS cluster to access NFS traffic on port 2049.
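
From the CLI this amounts to roughly the following. This is a sketch; the VPC ID is a placeholder and the CIDR shown is only an example value for the cluster VPC.

aws ec2 describe-vpcs --vpc-ids vpc-xxxxxxxx \
    --query "Vpcs[].CidrBlock" --output text

aws ec2 authorize-security-group-ingress \
    --group-id sg-152XXX \
    --protocol tcp \
    --port 2049 \
    --cidr 192.168.0.0/16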

Deploy sample application

{{< tabs "Deploy static provisioning" "Deploy dynamic provisioning" >}}
{{< tab >}}

Deploy static provisioning

Deploy a sample application that uses the persistent volume you created

This procedure uses the multiple-pods example from the Amazon EFS Container Storage Interface (CSI) driver GitHub repository: a statically provisioned Amazon EFS persistent volume is accessed from multiple pods using the ReadWriteMany access mode.

  1. Clone the Amazon EFS Container Storage Interface (CSI) driver GitHub repository to your local system.

    git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git
  2. Navigate to the multiple_pods example directory.

    cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/
  3. Retrieve your Amazon EFS file system ID. You can find this information in the Amazon EFS console or use the following AWS CLI command.

    aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text

    Output:

    fs-<582a03f3>
  4. Edit the specs/pv.yaml file and replace the volumeHandle value with your Amazon EFS file system ID.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity:
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: efs-sc
      csi:
        driver: efs.csi.aws.com
        volumeHandle: fs-<582a03f3>

    Note

    Because Amazon EFS is an elastic file system, it does not enforce any file system capacity limits. The actual storage capacity value in the persistent volume and persistent volume claim is not used when the file system is created. However, because storage capacity is a required field in Kubernetes, you must specify a valid value, such as 5Gi in this example. This value does not limit the size of the Amazon EFS file system.

  5. From the specs directory, deploy the efs-sc storage class, the efs-claim persistent volume claim, and the efs-pv persistent volume.

    kubectl apply -f specs/pv.yaml
    kubectl apply -f specs/claim.yaml
    kubectl apply -f specs/storageclass.yaml
  6. List the persistent volumes in the default namespace. Look for a persistent volume bound to the default/efs-claim claim.

    kubectl get pv -w

    Output:

    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
    efs-pv   5Gi        RWX            Retain           Bound    default/efs-claim   efs-sc                  2m50s

    Do not proceed to the next step until the STATUS becomes Bound.

  7. From the specs directory, deploy the app1 and app2 sample applications.

    kubectl apply -f specs/pod1.yaml
    kubectl apply -f specs/pod2.yaml
  8. View the pods in the default namespace and wait for the STATUS of the app1 and app2 pods to become Running.

    kubectl get pods --watch

    Note

    It may take a few minutes for the pods to reach the Running status.

  9. Describe the persistent volume.

    kubectl describe pv efs-pv

    Output:

    Name:            efs-pv
    Labels:          none
    Annotations:     kubectl.kubernetes.io/last-applied-configuration:
                       {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capaci...
                     pv.kubernetes.io/bound-by-controller: yes
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    efs-sc
    Status:          Bound
    Claim:           default/efs-claim
    Reclaim Policy:  Retain
    Access Modes:    RWX
    VolumeMode:      Filesystem
    Capacity:        5Gi
    Node Affinity:   none
    Message:
    Source:
        Type:              CSI (a Container Storage Interface (CSI) volume source)
        Driver:            efs.csi.aws.com
        VolumeHandle:      fs-582a03f3
        ReadOnly:          false
        VolumeAttributes:  none
    Events:                none

    The Amazon EFS file system ID is listed as the VolumeHandle.

  10. Verify that the app1 pod successfully wrote data to the volume.

    kubectl exec -ti app1 -- tail /data/out1.txt

    Output:

    ...
    Mon Mar 22 18:18:22 UTC 2021
    Mon Mar 22 18:18:27 UTC 2021
    Mon Mar 22 18:18:32 UTC 2021
    Mon Mar 22 18:18:37 UTC 2021
    ...
  11. Verify that the data shown by the app2 pod in the volume is the same as the data app1 wrote to the volume.

    kubectl exec -ti app2 -- tail /data/out1.txt

    Output:

    ...
    Mon Mar 22 18:18:22 UTC 2021
    Mon Mar 22 18:18:27 UTC 2021
    Mon Mar 22 18:18:32 UTC 2021
    Mon Mar 22 18:18:37 UTC 2021
    ...
  12. When the experiment is complete, remove the sample application resources to clean up.

    kubectl delete -f specs/

    You can also manually delete the file systems and security groups you create.
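
    A rough CLI equivalent for that cleanup, in case you prefer it to the console (a sketch; all IDs are placeholders, mount targets must be deleted before the file system, and only delete a security group that you created specifically for EFS):

    aws efs describe-mount-targets --file-system-id fs-<582a03f3> \
        --query "MountTargets[*].MountTargetId" --output text
    aws efs delete-mount-target --mount-target-id fsmt-xxxxxxxx
    aws efs delete-file-system --file-system-id fs-<582a03f3>
    aws ec2 delete-security-group --group-id sg-xxxxxxxx
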
{{< /tab >}}
{{< tab >}}

Deploy dynamic provisioning

Prerequisites

You must use Amazon EFS CSI driver version 1.2.x or later, which requires cluster version 1.17 or later. To update the cluster, see Update cluster.
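
A quick way to check both versions (a sketch; the daemonset name matches the manifest deployed earlier, and the output format of kubectl version varies between client versions):

kubectl version --short
kubectl get daemonset efs-csi-node -n kube-system \
    -o jsonpath='{.spec.template.spec.containers[*].image}'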

Deploy a sample application that uses persistent volumes created by the controller

This procedure uses the dynamic provisioning example from the Amazon EFS Container Storage Interface (CSI) driver GitHub repository. It dynamically creates a persistent volume through an Amazon EFS access point and a persistent volume claim (PVC) used by a pod.

  1. Create a storage class for EFS. For all parameters and configuration options, see the Amazon EFS CSI driver on GitHub.

    1. Download the Amazon EFS StorageClass manifest.

      curl -o storageclass.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml
    2. Edit the downloaded file and replace the fileSystemId value with your file system ID, for example as sketched below.
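
      One way to do the replacement from the shell (a sketch; it assumes GNU sed and that the account contains exactly one file system):

      FS_ID=$(aws efs describe-file-systems --query "FileSystems[0].FileSystemId" --output text)
      sed -i "s/fileSystemId: .*/fileSystemId: ${FS_ID}/" storageclass.yaml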
    3. Deploy the storage class.

      kubectl apply -f storageclass.yaml
  2. Test automatic provisioning by deploying a new Pod that makes use of a PersistentVolumeClaim:

    1. Download a manifest that will deploy a pod and a persistent volume claim.

      curl -o pod.yaml https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/pod.yaml
    2. Deploy the pod with the sample application and the PersistentVolumeClaim it uses.

      kubectl apply -f pod.yaml
  3. Determine the names of the pods running the controller.

    kubectl get pods -n kube-system | grep efs-csi-controller

    output

    efs-csi-controller-74ccf9f566-q5989   3/3     Running   0          40m
    efs-csi-controller-74ccf9f566-wswg9   3/3     Running   0          40m
  4. After a few seconds, you can observe the controller picking up the change (log output edited for readability). Replace 74ccf9f566-q5989 with the value from one of the pods in the output of the previous command.

    kubectl logs efs-csi-controller-74ccf9f566-q5989 \
        -n kube-system \
        -c csi-provisioner \
        --tail 10

    output

    ...
    1 controller.go:737] successfully created PV pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca for PVC efs-claim and csi volume name fs-95bcec92::fsap-02a88145b865d3a87

    If you do not see the previous output, run the previous command using one of the other controller pods.

  5. Confirm that a persistent volume was created with a status of Bound to the PersistentVolumeClaim:

    kubectl get pv

    output

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
    pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca   20Gi       RWX            Delete           Bound    default/efs-claim   efs-sc                  7m57s
  6. View details of the PersistentVolumeClaim that was created.

    kubectl get pvc

    output

    NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    efs-claim   Bound    pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca   20Gi       RWX            efs-sc         9m7s
  7. View the status of the sample application pod.

    kubectl get pods -o wide

    output

    NAME          READY   STATUS    RESTARTS   AGE   IP               NODE                                           NOMINATED NODE   READINESS GATES
    efs-example   1/1     Running   0          10m   192.168.78.156   ip-192-168-73-191.us-west-2.compute.internal   <none>           <none>

    Confirm that the data has been written to the volume.

    kubectl exec efs-app -- bash -c "cat data/out"

    output

    ...
    Tue Mar 23 14:29:16 UTC 2021
    Tue Mar 23 14:29:21 UTC 2021
    Tue Mar 23 14:29:26 UTC 2021
    Tue Mar 23 14:29:31 UTC 2021
    ...
  8. (Optional) Terminate the Amazon EKS node running the pod and wait for the pod to be rescheduled. Alternatively, you can delete the pod and redeploy it, as in the sketch below. Then repeat step 7 and confirm that the output still contains the previous output.
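
    One way to redo the check after deleting the pod, reusing the manifest from step 2 (a sketch):

    kubectl delete -f pod.yaml
    kubectl apply -f pod.yaml
    kubectl exec efs-app -- bash -c "cat data/out"
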
{{< /tab >}}
{{< /tabs >}}
