Migrate AWS Qbert gp2 Volumes to gp3

Introduction

AWS has introduced the Amazon EBS General Purpose SSD volume type gp3, designed to provide a predictable baseline performance of 3,000 IOPS and 125 MiB/s throughput, regardless of volume size. With gp3 volumes, you can provision IOPS and throughput independently, without increasing storage size, at costs up to 20% lower per GB compared to gp2 volumes. Read more here: https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/

With the Platform9 5.6.5 release, AWS gp3 EBS volumes are the default for all new AWS clusters, with a default throughput of 125 MiB/s and 3,000 IOPS.

Platform9 users can migrate their existing gp2 cluster volumes to gp3 using the following procedure.

Migration Requirements

Platform9 Version

The Platform9 version must be PMK 5.6.8 or PMK 5.8.

Migrate volumes to GP3 for a specific cluster

Step 1: Update the AWS launch configuration to set the GP3 volume type as default:

Call the edit operation on the cluster:

  • The edit operation will update the AWS launch configuration from the gp2 volume type to gp3 (it will update the launch configuration with VolumeType: gp3, VolumeThroughput: 125, and VolumeIops: 3000).

  • Users can use the PF9 UI or the edit API directly:

    • UI: Infrastructure > Clusters > [select your cluster] > edit > update cluster

      • There is no need to modify any parameters on the UI page.

    • API: PUT /qbert/v4/<project_id>/clusters/<cluster_uuid>

      • Note: for custom VolumeThroughput/VolumeIops, customers can use the edit API with the payload shown below.
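A minimal sketch of such a request is shown below. The field names (volumeType, volumeThroughput, volumeIops), the authentication header, and the <du_fqdn> placeholder are assumptions for illustration; verify them against your qbert API reference before use.

```bash
# Hypothetical example only: field names and auth header are assumptions,
# check your qbert API documentation for the exact payload schema.
curl -X PUT \
  -H "X-Auth-Token: <keystone_token>" \
  -H "Content-Type: application/json" \
  -d '{
        "volumeType": "gp3",
        "volumeThroughput": 200,
        "volumeIops": 4000
      }' \
  "https://<du_fqdn>/qbert/v4/<project_id>/clusters/<cluster_uuid>"
```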

Step 2: Migrate the volumes from GP2 to GP3

There are two ways to do this:

  1. Migrate all volumes in a cluster using the script provided by Platform9.

  2. Migrate individual volumes on a cluster.

Migrate all volumes in a cluster using the script provided by Platform9.

Download the script from this link.

Prerequisites for running the script:

Migrate the volumes:

For clusters with a large number of worker nodes, migration can be done in batches, processing a desired number of nodes in parallel. The default is 5 nodes. Example:
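The exact invocation depends on the downloaded script. As an illustration of the parallel approach only (this is not the Platform9 script), volumes could be migrated five at a time with the AWS CLI:

```bash
# Illustration only, not the Platform9 script: migrate the volume IDs
# listed in volume-ids.txt to gp3, five at a time in parallel.
cat volume-ids.txt | xargs -P 5 -I {} \
  aws ec2 modify-volume --volume-id {} --volume-type gp3
```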

Migrate individual volumes on a cluster.

  1. Configure the AWS CLI with access key, secret key, and region [Configuring the AWS CLI - AWS Command Line Interface]

  2. Get the volume information to migrate

AWS Console > Instances > select the node > Storage > Block devices, OR use the script below.

Example:
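A minimal sketch using the AWS CLI; replace <instance_id> with the node's EC2 instance ID:

```bash
# List the EBS volumes attached to a given instance, with their current type and size.
aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=<instance_id> \
  --query 'Volumes[*].{VolumeId:VolumeId,Type:VolumeType,SizeGiB:Size}' \
  --output table
```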

Modify volume with default throughput/IOPS:
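For example, with the AWS CLI; <volume_id> is a placeholder for the volume obtained in the previous step:

```bash
# Change the volume type to gp3; omitting --iops and --throughput
# keeps the gp3 defaults (3000 IOPS, 125 MiB/s).
aws ec2 modify-volume --volume-id <volume_id> --volume-type gp3
```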

Modify volume with custom throughput/IOPS:
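For example, with the AWS CLI; the throughput and IOPS values below are placeholders, so choose values supported for your volume size:

```bash
# Change the volume type to gp3 with custom performance settings.
aws ec2 modify-volume \
  --volume-id <volume_id> \
  --volume-type gp3 \
  --throughput 250 \
  --iops 4000
```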

Check the status of the modification:
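For example:

```bash
# Track the progress of the volume modification; look for
# "ModificationState": "completed" in the output.
aws ec2 describe-volumes-modifications --volume-ids <volume_id>
```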

Once the "ModificationState" is "completed", migration is completed.
