AWS Prerequisites

Before getting started with Platform9 Managed Kubernetes (PMK), you will need to prepare the infrastructure that PMK will use. Read through these requirements carefully, as a successful PMK deployment depends on them.

AWS: PMK provides native integration with AWS to create Kubernetes clusters using AWS EC2 instances. In this model, PMK manages the lifecycle of the nodes on EC2. It also integrates with other AWS services such as Route 53, ELB, and EBS to create a fully production-ready Kubernetes cluster that can auto-scale based on workload requirements. (NOTE: PMK does not currently support integration with AWS EKS.)

Prerequisites for an AWS cluster

PMK requires that you specify an AWS access key ID and associated secret access key for a single IAM user in your AWS account. All credentials are encrypted in the Platform9 SaaS Management Plane. Your account must have at least one domain registered in Route 53 with an associated public hosted zone. When you create a cluster, the fully qualified domain names (FQDNs) for the API server and Kubernetes services are created as record sets under your Route 53 hosted zone's domain.
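As an illustration of the naming relationship, the sketch below derives per-cluster FQDNs from a cluster name and a hosted zone's domain. The `-api`/`-service` suffixes and the helper itself are assumptions for illustration only, not PMK's actual naming scheme.

```python
# Illustrative sketch (NOT PMK's actual naming scheme): how API and
# service FQDNs might be derived from a cluster name and the domain of
# a Route 53 public hosted zone.
def cluster_fqdns(cluster_name: str, hosted_zone_domain: str) -> dict:
    # Route 53 zone names often carry a trailing dot; normalize it away.
    zone = hosted_zone_domain.rstrip(".")
    return {
        "api": f"{cluster_name}-api.{zone}",        # Kubernetes API server
        "service": f"{cluster_name}-service.{zone}", # exposed services
    }

print(cluster_fqdns("demo", "example.com."))
```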

AWS IAM Policy: Pre-configured policy

You can download a pre-configured AWS IAM policy, limited to the permissions detailed below, from here and apply it to an existing or new credential.

The following permissions are required on your AWS account:

  • ELB Management
  • Route 53 DNS Configuration
  • Access to two or more Availability Zones within the region
  • EC2 Instance Management
  • EBS Volume Management
  • VPC Management
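To make the shape of such a policy concrete, here is a minimal sketch of an IAM policy document covering the permission areas listed above. The exact actions in Platform9's downloadable policy are more granular; the wildcard actions and the `iam:PassRole` entry here are illustrative assumptions, not the official policy.

```python
import json

# Hedged sketch of an IAM policy skeleton spanning the permission areas
# above (ELB, Route 53, EC2, EBS, VPC, Auto Scaling). The real
# Platform9-provided policy is scoped more tightly than these wildcards.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:*",                    # EC2 instances, EBS, VPC, subnets
                "elasticloadbalancing:*",   # ELB management
                "route53:*",                # DNS record sets
                "autoscaling:*",            # ASGs and launch configurations
                "iam:PassRole",             # attach instance profiles to nodes
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```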

Download IAM Policy

Refer to this AWS article for more information on how to create and manage an AWS access key ID and secret access key for your AWS account.

AWS Access Overview

The provided credentials will be used to create, update, and delete the following artifacts:

  • VPC (Only if deploying a cluster to a new VPC)
  • Subnets in each AZ (Only if deploying a cluster to a new VPC. In an existing VPC, the first subnet of each AZ is used)
  • Security Group (For cluster connectivity)
  • ELB (For HA Kubernetes API)
  • Auto Scaling Groups (For master and worker nodes)
  • Route 53 Hosted Zone Record sets (For API and Service FQDNs)
  • Launch Configuration (For creating EC2 instances)
  • Internet Gateway (For exposing the Kubernetes API with HTTPS)
  • Routes (For the Internet Gateway)
  • IAM Roles and Instance Profiles (For deployment of highly available etcd and Kubernetes AWS integration)
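The artifacts above map onto a handful of AWS service APIs, which is useful to know when auditing CloudTrail events or scoping a custom IAM policy. The mapping below is a sketch; the names are illustrative labels, not PMK identifiers.

```python
# Sketch: which AWS service API manages each artifact from the list
# above. Keys are illustrative labels; values are the AWS API namespaces
# (as used in IAM action prefixes, e.g. "ec2:CreateVpc").
ARTIFACT_API = {
    "VPC": "ec2",
    "Subnet": "ec2",
    "SecurityGroup": "ec2",
    "ELB": "elasticloadbalancing",
    "AutoScalingGroup": "autoscaling",
    "LaunchConfiguration": "autoscaling",
    "Route53RecordSet": "route53",
    "InternetGateway": "ec2",
    "Route": "ec2",
    "IAMRole": "iam",
    "InstanceProfile": "iam",
}

# The credential's IAM policy must grant actions in each of these services.
services = sorted(set(ARTIFACT_API.values()))
print(services)
```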

Make sure that the default limits for your region are configured properly

All AWS resources have default limits. As your usage of Kubernetes on AWS grows, you might run into some of them.

For example, the AWS default limit for the number of VPCs per region is 5, as stated in the AWS documentation on VPC limits.
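Because each cluster deployed to a new VPC consumes one VPC from that quota, you can do a quick capacity check before provisioning. This helper is a simple illustrative sketch, not part of PMK.

```python
# Capacity check sketch: with AWS's default limit of 5 VPCs per region,
# each cluster deployed to a NEW VPC consumes one VPC from the quota.
DEFAULT_VPC_LIMIT = 5

def remaining_new_vpc_clusters(existing_vpcs: int,
                               limit: int = DEFAULT_VPC_LIMIT) -> int:
    # Never return a negative count, even if the region is over quota.
    return max(limit - existing_vpcs, 0)

print(remaining_new_vpc_clusters(3))  # 2
```

Clusters deployed into an existing VPC do not count against this particular limit, though they still consume subnets, security groups, and other quota-bound resources.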

To see the default limit values for all your EC2 resources within a given region:

  • Log in to your AWS console
  • Navigate to Services > EC2
  • Once in EC2, in the left-hand menu panel, click Limits

This will show you all default limits for your AWS resources.

Any specific limit can be raised by submitting a ‘Service limit increase’ request with AWS.