Platform9 3.8 release notes
Platform9 Managed Kubernetes
1. Kubernetes version upgrade to 1.10.11
This version of Platform9 Managed Kubernetes has upgraded the Kubernetes version from 1.9.x to 1.10.11. You can find more info on this version, along with its various features, in the blog for the Kubernetes 1.10 release. Clusters can be upgraded to this Kubernetes version by using the “Upgrade Cluster” button in the Clusters view of the Infrastructure page of the Platform9 Clarity UI.
We highly recommend that users upgrade their clusters at their earliest convenience, and within 15 days of the release of new Platform9 Managed Kubernetes versions. Users may need to obtain a compatible kubectl for this version if their existing kubectl is not compatible with Kubernetes 1.10.11. See Install and Set Up kubectl for more information.
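As a quick check (assuming kubectl is installed and your kubeconfig points at the upgraded cluster), you can compare the client and server versions; kubectl supports one minor version of skew against the API server:

```shell
# Print client and server versions. The client should be within one
# minor version of the 1.10.11 API server (i.e. 1.9.x, 1.10.x, or 1.11.x).
kubectl version --short
```

If the client is too old, download a kubectl binary matching the cluster version as described in the Kubernetes documentation linked above.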
2. Multi-master Kubernetes clusters for OpenStack KVM deployments
We are happy to announce the capability to deploy Kubernetes clusters in a multi-master setup on Platform9 Managed OpenStack for KVM. This feature relies on the Load Balancer as a Service (LBaaS) capability in OpenStack Neutron. As part of the cluster deployment, you will be able to deploy clusters with 3 or 5 masters. These master nodes will run an etcd cluster and Kubernetes master components in a highly available fashion.
3. Change Docker version to 17.03.3
In the v3.5 release, we moved to Docker 17.09.1. We found that this Docker version has multiple issues with its handling of container storage. We have therefore moved Docker to a version that is certified by the Kubernetes community. Note that this change is a downgrade of the Docker version, but it should offer more stability to your Kubernetes environment.
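To confirm which Docker engine a node is running after this change (a quick check, assuming the docker CLI is available on the host):

```shell
# Report the Docker engine version running on this node.
# Nodes managed by this release should report 17.03.3.
docker version --format '{{.Server.Version}}'
```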
4. Bug fixes
This release also contains a number of performance optimizations and bug fixes that should result in a better user experience for your Platform9 cloud platform! Some significant ones are listed below.
- Fix for Kubernetes security vulnerability CVE-2018-1002105. See Kubernetes GitHub issue #71411 for more details.
- When disabling workloads on masters during cluster creation, Clarity UI would incorrectly toggle the selected option.
- Creating an OpenStack cloud provider cluster with an uppercase letter in the cluster name would fail.
- Occasionally Docker startup would fail because Unix socket /var/run/docker.sock is a directory.
- File descriptor leak during certain error conditions when requesting Kubernetes cluster certificate.
- Swagger files for qbert would fail to load in the Clarity UI.
Platform9 Managed OpenStack
1. Keystone project upgraded to Pike release
OpenStack Keystone has been upgraded to the Pike release. This release brings a number of new features, critical bug fixes, and stability enhancements.
Existing MFA users will need to disable and re-enable MFA to switch to the new mechanism.
2. Omni enhanced with Neutron object discovery for AWS
Platform9 now discovers the following EC2 resource types and creates the corresponding objects in OpenStack.
- Route table
- Internet Gateway
- Security Group
- Network Interface
- Elastic IP
Omni will also discover private AMIs in EC2 along with “owned by me” AMIs. These AMIs are shared appropriately within OpenStack based on project to AWS account mapping.
3. VM-HA cluster status now visible from Clarity
Platform9 UI now exposes the Consul cluster health to enable monitoring of the cluster quorum status after any host failure event. Consul server nodes need to form a quorum in order to properly detect host failures. If a Consul server node fails, the reported cluster status will be based on whether the available server nodes are still able to form a quorum.
If cluster status is not healthy, please reach out to Platform9 Support to disable/re-enable HA in order to re-balance the Consul server and client nodes in that cluster.
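The quorum requirement follows standard majority arithmetic: a cluster of N Consul server nodes keeps quorum while a strict majority of servers is alive, so it tolerates (N - 1) / 2 server failures. A minimal sketch (the consul members invocation assumes you are on a host in the cluster with the consul binary on the PATH):

```shell
# Number of server-node failures an N-server Consul cluster can
# tolerate while still forming a quorum: (N - 1) / 2.
fault_tolerance() {
  echo $(( ($1 - 1) / 2 ))
}

fault_tolerance 3   # -> 1
fault_tolerance 5   # -> 2

# On a cluster host, list members and their alive/failed state:
# consul members
```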
4. UI allows modifying Flavor Access
Clarity now supports editing tenant access for private flavors after flavor creation.
Previously this could only be performed via the API/CLI.
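For reference, the equivalent OpenStack CLI commands look like this (the flavor and project names are placeholders):

```shell
# Grant a tenant access to a private flavor (names are examples):
openstack flavor set --project demo-project m1.private

# Revoke that access again:
openstack flavor unset --project demo-project m1.private
```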
5. Bug fixes
This release also contains a number of bug fixes which should result in a better user experience for your Platform9 cloud platform! Some significant ones are listed below.
- Platform9 Discovery service does not discover offline VMs from vSphere.
- Skip MTU check during deletion of Networks - https://bugs.launchpad.net/neutron/+bug/1713499.
- In the Murano UI, the default value for a parameter of type boolean is not automatically selected if the value is “False”.
- In the Murano UI, the default value for a parameter of type number is not selected if the value is zero (0).
- The ip_addr constraint does not work in the Murano UI.
- Refreshing the Images page adds more pages, and increases the count of images.
- UI does not list more than 1,000 images.
- Cinder NFS volume path is not remounted by Nova after reboot.
- User-level quotas cannot be saved due to unexpected property error in UI.
- dnsmasq with --strict-order breaks DNS resolution if a DNS resolver is unavailable - https://bugs.launchpad.net/neutron/+bug/1746000.
- High tenant count causes “Tenants & Users” page in the UI to load slowly.
- VM-HA enable API call is now an async method.
- Federated Shadow Users do not have an email address - https://bugs.launchpad.net/keystone/+bug/1746599.
- After hypervisor reboot, Neutron ports are missing from Open vSwitch.
- Panko performance improvement, and periodic database purging for events older than 30 days.
6. Known limitations
- For an HA cluster with 5 Consul server nodes, the cluster can tolerate up to 2 server node failures, in addition to any client node failures. See Consul’s Consensus Protocol Deployment Table for details. After an HA evacuation event, if the consul members command output on a host shows only 3 server nodes alive, please reach out to Platform9 Support to disable/re-enable HA on that cluster.
- Omni does not discover security groups assigned to an instance. As a result, OpenStack will not accurately reflect the assigned security group: the instance will appear within OpenStack as having the “default” security group, although the actual security group assigned to the instance in AWS remains unchanged.
- Changing Security Group of an instance after it has been created from OpenStack does not change the Security Group in EC2.
- AWS allows adding multiple IP address ranges to a single security group rule. However, Omni will only discover the first IP range in the list, ignoring the additional ranges. If the discovered security group rule is updated from OpenStack, the IP ranges which were not discovered will be removed from the rule. Workaround: create multiple security group rules - one for each IP range.
- Creating security group rules which reference other security groups does not work when the rule is created from Omni, because the Omni conversion logic currently cannot convert an OpenStack ID to an AWS security group ID. As a result, security group rules which reference other security groups will not be accurately discovered: the rule will be discovered with a source and destination of 0.0.0.0/0, and if the discovered rule is updated from OpenStack, the correct rule definition in AWS will be overwritten with the incorrect data from OpenStack.