Ops Manager supports these configurations for vSphere deployments:

- TAS for VMs on vSphere with NSX-T. For more information, see TAS for VMs on vSphere with NSX-T.
- TAS for VMs on vSphere with NSX-V. For more information, see TAS for VMs on vSphere with NSX-V.
- TAS for VMs on vSphere without NSX. For more information, see TAS for VMs on vSphere without NSX.

These sections also provide requirements and recommendations for deploying Ops Manager with TAS for VMs on vSphere with NSX-T.

Note: The latest versions of Ops Manager validated for the reference architecture do not support using vSphere Storage Clusters.

Note: To use NSX-T with PAS, the NSX-T Container Plugin must be installed, configured, and deployed at the same time as the PAS tile.

NSX-T provides ingress routing natively. Use both Layer 4 and Layer 7 load balancers. You can build smaller groups of Gorouters and Diego Cells aligned to a particular service. You must assign either a private or a public IP address to the domains for the PAS system and apps. ESG provides load balancing and is configured to route to the TAS for VMs platform. For deployments that use several load balancers, VMware recommends a /23 network.

With the horizontal shared storage approach, you grant all hosts access to all datastores and assign a subset to each TAS for VMs installation.

The diagram below illustrates the reference architecture for PAS on vSphere with NSX-V deployments.

Allocate a large IP block in NSX-T for Kubernetes pods. New Tier-1 routers are created on demand as new clusters and namespaces are added to Enterprise PKS. When a new app is deployed, new NSX-T Tier-1 routers are generated and Enterprise PKS creates a /24 network from the Enterprise PKS pods network.
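Python's `ipaddress` module can illustrate how a large pods block decomposes into the per-namespace /24 networks described above. The block shown is a hypothetical example, not a required range:

```python
import ipaddress

# Hypothetical pods block; use whatever large block you allocate in NSX-T.
pods_block = ipaddress.ip_network("172.16.0.0/14")

# NSX-T carves one /24 out of this block per namespace (or per org).
subnets_24 = list(pods_block.subnets(new_prefix=24))

print(subnets_24[0])     # 172.16.0.0/24 -- first per-namespace network
print(len(subnets_24))   # 1024 -- a /14 yields 2**(24-14) /24 networks
```

This makes the sizing trade-off concrete: every new namespace consumes a whole /24, so the pods block must be large enough for the expected namespace count.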
These sections describe networking requirements and recommendations for TKGI on vSphere with NSX-T deployments. For information about high availability (HA) requirements and recommendations for TAS for VMs on vSphere, see High Availability in Platform Architecture and Planning Overview. For information about security requirements and recommendations for PAS deployments, see Security in Platform Architecture and Planning Overview.

The TKGI on vSphere with NSX-T architecture supports multiple master nodes for TKGI v1.2 and later.

TAS for VMs on vSphere with NSX-T supports the following SDN features:

- Virtualized, encapsulated networks and encapsulated broadcast domains
- VLAN exhaustion avoidance through the use of virtualized logical networks
- DNAT/SNAT services to create separate, non-routable network spaces for the TAS for VMs installation
- Load balancing services to pass traffic through Layer 4 to pools of platform routers at Layer 7
- SSL termination at the load balancer at Layer 7, with the option to forward on at Layer 4 or 7 with unique certificates
- Virtual, distributed routing and firewall services native to the hypervisor

The load balancing requirements and recommendations for PAS on vSphere with NSX-T deployments are as follows: You must configure NSX-T load balancers for the Gorouters. An NSX-T Tier-0 router is on the front end of the TAS for VMs deployment. The Tier-0 router must have routable external IP address space to advertise on the BGP network with its peers.

The diagram below illustrates the reference architecture for Enterprise PKS on vSphere with NSX-T deployments.

Discussions and planning within your organization are essential to acquiring the necessary amount of IP address space for a TAS for VMs deployment, with future growth in mind. The network octet is numerically sequential.
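The /23 recommendation and the sequential-octet behavior can be sketched with Python's `ipaddress` module; the range shown is hypothetical:

```python
import ipaddress

# Hypothetical deployment network; a /23 is recommended when several
# load balancers are in play.
deployment_net = ipaddress.ip_network("192.168.2.0/23")

total = deployment_net.num_addresses
octets = [str(n) for n in deployment_net.subnets(new_prefix=24)]

print(total)    # 512 -- twice the size of a /24
print(octets)   # ['192.168.2.0/24', '192.168.3.0/24'] -- sequential third octets
```

The /23 spans exactly two /24 networks whose third octets are numerically sequential, which is why the starting octet must be even.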
PAS on vSphere with NSX-V enables services provided by NSX on the PAS platform, such as an Edge services gateway (ESG), load balancers, firewall services, and NAT/SNAT services. These sections describe networking requirements and recommendations for PAS on vSphere with NSX-V deployments. These sections also describe the architecture for TAS for VMs on vSphere without software-defined networking deployments.

You must assign routable external IPs on the server side, such as routable IPs for NATs and load balancers, to the Edge router. For more information, see Networks in Platform Architecture and Planning Overview.

Select a network range for the Tier-0 router with enough space to cover DNATs and SNATs, load balancer VIPs, and other platform components. It can be smaller, but VMware discourages using a larger size in a single deployment. Note: Compared to NSX-V, NSX-T consumes much more address space for SNATs.

To support the persistent storage requirements of containers, VMware developed the vSphere Cloud Provider and its corresponding volume plugin. Non-production environments: Configure 4 to 6 TB of data storage and resize as necessary. With the vertical shared storage approach, you grant each cluster its own datastores, creating a cluster-aligned storage strategy.

Several Tier-1 routers, such as the router for the infrastructure subnet, connect to the Tier-0 router. This router is a central logical router into the TKGI platform. The number of master nodes should be an odd number so that etcd can form a quorum.
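The odd-number recommendation for master nodes follows from etcd quorum arithmetic: a quorum is a strict majority, so an even-sized cluster tolerates no more failures than the next smaller odd size. A minimal sketch:

```python
# Quorum math behind the odd-number recommendation for master/etcd nodes.
def quorum(n: int) -> int:
    """Smallest strict majority of an n-node cluster."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Nodes that can fail while a quorum survives."""
    return n - quorum(n)

for n in (1, 2, 3, 4, 5):
    print(n, quorum(n), tolerated_failures(n))
# 3 nodes tolerate 1 failure; 4 nodes also tolerate only 1,
# so the extra even node adds cost without added fault tolerance.
```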
Use Layer 7 load balancers for ingress routing. You can also use a third-party service for ingress routing, such as Istio or NGINX. Layer 4 and Layer 7 NSX-T load balancers are created automatically during app deployment. Any TCP routers and SSH Proxies also require NSX-V load balancers.

VMware recommends using an SDN to take advantage of its features. vSphere offers NSX-T and NSX-V to support SDN infrastructure. Smaller groups use less IP address space. Isolation segments can help with satisfying IP address space needs in a routed network design.

For more information, see How to Migrate Pivotal Platform to a New Datastore in vSphere.

The approach you follow reflects how your data center arranges its storage and host blocks in its physical layout.
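The horizontal and vertical storage approaches can be sketched as a simple cluster-to-datastore mapping; all cluster and datastore names here are hypothetical:

```python
# Sketch of the two shared-storage approaches with made-up names.
clusters = ["az1", "az2", "az3"]
datastores = ["ds01", "ds02", "ds03", "ds04", "ds05", "ds06"]

# Horizontal: all hosts in every cluster see every datastore;
# each installation is then assigned a subset of them.
horizontal = {c: list(datastores) for c in clusters}
installation_subset = datastores[:2]   # e.g. one installation uses ds01, ds02

# Vertical: each cluster gets its own datastores (cluster-aligned).
vertical = {c: datastores[2 * i: 2 * i + 2] for i, c in enumerate(clusters)}
print(vertical["az1"])   # ['ds01', 'ds02'] -- dedicated to this cluster
```

The horizontal mapping trades isolation for flexibility; the vertical mapping keeps each cluster's I/O on its own datastores.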
VMware recommends specific blobstore storage options for production and non-production TAS for VMs environments.

Note: For non-production environments, the NFS/WebDAV blobstore can be the primary consumer of storage, so the NFS/WebDAV blobstore must be actively maintained.

Note: This architecture was validated for earlier versions of TAS for VMs.
The NSX-T Container Plugin enables a container networking stack and integrates with NSX-T. You can configure the block of address space in the NCP Configuration section of the NSX-T tile in Pivotal Operations Manager. The CIDR range for the Kubernetes services network is configurable in Ops Manager.

You can allocate networked storage to the host clusters following one of two common approaches: horizontal or vertical. vSphere vSAN is an example of this architecture. For more information about general storage requirements and recommendations for TAS for VMs, see Storage in Platform Architecture and Planning Overview.
These sections describe networking requirements and recommendations for TAS for VMs on vSphere with NSX-T deployments. They also provide requirements and recommendations for deploying PAS on vSphere with NSX-V, such as network, load balancing, and storage capacity requirements and recommendations. This guide builds on the common base architectures described in Platform Architecture and Planning. For more information, see PAS on vSphere without NSX.

PAS deployments require the VMware NSX-T Container Plugin for Pivotal Platform to enable the SDN features available through NSX-T. Multiple clusters provide additional features such as security, customization on a per-cluster basis, privileged containers, failure domains, and version choice. For more information about PAS subnets, see Required Subnets in Platform Architecture and Planning Overview.

Pivotal Platform requires shared storage. For more information about storage requirements and recommendations, see PersistentVolume Storage Options on vSphere.

NSX-T creates address blocks of /24 by default. To accommodate the higher address space consumption, allow for four times the address space.
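The four-times guidance is simple arithmetic: an address plan sized for NSX-V is scaled by four, which widens the prefix by two bits. A sketch with a hypothetical /23 starting point:

```python
import math

# Hypothetical NSX-V-sized plan and the 4x adjustment for NSX-T SNAT use.
nsx_v_addresses = 512                         # e.g. a /23 plan
nsx_t_addresses = 4 * nsx_v_addresses         # four times the address space
prefix = 32 - int(math.log2(nsx_t_addresses)) # equivalent CIDR prefix

print(nsx_t_addresses)   # 2048 addresses
print(f"/{prefix}")      # /21 -- two bits wider than the original /23
```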
These sections also provide requirements and recommendations for deploying Enterprise PKS on vSphere with NSX-T, such as network, load balancing, and storage capacity requirements and recommendations. For additional requirements and installation instructions for Ops Manager on vSphere, see Installing Ops Manager on vSphere.

TKGI deployments with NSX-T are deployed with three clusters and three AZs. Every org in TAS for VMs is assigned a new /24 network. You can configure static or dynamic routing using BGP from the routed IP address backbone through the Tier-0 router with the edge gateway.

For example, you can configure an F5 external load balancer. With Layer 4 load balancers, traffic passes through the load balancers and SSL is terminated at the Gorouters. This approach reduces overhead processing. Any TCP Gorouters and SSH Proxies within the platform also require NSX-T load balancers.

PAS deployments with NSX-V also include an NSX-V Edge router on the front end. You can install the NSX-V Edge router as an Edge services gateway (ESG) or as a distributed logical router (DLR).

VMware recommends storage capacity allocations for production and non-production Enterprise PKS environments. Enterprise PKS on vSphere supports static persistent volume provisioning and dynamic persistent volume provisioning. PAS deployments experience downtime during events such as storage upgrades or migrations to new disks.
The diagram below illustrates the reference architecture for PAS on vSphere with NSX-T deployments. PAS deployments with NSX-T are deployed with three clusters and three Availability Zones (AZs).

For example, when you push a TKGI on vSphere deployment with a service type set to LoadBalancer, NSX-T automatically creates a new VIP for the deployment on the existing load balancer for that namespace. If you use a third-party ingress routing service, you must define domain information for the ingress routing service in the manifest of the Enterprise PKS on vSphere deployment.

PAS requires a system domain, app domain, and several wildcard domains. For information about network, subnet, and IP address space planning requirements and recommendations, see Required Subnets in Platform Architecture and Planning Overview. For the Kubernetes pods IP block, allocate a large network, for example, a /14 network. For more information about blobstore storage requirements and recommendations, see Configure File Storage in Configuring PAS for Upgrades.

Avoid s-vMotion activity on the platform's independent disks: s-vMotion activity can rename independent disks and cause BOSH to malfunction. With the vertical shared storage approach, all VMs in the same installation and cluster share a dedicated datastore.
You run the third-party ingress routing service as a container in the cluster. Namespaces should be used as a naming construct and not as a tenancy construct. To deploy TKGI without NSX-T, select Flannel as your container network interface in the Networking pane of the Enterprise PKS tile.

VMware recommends that you configure Layer 4 NSX-V load balancers for the Gorouters. TAS for VMs deployments with NSX-V are deployed with three clusters and three AZs.

An internal MySQL database is sufficient for use in production environments. For information about configuring system databases on PAS, see Configure System Databases in Configuring PAS.

You must specify a listening and translation port in the service, a name for tagging, and a protocol.
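These service fields map onto a Kubernetes Service of type LoadBalancer. A minimal sketch, expressed as a plain Python dict with hypothetical names and ports (not taken from the source):

```python
import json

# Hypothetical Service of type LoadBalancer; NSX-T assigns a VIP on the
# namespace's load balancer when a service like this is pushed.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-lb"},
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "web"},
        "ports": [{
            "name": "http",        # name used for tagging
            "protocol": "TCP",     # protocol
            "port": 80,            # listening port on the VIP
            "targetPort": 8080,    # translation port on the pods
        }],
    },
}
print(json.dumps(service, indent=2))
```

Rendered as YAML, the same structure is what you would apply with kubectl; the `port`/`targetPort` pair is the listening/translation port mapping the text describes.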