Data Center Architecture Design

TIA has a certification system in place, with dedicated vendors that can be retained to provide facility certification. Your facility must meet the business mission, and if you have multiple facilities across the US, then the US standards may apply. Government regulations for data centers will depend on the nature of the business and can include HIPAA (Health Insurance Portability and Accountability Act), SOX (Sarbanes-Oxley Act of 2002), SAS 70 Type I or II, and GLBA (Gramm-Leach-Bliley Act), as well as new regulations that may be introduced depending on the nature of your business and the present security situation. The IT industry and the world in general are changing at an exponential pace. Hyperscale users and increased demand have turned data into the new utility, making quicker, leaner facilities a must. Data center architecture serves as a blueprint for designing and deploying a data center facility.

The multi-tier data center model is dominated by HTTP-based applications built in a multi-tier approach. Servers may talk with other servers in different subnets or talk with clients in remote branch offices over the WAN or Internet. vPC technology works well in a relatively small data center environment in which most traffic consists of northbound and southbound communication between clients and servers. VLANs are extended within each pod so that servers can move freely within the pod without the need to change IP address and default gateway configurations.

In a spine-and-leaf fabric, the path is randomly chosen so that the traffic load is evenly distributed among the top-tier switches. If no oversubscription occurs between the lower-tier switches and their uplinks, then a nonblocking architecture can be achieved (see the sketch below). Cisco FabricPath also introduces a control-plane protocol called FabricPath Intermediate System to Intermediate System (IS-IS), which is designed to determine FabricPath switch ID reachability information. In FabricPath designs, spine switches perform intra-VLAN FabricPath frame switching.

Cisco VXLAN MP-BGP EVPN spine-and-leaf network: this architecture uses Layer 3 IP for the underlay network, and VTEP IP addresses are exchanged between VTEPs through the BGP EVPN control plane or static configuration. Underlay IP PIM or the ingress replication feature is used to send broadcast and unknown unicast traffic; the Cisco Nexus 9000 Series introduced an ingress replication feature, so the underlay network can be multicast free. Table 2 summarizes the characteristics of a VXLAN flood-and-learn spine-and-leaf network. With routing on the spine layer, the spine Layer 3 VXLAN gateway learns the host MAC addresses, so you need to consider the MAC address scale to avoid exceeding the scalability limits of your hardware. With internal and external routing on the border leaf, the SVIs on the border leaf switches perform inter-VLAN routing for east-west internal traffic and exchange routing adjacency over Layer 3 routed uplinks to route north-south external traffic. Figure 17 shows a typical design using a pair of border leaf switches connected to outside routing devices; border leaf switches can inject default routes to attract traffic intended for external destinations.
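To make the oversubscription point concrete, here is a minimal Python sketch (an illustration, not from any Cisco document) that computes a leaf switch's oversubscription ratio from its downlink and uplink capacity; a ratio of 1.0 or less between the lower-tier switches and their uplinks is what makes the fabric nonblocking. The port counts and speeds below are assumed example values.

def oversubscription_ratio(downlink_ports: int, downlink_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to fabric-facing bandwidth on a leaf."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Example: 48 x 10G server ports and 6 x 40G spine uplinks -> 2:1 oversubscribed.
ratio = oversubscription_ratio(48, 10, 6, 40)
print(f"oversubscription {ratio:.1f}:1")  # -> oversubscription 2.0:1

Many real designs deliberately accept modest oversubscription (for example, around 3:1) where traffic patterns allow it, and reserve 1:1 nonblocking capacity for bandwidth-critical pods.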
The nature of your business will determine which standards are appropriate for your facility. Data center design is the process of modeling and designing (Jochim 2017) a data center's IT resources, architectural layout, and entire infrastructure. The data center architecture specifies where and how the servers, storage networking, racks, and other data center resources will be physically placed. The modern data center is an exciting place, and it looks nothing like the data center of only 10 years past.

Scalability is another major issue in the three-tier data center network. Benefits of a network virtualization overlay include the following:

●      Optimized device functions: Overlay networks allow the separation (and specialization) of device functions based on where a device is being used in the network.

After traffic is routed to the destination VLAN, it is forwarded using the multidestination tree in that VLAN. Features exist, such as the FabricPath Multitopology feature, to help limit traffic flooding to a subsection of the FabricPath network. For feature support and for more information about Cisco FabricPath technology, please refer to the configuration guides, release notes, and reference documents listed at the end of this document.

In MP-BGP EVPN, any VTEP in a VNI can be the distributed anycast gateway for end hosts in its IP subnet by supporting the same virtual gateway IP address and virtual gateway MAC address (shown in Figure 16). The Layer 3 internal routed traffic is routed directly by the distributed anycast gateway on each ToR switch in a scale-out fashion. The spine switch can also be configured to send EVPN routes learned in the Layer 2 VPN EVPN address family to the IPv4 or IPv6 unicast address family and advertise them to the external routing device. This design complies with IETF RFC 7348 and RFC 8365 (previously draft-ietf-bess-evpn-overlay) and supports both Layer 2 multitenancy and Layer 3 multitenancy. Table 3 summarizes the characteristics of the VXLAN MP-BGP EVPN spine-and-leaf network.

Figure 20 shows an example of a Layer 3 MSDC spine-and-leaf network with an eBGP control plane (AS = autonomous system). Table 4. Cisco Layer 3 MSDC network characteristics.

Data center fabric management and automation: Cisco Data Center Network Manager (DCNM) is a management system for the Cisco® Unified Fabric. It is designed to simplify, optimize, and automate the modern multitenancy data center fabric environment. In Media Controller mode, it provides workflow automation, flow policy management, and third-party studio equipment integration, etc. This document presents the network characteristics, common designs, and design considerations (Layer 3 gateway, etc.) at the time of this writing.

VXLAN is an industry-standard protocol and uses underlay IP networks. It encapsulates Ethernet frames into IP User Datagram Protocol (UDP) packets and transports the encapsulated packets through the underlay network to the remote VXLAN tunnel endpoints (VTEPs) using the normal IP routing and forwarding mechanism. Each VXLAN segment has a VXLAN network identifier (VNID), and the VNID is mapped to an IP multicast group in the transport IP network. The VXLAN flood-and-learn spine-and-leaf network complies with the IETF VXLAN standards (RFC 7348). However, it is still a flood-and-learn-based Layer 2 technology.
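The encapsulation just described can be sketched in a few lines of Python. The snippet builds the 8-byte VXLAN header defined in RFC 7348 (a flags byte with the I bit set, then a 24-bit VNI) and prepends it to a placeholder Ethernet frame; in a real packet the result would ride inside a UDP datagram to the IANA-assigned destination port 4789. The frame contents and VNI value are illustrative assumptions.

import struct

VXLAN_UDP_PORT = 4789        # IANA-assigned UDP port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte RFC 7348 VXLAN header for a given 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Flags (1 byte) + 3 reserved bytes, then VNI in the top 3 bytes + 1 reserved byte.
    return struct.pack("!B3x", VXLAN_FLAG_VNI_VALID) + struct.pack("!I", vni << 8)

inner_frame = bytes(64)                # placeholder Ethernet frame
packet = vxlan_header(30001) + inner_frame
print(len(packet), packet[:8].hex())   # -> 72 0800000000753100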
A legacy mindset in data center architecture revolves around the notion of "design now, deploy later." The approach to creating a versatile, digital-ready data center must instead involve the deployment of infrastructure during the design phase. Facility ratings are based on Availability Classes, from 1 to 4. Critical facilities are becoming more diverse as technology advances create market shifts.

Spanning Tree Protocol provides several benefits: it is simple, and it is a plug-and-play technology requiring little configuration. With virtualized servers, applications are increasingly deployed in a distributed fashion, which leads to increased east-west traffic. For example, fabrics need to support scaling of forwarding tables, scaling of network segments, Layer 2 segment extension, virtual device mobility, forwarding path optimization, and virtualized networks for multitenant support on shared physical infrastructure.

For Layer 2 multicast traffic, traffic entering the FabricPath switch is hashed to a multidestination tree to be forwarded. Traffic between subnets needs to be routed by a Layer 3 function enabled on FabricPath switches (default gateways and border switches). Table 1 summarizes the characteristics of a FabricPath spine-and-leaf network: FabricPath MAC-in-MAC frame encapsulation, flood-and-learn plus conversational learning, and flooding by the FabricPath IS-IS multidestination tree.

The Cisco VXLAN flood-and-learn spine-and-leaf network complies with the IETF VXLAN standards (RFC 7348), which defined a multicast-based flood-and-learn VXLAN without a control plane. Each VTEP device is independently configured with the multicast group for its VXLAN segment and participates in PIM routing. With VRF-lite, the number of VLANs supported across the VXLAN flood-and-learn network is 4096.

Common Layer 3 designs use centralized routing: that is, the Layer 3 routing function is centralized on specific switches (spine switches or border leaf switches). In the border spine design, the spine switch has two functions: it is part of the underlay Layer 3 IP network and transports the VXLAN-encapsulated packets, and it also performs internal inter-VXLAN routing and external routing. In the border leaf design, tenant traffic needs to take two underlay hops (VTEP to spine to border leaf) to reach the external network, and you need to consider MAC address scale to avoid exceeding the scalability limit on the border leaf switch. (Note: In this design the spine switch only needs to run the BGP EVPN control plane and IP routing.)

The VXLAN MP-BGP EVPN spine-and-leaf network complies with IETF VXLAN standards RFC 7348 and RFC 8365 (previously draft-ietf-bess-evpn-overlay). It reduces network flooding through control-plane-based host MAC and IP address route distribution and ARP suppression on the local VTEPs. Each VTEP performs local learning to obtain MAC address information (through traditional MAC address learning) and IP address information (based on Address Resolution Protocol [ARP] snooping) from its locally attached hosts. With the anycast gateway function in EVPN, end hosts in a VNI can always use their local VTEP for that VNI as their default gateway to send traffic out of their IP subnet. A distributed anycast gateway also offers the benefit of transparent host mobility in the VXLAN overlay network (see the sketch below). For feature support and more information about VXLAN MP-BGP EVPN, please refer to the configuration guides, release notes, and reference documents listed at the end of this document.
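To make the distributed anycast gateway idea concrete, here is a toy Python model (an assumption for illustration, not Cisco's implementation): every VTEP in the VNI is provisioned with the same virtual gateway IP and MAC, so a host's cached gateway ARP entry remains valid no matter which VTEP it sits behind.

# Identical gateway identity provisioned on every VTEP in the VNI.
ANYCAST_GW = {"ip": "10.1.1.1", "mac": "0000.2222.3333"}

vteps = {name: dict(ANYCAST_GW) for name in ("leaf1", "leaf2", "leaf3")}

def gateway_for(vtep: str) -> dict:
    """The gateway identity a host sees behind any given VTEP."""
    return vteps[vtep]

# A host moving from leaf1 to leaf3 keeps the same gateway IP and MAC,
# so it does not need a new ARP exchange after the move.
assert gateway_for("leaf1") == gateway_for("leaf3")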
A data center is probably going to be the most expensive facility your company ever builds or operates. We will review codes, design standards, and operational standards. In 2013, UI requested that TIA stop using the Tier system to describe reliability levels, and TIA switched to using the word "Rated" in lieu of "Tiers," defined as Rated 1-4. Application and virtualization infrastructure are directly linked to data center design. Mr. Shapiro is the author of numerous technical articles and is also a speaker at many technical industry seminars; his experience also includes providing analysis of critical application support facilities.

vPC eliminates the spanning-tree blocked ports, provides active-active uplinks from the access switches to the aggregation routers, and makes full use of the available bandwidth, as shown in Figure 2. A typical FabricPath network uses a spine-and-leaf architecture; it retains the easy-configuration, plug-and-play deployment model of a Layer 2 environment, and the Layer 3 routing function is laid on top of the Layer 2 network.

Because the gateway IP address and virtual MAC address are identically provisioned on all VTEPs in a VNI, when an end host moves from one VTEP to another, it doesn't need to send another ARP request to relearn the gateway MAC address. MP-BGP EVPN also provides VTEP peer discovery and authentication, mitigating the risk from rogue VTEPs in the VXLAN overlay network. The external routing function is centralized on specific switches. The VXLAN VTEP uses a list of IP addresses of other VTEPs in the network to send broadcast and unknown unicast traffic. You need to design multicast group scaling carefully, as described earlier in the section discussing Cisco VXLAN flood-and-learn multicast traffic. For feature support and more information about TRM, please refer to the configuration guides, release notes, and reference documents listed at the end of this document. Layer 2 multitenancy example using the VNI.

Cisco DCNM helps ensure infrastructure is deployed consistently in a single data center or across multiple data centers, while also helping to reduce costs and the time employees spend maintaining it.

●      LAN Fabric mode: provides Fabric Builder for automated VXLAN EVPN fabric underlay deployment, overlay deployment, end-to-end flow trace, alarm and troubleshooting, configuration compliance, and device lifecycle management, etc.

Two Cisco Network Insights applications are supported:

●      Cisco Network Insights - Advisor (NIA): monitors the data center network and pinpoints issues that can be addressed to maintain availability and reduce surprise outages.

●      Cisco Network Insights - Resources (NIR): provides a way to gather information through data collection to get an overview of available resources and their active processes and configurations across the entire Data Center Network Manager (DCNM).

Depending on the number of servers that need to be supported, there are different flavors of MSDC designs: two-tiered spine-and-leaf topology, three-tiered spine-and-leaf topology, and hyperscale fabric plane Clos design. This architecture has been proven to deliver high-bandwidth, low-latency, nonblocking server-to-server connectivity.
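As a sketch of how the eBGP control plane in an MSDC fabric can be laid out (one common pattern, not the only one; the ASN values and switch names are illustrative assumptions), the following Python snippet gives each leaf its own private ASN, shares one ASN across the spines, and enumerates the full mesh of leaf-spine eBGP sessions:

SPINE_ASN = 65000
LEAF_ASN_BASE = 65001

def ebgp_sessions(num_spines: int, num_leaves: int):
    """Yield (leaf, leaf_asn, spine, spine_asn) peerings: every leaf
    peers with every spine, matching the spine-and-leaf wiring."""
    for leaf in range(num_leaves):
        for spine in range(num_spines):
            yield (f"leaf{leaf + 1}", LEAF_ASN_BASE + leaf,
                   f"spine{spine + 1}", SPINE_ASN)

for sess in ebgp_sessions(num_spines=2, num_leaves=4):
    print("%s (AS%d) <-> %s (AS%d)" % sess)

Giving each leaf a unique ASN keeps BGP's built-in loop prevention simple, while sharing one ASN across the spines limits ASN consumption; other ASN layouts are equally valid.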
However, Spanning Tree Protocol cannot use parallel forwarding paths, and it always blocks redundant paths in a VLAN. Figure 4 shows a typical two-tiered spine-and-leaf topology. East-west traffic needs to be handled efficiently, with low and predictable latency. As the number of hosts in a broadcast domain increases, the negative effects of flooding packets are more pronounced. Although the concept of a network overlay is not new, interest in network overlays has increased in the past few years because of their potential to address some of these requirements.

The FabricPath IS-IS control plane builds reachability information about how to reach other FabricPath switches, but FabricPath has no control plane for the overlay network. The VXLAN flood-and-learn spine-and-leaf network likewise doesn't have a control plane for the overlay network. The VTEPs are Layer 2 VXLAN gateways for VXLAN-to-VLAN or VLAN-to-VXLAN bridging. Layer 3 IP multicast traffic is forwarded by Layer 3 PIM-based multicast routing (Tenant Routed Multicast [TRM] on Cisco Nexus 9000 Cloud Scale Series Switches). Note that the maximum number of inter-VXLAN active-active gateways is two with a Hot-Standby Router Protocol (HSRP) and vPC configuration. With the ingress replication feature, the underlay network is multicast free. (Note: Ingress replication is supported only on Cisco Nexus 9000 Series Switches.)

Internal and external routing at the border spine: Figure 18 shows a typical design with a pair of spine switches connected to the outside routing devices. With this design, tenant traffic needs to take only one underlay hop (VTEP to spine) to reach the external network. In the border leaf design, by contrast, the spine switch is just part of the underlay Layer 3 IP network that transports the VXLAN-encapsulated packets; it doesn't learn host MAC addresses. Data center design with extended Layer 3 domain.

Most customers use eBGP because of its scalability and stability. Table 4 summarizes the characteristics of a Layer 3 MSDC spine-and-leaf network. Many different tools are available from Cisco, third parties, and the open-source community that can be used to monitor, manage, automate, and troubleshoot the data center fabric. Table 5 compares the four Cisco spine-and-leaf architectures discussed in this document: FabricPath, VXLAN flood-and-learn, VXLAN MP-BGP EVPN, and MSDC Layer 3 networks.

The Tiers are compared in the table below and can be found in greater definition in UI's white paper TUI3026E. There is no single way to build a data center, and the most efficient and effective data center designs use relatively new design fundamentals to create the required high-energy-density, high-reliability environment. Best practices ensure that you are doing everything possible to keep it that way. Most users do not understand how critical the floor layout is to the performance of a data center, or they only understand its importance after a problem occurs. Data Center Architects are responsible for adequately securing the Data Center and should examine factors such as facility design and architecture. This course encompasses the basic principles of data center design, tracking its history from the early days of the mainframe to the modern enterprise data center in its many forms and the future.
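The ingress replication behavior mentioned above can be sketched as follows (a minimal illustration; the VTEP addresses and send function are placeholders): instead of sending one copy of a broadcast or unknown-unicast frame into an underlay multicast group, the ingress VTEP unicasts a copy to every remote VTEP on its list, so the underlay needs no PIM at all.

remote_vteps = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]  # learned via BGP EVPN or statically configured

def send_unicast(dst_ip: str, frame: bytes) -> None:
    print(f"unicast copy to {dst_ip} ({len(frame)} bytes)")

def ingress_replicate(frame: bytes, vteps: list[str]) -> None:
    """Head-end replication: N-1 unicast copies, no underlay multicast needed."""
    for vtep in vteps:
        send_unicast(vtep, frame)

ingress_replicate(bytes(100), remote_vteps)

The trade-off is bandwidth at the ingress VTEP (one copy per remote VTEP) in exchange for a multicast-free underlay.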
This document reviews several spine-and-leaf architecture designs that Cisco has offered in the recent past, as well as current designs and those Cisco expects to offer in the near future, to address fabric requirements in the modern virtualized data center:

●      Cisco® FabricPath spine-and-leaf network

●      Cisco VXLAN flood-and-learn spine-and-leaf network

●      Cisco VXLAN Multiprotocol Border Gateway Protocol (MP-BGP) Ethernet Virtual Private Network (EVPN) spine-and-leaf network

●      Cisco Massively Scalable Data Center (MSDC) Layer 3 spine-and-leaf network

The leaf layer consists of access switches that connect to devices such as servers. The spine layer is the backbone of the network and is responsible for interconnecting all leaf switches. The VXLAN flood-and-learn spine-and-leaf network relies on initial data-plane traffic flooding to enable VTEPs to discover each other and to learn remote host MAC addresses and MAC-to-VTEP mappings for each VXLAN segment. To support multitenancy, the same VLANs can be reused on different FabricPath leaf switches, and IEEE 802.1Q tagged frames are mapped to specific VN-segments; these are the VN-segment edge ports. The VN-segment feature uses an increased 24-bit namespace. Layer 2 multitenancy example with the FabricPath VN-Segment feature. Please note that TRM is supported only on newer generations of Nexus 9000 switches, such as Cloud Scale ASIC-based switches. Intel RSD defines key aspects of a logical architecture to implement CDI.

Regardless of the standard followed, documentation and record keeping of your operation and maintenance activities is one of the most important parts of the process. There are also many operational standards to choose from, including:

●      EN 50600-2-4 Telecommunications cabling infrastructure

●      EN 50600-2-6 Management and operational information systems

●      Uptime Institute: Operational Sustainability (with and without Tier certification)

●      ISO 14000 - Environmental Management System

●      PCI - Payment Card Industry Security Standard

●      SOC, SAS70 & ISAE 3402 or SSAE16, FFIEC (USA) - Assurance Controls

●      AMS-IX - Amsterdam Internet Exchange - Data Centre Business Continuity Standard

About the author: Steven Shapiro has been in the mission critical industry since 1988 and has a diverse background in the study, reporting, design, commissioning, development, and management of reliable electrical distribution, emergency power, lighting, and fire protection systems for high-tech environments.


This section describes the Cisco VXLAN flood-and-learn characteristics on these Cisco hardware switches. The VXLAN flood-and-learn network is a Layer 2 overlay network, and Layer 3 SVIs are laid on top of the Layer 2 overlay network. With this design, the spine switch needs to support VXLAN routing. Also, with SVIs enabled on the spine switch, the spine switch disables conversational learning and learns the MAC addresses in the corresponding subnet. The VXLAN flood-and-learn spine-and-leaf network supports Layer 2 multitenancy (Figure 14), and each tenant has its own VRF routing instance. Table 2 lists the Cisco VXLAN flood-and-learn network characteristics; the underlay routing protocol can be static routing or a dynamic protocol (Open Shortest Path First [OSPF], IS-IS, External BGP [eBGP], etc.). (Note: Ingress replication is supported only on Cisco Nexus 9000 Series Switches.)

In 2010, Cisco introduced virtual-port-channel (vPC) technology to overcome the limitations of Spanning Tree Protocol. However, vPC can provide only two active parallel uplinks, and so bandwidth becomes a bottleneck in a three-tier data center architecture.

FabricPath is simple, flexible, and stable; it has good scalability and fast convergence characteristics; and it supports multiple parallel paths at Layer 2. Its Shortest-Path First (SPF) routing protocol is used to determine reachability and select the best path or paths to any given destination FabricPath switch in the FabricPath network (see the sketch below). As shown in the design for internal and external routing at the border leaf in Figure 7, the spine switch functions as the Layer 2 FabricPath switch and performs intra-VLAN FabricPath frame switching only.

Every leaf switch connects to every spine switch in the fabric. An edge or leaf device can optimize its functions and all its relevant protocols based on end-state information and scale, and a core or spine device can optimize its functions and protocols based on link-state updates, optimizing with fast convergence.

Cisco DCNM can be installed in four modes:

●      Classic LAN mode: manages Cisco Nexus data center infrastructure deployed in legacy designs, such as vPC designs and FabricPath designs. (This mode is not relevant to this white paper.)

The data center is a dedicated space where your firm houses its most important information and relies on it being safe and accessible. Should it have the minimum required by code? Facility operations, maintenance, and procedures will be the final topics for the series. Data-centered architecture is also known as database-centric architecture: there are two types of components, a central data store and data accessors, and interaction or communication between the data accessors occurs only through the data store.

For more information, refer to the following documents:

●      Cisco's Massively Scalable Data Center Network Fabric White Paper

●      https://www.cisco.com/c/en/us/products/cloud-systems-management/prime-data-center-network-manager/index.html

●      https://www.cisco.com/c/en/us/support/data-center-analytics/network-insights-data-center/products-installation-and-configuration-guides-list.html

●      https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-730116.html

●      https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/guide-c07-734107.html

●      https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-743245.html

●      https://blogs.cisco.com/datacenter/vxlan-innovations-on-the-nexus-os-part-1-of-2

●      Cisco MDS 9000 10-Gbps 8-Port FCoE Module Extends Fibre Channel over Ethernet to the Data Center Core
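To illustrate the SPF computation mentioned above, here is a hedged Python sketch (the two-spine, three-leaf topology and unit link costs are assumptions): a breadth-first pass computes each switch's distance from a source switch and counts the equal-cost shortest paths, which is why a leaf sees one shortest path per spine to any other leaf.

from collections import deque

links = {
    "leaf1": ["spine1", "spine2"],
    "leaf2": ["spine1", "spine2"],
    "leaf3": ["spine1", "spine2"],
    "spine1": ["leaf1", "leaf2", "leaf3"],
    "spine2": ["leaf1", "leaf2", "leaf3"],
}

def spf(source: str):
    dist = {source: 0}
    paths = {source: 1}          # number of equal-cost shortest paths
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in links[node]:
            if nbr not in dist:                 # first time this switch is reached
                dist[nbr] = dist[node] + 1
                paths[nbr] = paths[node]
                queue.append(nbr)
            elif dist[nbr] == dist[node] + 1:   # another equal-cost shortest path
                paths[nbr] += paths[node]
    return dist, paths

dist, paths = spf("leaf1")
print(dist["leaf3"], paths["leaf3"])  # -> 2 2 (two hops, one path per spine)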
You can also have multiple VXLAN segments share a single IP multicast group in the core network; however, the overloading of multicast groups leads to suboptimal multicast forwarding. Ideally, you should map one VXLAN segment to one IP multicast group to provide optimal multicast forwarding (see the sketch below).

The VLAN has local significance on the FabricPath leaf switch, and VN-segments have global significance across the FabricPath network. On each FabricPath leaf switch, the network keeps the 4096 VLAN spaces, but across the whole FabricPath network it can support up to 16 million VN-segments, at least in theory. Similarly, VXLAN uses a 24-bit segment ID, or VNID, which enables up to 16 million VXLAN segments to coexist in the same administrative domain. To support multitenancy, the same VLAN can be reused on different VTEP switches, and IEEE 802.1Q tagged frames received on VTEPs are mapped to specific VNIs. As an extension to MP-BGP, MP-BGP EVPN inherits the support for multitenancy with VPN using the VRF construct; Layer 3 segmentation among VXLAN tenants is achieved by applying Layer 3 VRF technology and enforcing routing isolation among tenants by using a separate Layer 3 VNI mapped to each VRF instance.

Examples of MSDCs are large cloud service providers that host thousands of tenants, and web portal and e-commerce providers that host large distributed applications. If device port capacity becomes a concern, a new leaf switch can be added by connecting it to every spine switch and adding the network configuration to the switch.

That's the goal of Intel Rack Scale Design (Intel RSD), a blueprint for unleashing industry innovation around a common CDI-based data center architecture. The investment giant is one of the biggest advocates outside Silicon Valley for open source hardware, and the new building itself is a modular, just-in-time construction design.
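A small Python sketch can illustrate the mapping trade-off described above (the group range, pool size, and mapping function are assumptions for illustration): with a pool at least as large as the number of VNIs, each segment gets its own underlay multicast group; a smaller pool overloads groups, so hosts receive flooded traffic for segments they do not participate in.

import ipaddress

MCAST_BASE = int(ipaddress.IPv4Address("239.1.0.0"))

def vni_to_group(vni: int, pool_size: int = 512) -> str:
    """Map a 24-bit VNI to one of `pool_size` underlay multicast groups.

    pool_size >= number of VNIs gives one group per segment (optimal
    flooding scope); a smaller pool shares groups across segments,
    which the text notes leads to suboptimal multicast forwarding.
    """
    if not 0 < vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return str(ipaddress.IPv4Address(MCAST_BASE + (vni % pool_size)))

print(vni_to_group(30001))  # -> 239.1.1.49 with the default pool size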
Layer 3 multitenancy example with VRF-lite. Cisco FabricPath spine-and-leaf network summary.

Traditional three-tier data center design: the architecture consists of core routers, aggregation routers (sometimes called distribution routers), and access switches. Another challenge in a three-tier architecture is that server-to-server latency varies depending on the traffic path used. With vPC technology, Spanning Tree Protocol is still used as a fail-safe mechanism. A new data center design called the Clos network-based spine-and-leaf architecture was developed to overcome these limitations.

The requirement to enable multicast capabilities in the underlay network presents a challenge to some organizations, because they do not want to enable multicast in their data centers or WANs. End-host information in the overlay network is learned through the flood-and-learn mechanism with conversational learning (see the sketch below).

The VXLAN MP-BGP EVPN spine-and-leaf architecture uses MP-BGP EVPN for the control plane for VXLAN. The MP-BGP EVPN control plane provides integrated routing and bridging by distributing both Layer 2 and Layer 3 reachability information for the end hosts residing in the VXLAN overlay network.

Cisco MSDC Layer 3 spine-and-leaf network: the routing protocol can be regular eBGP or any Interior Gateway Protocol (IGP) of choice. The leaf layer is responsible for advertising server subnets in the network fabric, and spine devices are responsible for learning infrastructure routes and end-host subnet routes. External routing with border spine design.

The architecture also addresses how these resources and devices will be interconnected and how physical and logical security workflows are arranged. In fact, according to Moore's Law (named after the co-founder of Intel, Gordon Moore), computing power doubles every few years. Data Centre World Singapore speaker and mission critical architect Will Ringer attests to the importance of an architect's eye to data centre design; architects must also play an active role in manageability and operations of the data center.
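Here is a toy Python model of the flood-and-learn-with-conversational-learning behavior described above (an illustration, not the NX-OS data plane): a leaf learns a remote source MAC only when the frame targets one of its local hosts (an actual conversation), floods frames for unknown destinations, and forwards directly once a MAC has been learned.

local_hosts = {"aa:aa"}          # MACs attached to this leaf
mac_table: dict[str, str] = {}   # remote MAC -> remote switch ID

def receive(src: str, dst: str, from_switch: str) -> str:
    if dst in local_hosts:
        mac_table[src] = from_switch        # conversational learning
        return f"deliver to local host {dst}"
    if dst in mac_table:
        return f"forward to {mac_table[dst]}"
    return "flood on multidestination tree"  # unknown unicast

print(receive("bb:bb", "aa:aa", "leaf2"))  # learns bb:bb -> leaf2
print(receive("aa:aa", "bb:bb", "local"))  # -> forward to leaf2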
As shown in the design for internal and external routing on the border leaf in Figure 13, the leaf ToR VTEP switch is a Layer 2 VXLAN gateway that transports the Layer 2 segment over the underlay Layer 3 IP network. In the VXLAN flood-and-learn mode defined in RFC 7348, end-host information learning and VTEP discovery are both data-plane based, with no control protocol to distribute end-host reachability information among the VTEPs. As in a traditional VLAN environment, routing between VXLAN segments or from a VXLAN segment to a VLAN segment is required in many situations. The VXLAN flood-and-learn spine-and-leaf network supports up to two active-active gateways with vPC for internal VXLAN routing.

The spine switch runs MP-BGP EVPN on the inside with the other VTEPs in the VXLAN fabric and exchanges EVPN routes with them. At the same time, it runs the normal IPv4 or IPv6 unicast routing in the tenant VRF instances with the external routing device on the outside.

This approach keeps latency at a predictable level because a payload only has to hop to a spine switch and another leaf switch to reach its destination. For a FabricPath network, the FabricPath IS-IS control plane by default creates two multidestination trees that carry broadcast traffic, unknown unicast traffic, and multicast traffic through the FabricPath network.

●      Storage Area Network (SAN) controller mode: manages Cisco MDS Series switches for storage network deployment, with graphical control for all SAN administration functions. (This mode is not relevant to this white paper.)

Today, most web-based applications are built as multi-tier applications. For more information on Cisco Network Insights, see https://www.cisco.com/c/en/us/support/data-center-analytics/network-insights-data-center/products-installation-and-configuration-guides-list.html.

Not all facilities supporting your specific industry will meet your defined mission, so your facility may not look or operate like another, even in the same industry. We are continuously innovating the design and systems of our data centers to protect them from man-made and natural risks.
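Because the VLAN ID has only local significance on each VTEP, the Layer 2 VXLAN gateway's 802.1Q-to-VNI mapping is configured per switch. The following Python fragment (switch names, VLAN IDs, and VNIs are hypothetical) shows how the same VLAN ID can map to different global VNIs on different leaf switches, which is what makes VLAN reuse across tenants possible:

vlan_to_vni = {
    "leaf1": {10: 30001, 20: 30002},
    "leaf2": {10: 30005},   # VLAN 10 reused here for a different tenant
}

def ingress_vni(switch: str, vlan: int) -> int:
    """Map an 802.1Q-tagged frame arriving on `switch` to its global VNI."""
    return vlan_to_vni[switch][vlan]

# Same local VLAN ID, different global segments on different leaves.
assert ingress_vni("leaf1", 10) != ingress_vni("leaf2", 10)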
The Layer 3 spine-and-leaf design intentionally does not support Layer 2 VLANs across ToR switches, because it is a Layer 3 fabric. Each host is associated with a host subnet and talks with other hosts through Layer 3 routing. For more details regarding MSDC designs with Cisco Nexus 9000 and 3000 switches, please refer to Cisco's Massively Scalable Data Center Network Fabric White Paper.

The FabricPath spine-and-leaf network uses Layer 2 FabricPath MAC-in-MAC frame encapsulation, and it uses FabricPath IS-IS for the control plane in the underlay network. The FabricPath spine-and-leaf network is proprietary to Cisco but is based on the TRILL standard.

VXLAN, one of many available network virtualization overlay technologies, offers several advantages. The VXLAN MP-BGP EVPN spine-and-leaf architecture uses VXLAN encapsulation, transporting Layer 2 frames over a Layer 3 IP underlay network: the original Layer 2 frame is encapsulated in a VXLAN header and then placed in a UDP-IP packet and transported across the IP network. Underlay IP multicast is used to reduce the flooding scope of the set of hosts that are participating in the VXLAN segment. As shown in the design for internal and external routing on the spine layer in Figure 12, the leaf ToR VTEP switch is a Layer 2 VXLAN gateway that transports the Layer 2 segment over the underlay Layer 3 IP network.

If deviations from your chosen standard are necessary because of site limitations, financial limitations, or availability limitations, they should be documented and accepted by all stakeholders of the facility.
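In such a Layer 3 fabric, flows are typically spread across the equal-cost spine uplinks by hashing the flow identifier, which also explains the earlier note that paths are chosen so load is evenly distributed among the top-tier switches. A hedged Python sketch follows (the hash choice, addresses, and switch names are assumptions; real switches use hardware hash functions): different flows land on different spines, while all packets of one flow stay on one path.

import hashlib

spines = ["spine1", "spine2", "spine3", "spine4"]

def ecmp_next_hop(src_ip, dst_ip, proto, sport, dport, paths=spines):
    """Pick one equal-cost uplink by hashing the flow 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

print(ecmp_next_hop("192.0.2.10", "198.51.100.20", 6, 40000, 443))

Keeping a flow on a single path avoids packet reordering; the cost is that a few large "elephant" flows can still load one spine more than the others.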
To learn end-host reachability information, FabricPath switches rely on initial data-plane traffic flooding. Broadcast and unknown unicast traffic in FabricPath is flooded to all FabricPath edge ports in the VLAN or broadcast domain. IP multicast traffic is by default constrained to only those FabricPath edge ports that have either an interested multicast receiver or a multicast router attached, using Internet Group Management Protocol (IGMP) snooping (see the sketch below). The FabricPath spine-and-leaf network also supports Layer 3 multitenancy using Virtual Routing and Forwarding lite (VRF-lite), as shown in Figure 9.

The VXLAN flood-and-learn network suffers the same flooding challenges as a FabricPath spine-and-leaf network as the number of hosts in a broadcast domain increases. With a border spine switch for external routing, the spine switch needs to support VXLAN routing in hardware. VXLAN MP-BGP EVPN supports overlay tenant Layer 2 multicast traffic using underlay IP multicast or the ingress replication feature. Internal and external routed traffic needs to travel one underlay hop from the leaf VTEP to the spine switch to be routed.

Between the aggregation routers and access switches, Spanning Tree Protocol is used to build a loop-free topology for the Layer 2 part of the network.

Because the fabric network is so large, MSDC customers typically use software-based approaches to introduce more automation and more modularity into the network. The automation tools can handle different fabric topologies and form factors, creating a modular solution that can adapt to different-sized data centers.

The key is to choose a standard and follow it. Telecommunication Infrastructure Standard for Data Centers: this standard is more IT cable and network oriented and has various infrastructure redundancy and reliability concepts based on the Uptime Institute's Tier Standard.
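The IGMP snooping constraint described above can be modeled in a few lines of Python (a toy illustration; port names and group addresses are made up): multicast frames are delivered only to edge ports that have reported interest in the group, plus any multicast-router ports, rather than being flooded to every edge port in the broadcast domain.

group_members: dict[str, set[str]] = {}   # group IP -> interested edge ports
mrouter_ports: set[str] = {"uplink1"}     # ports leading to a multicast router

def igmp_report(group: str, port: str) -> None:
    """Record an IGMP membership report heard on an edge port."""
    group_members.setdefault(group, set()).add(port)

def deliver(group: str) -> set[str]:
    """Ports that receive traffic for `group`: members plus mrouter ports."""
    return group_members.get(group, set()) | mrouter_ports

igmp_report("239.1.1.1", "port3")
print(deliver("239.1.1.1"))   # -> {'port3', 'uplink1'} (order may vary), not all ports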
