This page serves as a working document describing a baseline design for vSphere distributed switches based on best practices, or at least my take on current best practices. The design is a baseline scenario and assumes a rack-mount server with six to eight 1 GbE NICs, a configuration that was most popular before the proliferation of 10 GbE networking at the server level. Although this type of configuration can be simulated to varying degrees by blade server network abstraction techniques, 10 GbE and blade server designs will be discussed separately.
Physical and Logical Design
The physical design assumes 1 GbE physical network adapters, network storage (either iSCSI or NFS), and of course the distributed switch. All physical switch ports trunk the required VLANs across each server's physical NICs.
| dSwitch            | VMNIC   | NIC Teaming              | VLAN Tag | NIOC Shares  |
|--------------------|---------|--------------------------|----------|--------------|
| Management-DSwitch | 0,1     | Originating Port ID      | 20       | Unconfigured |
| Management-DSwitch |         | Originating Port ID      | 21       | Unconfigured |
| Management-DSwitch | 1       | Originating Port ID      | 21       | Unconfigured |
| Storage-DSwitch    | 2       | N/A                      | 30       | Unconfigured |
| Storage-DSwitch    | 3       | N/A                      | 30       | Unconfigured |
| VM-DSwitch         | 4,5,6,7 | Load Based Teaming (LBT) | 150      | Unconfigured |
| VM-DSwitch         | 4,5,6,7 | Load Based Teaming (LBT) | 151      | Low          |
This design can easily be scaled to support different requirements, for example allowing iSCSI to consume more than two uplinks while scaling VM traffic down to only two.
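As a rough illustration (not part of the original design), the uplink layout above can be expressed as data and sanity-checked so that no physical NIC is accidentally assigned to more than one distributed switch when the design is rescaled. The structure and function names here are my own, purely hypothetical:

```python
# Hypothetical sketch: the vmnic-to-dSwitch layout from the table above.
# Rescaling the design means moving vmnic numbers between these sets.
UPLINK_LAYOUT = {
    "Management-DSwitch": {0, 1},
    "Storage-DSwitch": {2, 3},
    "VM-DSwitch": {4, 5, 6, 7},
}

def vmnic_conflicts(layout):
    """Return the set of vmnics claimed by more than one distributed switch."""
    seen = {}        # vmnic -> first dSwitch that claimed it
    conflicts = set()
    for dswitch, vmnics in layout.items():
        for nic in vmnics:
            if nic in seen and seen[nic] != dswitch:
                conflicts.add(nic)
            seen[nic] = dswitch
    return conflicts

# The baseline layout is conflict-free.
assert vmnic_conflicts(UPLINK_LAYOUT) == set()
```

For example, shifting vmnics 6 and 7 from VM-DSwitch to Storage-DSwitch (iSCSI consuming four uplinks) can be checked the same way before touching the hosts.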
NFS could be used instead of iSCSI in this design, but for now the focus is on iSCSI. vSphere 5.x uses NFS v3, which has no native multipathing capability. There are other options, the best of them being to simply upgrade to 10 GbE, which is beyond the scope of this design.
Naming Conventions
Naming conventions are important for a manageable environment. Beyond satisfying the need for order and symmetry, of which I am personally a big fan, clear naming conventions streamline management, especially in a multi-administrator environment, and reduce the need to relearn the environment or reference diagrams when making changes, which ultimately leads to fewer mistakes and better reliability.
More important than the specifics of the following examples is having a logical naming convention at all.
Distributed Switch Names
%Function%-DSwitch
The key here is to make it extremely clear what the function of the switch is and to note that it is a distributed switch.
Uplink Port Group Names
%Switch_Name%-Uplinks
This one is straightforward. The only thing of note is that I like to remove the numbers appended to the default uplink names.
Distributed Port Group Names
Distributed port group names will be highly variable depending on purpose, environment, and administrator. The directive is to be descriptive and consistent.
Virtual Machine Port Groups
VM_%Environment%Network-v%VLAN ID%
VMKernel Port Group Names
The key to VMKernel Port Group naming is to keep it simple, consistent and use number series when appropriate.
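The naming patterns above can be sketched as simple template helpers. This is a minimal illustration, not part of the original design, and the function names are my own:

```python
def dswitch_name(function):
    """%Function%-DSwitch"""
    return f"{function}-DSwitch"

def uplink_group_name(switch_name):
    """%Switch_Name%-Uplinks"""
    return f"{switch_name}-Uplinks"

def vm_port_group_name(environment, vlan_id):
    """VM_%Environment%Network-v%VLAN ID%"""
    return f"VM_{environment}Network-v{vlan_id}"

# Examples following the conventions in this section:
assert dswitch_name("Storage") == "Storage-DSwitch"
assert uplink_group_name(dswitch_name("VM")) == "VM-DSwitch-Uplinks"
assert vm_port_group_name("Prod", 150) == "VM_ProdNetwork-v150"
```

Generating names from one place like this, rather than typing them by hand per host, is one way to keep a multi-administrator environment consistent.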
Sections In Progress
- Network IO Control (NIOC)
- iSCSI MPIO Configuration with vDS
- Multiple Distributed Switches vs Single Distributed Switch
- Choosing NIC Teaming Policies
- Distributed Switch Resiliency and Backup