Ten days ago I highlighted a blog post by Microsoft’s Jose Barreto explaining that SMB Multichannel across multiple NICs in a clustered node requires that the NICs be in different subnets. That means:
- You have 2 NICs in each node in the Scale-Out File Server cluster
- Both NICs must be in different subnets
- You must enable both NICs for client access
- There will be 2 NICs in each of the hosts that are also on these subnets, probably dedicated to SMB 3.0 comms, depending on whether/how you do converged fabrics
You can figure out cabling and IP addressing for yourself – if not, you need to not be doing this work!
The question is, what else must you do? Well, SMB Multichannel doesn’t need any configuration to work. Pop the NICs into the Hyper-V hosts and away you go. On the SOFS cluster, there’s a little bit more work.
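On the Hyper-V host side you can verify that Multichannel is active without any setup. A quick sketch, using the SmbShare cmdlets that shipped in Windows Server 2012 (run this on a Hyper-V host after some SMB traffic has flowed):

```powershell
# Confirm SMB Multichannel is enabled on the client (it is by default)
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# List the client NICs that SMB considers usable for Multichannel
Get-SmbClientNetworkInterface

# Show the active Multichannel connections to the file server -
# you should see connections across both subnets
Get-SmbMultichannelConnection
```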
After you create the SOFS cluster, you need to make sure that client communication is enabled on both of the NICs on subnet 1 and subnet 2 (as above). This allows the Hyper-V hosts to talk to the SOFS across both NICs (the green NICs in the diagram) in the SOFS cluster nodes. You can see this setting below. In my demo lab, my second subnet is not routed and it wasn’t available to configure when I created the SOFS cluster.
You’ll get a warning that you need to enable a Client Access Point (with an IP address) for the cluster to accept communications on this network. Damned if I’ve found a way to do that. I don’t think it’s necessary to do that additional step in the case of an SOFS, as you’ll see in a moment. I’ll try to confirm that with MSFT. Ignore the warning and continue. My cluster (uses iSCSI because I don’t have a JBOD) looks like:
You can see ManagementOS1 and ManagementOS2 (on different subnets) are Enabled, meaning that I’ve allowed clients to connect through both networks. ManagementOS1 has the default CAP (configured when the cluster was created).
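If the second network wasn’t available in the wizard (as in my lab), you can enable client access on it afterwards with the FailoverClusters PowerShell module. A sketch, assuming the cluster network names from my lab ("ManagementOS1"/"ManagementOS2") – substitute your own:

```powershell
Import-Module FailoverClusters

# Show the current role of each cluster network
# (0 = none, 1 = cluster only, 3 = cluster and client)
Get-ClusterNetwork | Format-Table Name, Role, Address

# Enable cluster-and-client communication on the second network
(Get-ClusterNetwork "ManagementOS2").Role = 3
```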
Next I created the file server for application data role (aka the SOFS). Over in AD we find a computer object for the SOFS, and we should see that 4 IP addresses have been registered in DNS. Note how the SOFS role uses the IP addresses of the SOFS cluster nodes (demo-fs1 and demo-fs2). You can also see the DNS records for my 2 hosts (on 2 subnets) here.
If you don’t see 2 IP addresses for each SOFS node registered with the SOFS name (as above – 2 addresses * 2 nodes = 4) then double-check that you have enabled client communications across both cluster networks for the NICs on the SOFS cluster nodes (as previous).
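You can do that DNS check from PowerShell too. A minimal sketch, assuming a hypothetical SOFS client access point name of "demo-sofs" – use whatever name you gave your SOFS role:

```powershell
# Query the A records registered for the SOFS name
$records = Resolve-DnsName -Name demo-sofs -Type A
$records | Format-Table Name, IPAddress

# With 2 enabled networks and 2 nodes, you should see 4 addresses
if (($records | Measure-Object).Count -lt 4) {
    Write-Warning "Fewer than 4 A records; check client access on both cluster networks"
}
```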
Now we should be all ready to rock and roll.
In my newly modified demo lab, I run this with the hosts clustered (to show new cluster Live Migration features) and not clustered (to show Live Migration with SMB storage). The eagle-eyed will notice that my demo Hyper-V hosts don’t have dedicated NICs for SMB comms. In the real world, I’d probably have dedicated NICs for SMB 3.0 comms on the Hyper-V hosts. They’d be on the 2 subnets that have been referred to in this post.
This blog post is the property of Aidan Finn (@joe_elway / http://www.aidanfinn.com) and may not be reused in any manner without prior consent of Aidan Finn. You may quote one paragraph from this blog post if you link to the original blog post.
- Rough Guide To Setting Up A Scale-Out File Server
- When To Use And When NOT To Use A Scale-Out File Server
- Scale-Out File Server Role Fails To Start With Event IDs 1205, 1069, and 1194
- KB281662 – How To Use Windows Server Cluster Nodes As Domain Controllers
- Very Important Note on Multichannel & Failover Clusters