2012
05.31

Hold on tight folks!

Max running vCPUs per host in Windows Server 2008 R2 was 512. In the Windows Server “8” Beta it was increased to 1024.  In the Windows Server 2012 RC, it’s doubled to 2048.

So … big massive VMs with 64 vCPUs and 1 TB RAM.  Hosts with up to 320 logical processors and 4 TB RAM.  And 2048 running vCPUs in a host.

You gotta know that Tad prefers smaller hosts, ones with 32 GB of RAM that are just perfect for VMLimitedSphere Standard Edition and its memory vTax.  Meanwhile anyone with the effectively free Hyper-V can grow to these limits all they want without being penalised.

2012
05.31

I just posted the new maximum specs for Windows Server 2012 Release Candidate VMs.  Now for the host:

  • 320 logical processors
  • 4 TB RAM

That’s 320 cores with no hyperthreading, or 160 cores with hyperthreading turned on.  That’s 10 * 16-core processors with hyperthreading.  Feck!

And that’s 128 * 32 GB DIMMs!!! Damn, I bet there’s a lot of skidmarks in VMware marketing right now.

Wish I was there.

2012
05.31

In Windows Server 2008 R2 it was:

  • 4 vCPUs
  • 64 GB RAM

In the Windows Server “8” Developer Preview it was:

  • 32 vCPUs
  • 512 GB RAM

In Windows Server “8” Beta people gasped when it jumped to:

  • 32 vCPUs
  • 1 TB RAM

And now I can finally say that VMware will shit their pants when they read that Windows Server 2012 Release Preview VMs will support:

  • 64 vCPUs
  • 1 TB RAM

VMware vSphere 5.0 supports a max of 32 vCPUs and 1 TB RAM.  Throw in the 64 TB VHDX (compared to 2 TB VMDK) and MSFT has VMware beat on scalability.

Hyper-V Replica for free, Network Virtualisation, SR-IOV, SMB 3.0 transparent failover storage, Shared Nothing Live Migration, PowerShell, Storage Migration … how does VMware compete in a few months’ time when vSphere 5.0 becomes the product that is feature chasing and is way more expensive?

Anyone remember Novell?

Credit to Hans Vredevoort for finding the announcement.

2012
05.31

It is on TechNet and MSDN.  I don’t see it on the Build Connect site yet for the Slate PC upgrade.

2012
05.31

The final pre-RTM versions of Windows Server 2012 are available now.  Get ‘em while they’re hot!

2012
05.31

Note: This post was originally written using the Windows Server “8” (aka 2012) Beta.  The PowerShell cmdlets have changed in the Release Candidate and this code has been corrected to suit it.

After the posts of the last few weeks, I thought I’d share a script that I am using to build converged fabric hosts in the lab.  Some notes:

  1. You have installed Windows Server 2012 on the machine.
  2. You are either on the console or using something like iLO/DRAC to get KVM access.
  3. All NICs on the host will be used for the converged fabric.  You can tweak this.
  4. The virtual switch is created with AllowManagementOS 0, so it will not automatically create a virtual NIC in the management OS (parent partition or host OS); the script adds those virtual NICs explicitly.
  5. You will make a different copy of the script for each host in the cluster to change the IPs.
  6. You could strip out all but the Host-Parent NIC to create a converged fabric for a standalone host with 2 or 4 * 1 GbE NICs.

And finally … MSFT has not published best practices yet.  This is still a pre-release build.  Please verify that you are following best practices before you use this script.
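
By the way, the script below assumes the NIC team (named ConvergedNetTeam) already exists.  If it doesn’t, something along these lines should create it – a sketch only, and the physical NIC names are examples that will differ on your hardware:

# Create the team that the converged fabric switch will sit on
New-NetLbfoTeam -Name "ConvergedNetTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort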

OK…. here we go.  Watch out for the line breaks if you copy & paste:

write-host "Creating virtual switch with QoS enabled"
New-VMSwitch "ConvergedNetSwitch" -MinimumBandwidthMode Weight -NetAdapterName "ConvergedNetTeam" -AllowManagementOS 0

write-host "Setting default QoS policy"
Set-VMSwitch "ConvergedNetSwitch" -DefaultFlowMinimumBandwidthWeight 10

write-host "Creating virtual NICs for the management OS"
Add-VMNetworkAdapter -ManagementOS -Name "Host-Parent" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-Parent" -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "Host-Cluster" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-Cluster" -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "Host-LiveMigration" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-LiveMigration" -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI1" -SwitchName "ConvergedNetSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI1" -MinimumBandwidthWeight 10

#Add-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI2" -SwitchName "ConvergedNetSwitch"
#Set-VMNetworkAdapter -ManagementOS -Name "Host-iSCSI2" -MinimumBandwidthWeight 15

write-host "Waiting 30 seconds for virtual devices to initialise"
Start-Sleep -s 30

write-host "Configuring IPv4 addresses for the management OS virtual NICs"
New-NetIPAddress -InterfaceAlias "vEthernet (Host-Parent)" -IPAddress 192.168.1.51 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Host-Parent)" -ServerAddresses "192.168.1.40"

New-NetIPAddress -InterfaceAlias "vEthernet (Host-Cluster)" -IPAddress 172.16.1.1 -PrefixLength 24

New-NetIPAddress -InterfaceAlias "vEthernet (Host-LiveMigration)" -IPAddress 172.16.2.1 -PrefixLength 24

New-NetIPAddress -InterfaceAlias "vEthernet (Host-iSCSI1)" -IPAddress 10.0.1.55 -PrefixLength 24

#New-NetIPAddress -InterfaceAlias "vEthernet (Host-iSCSI2)" -IPAddress 10.0.1.56 -PrefixLength 24

That will set up the following architecture:

[diagram: the converged fabric architecture built by the script]

QoS is set up as follows:

  • The default (unspecified links) is 10% minimum
  • Parent: 10%
  • Cluster: 10%
  • Live Migration: 20%

My lab has a single VLAN network.  In production, you should have VLANs and trunk the physical switch ports.  Then (I believe), you’ll need to add a line for each virtual NIC in the management OS (host) to specify the right VLAN (I’ve not tested this line yet on the RC release of WS2012 – watch out for the VMNetworkAdapterName parameter):

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Host-Parent" -Trunk -AllowedVlanIdList 101 -NativeVlanId 0

Now you have all the cluster connections you need, with NIC teaming, using maybe 2 * 10 GbE, 4 * 1 GbE, or maybe even 4 * 10 GbE if you’re lucky.

2012
05.30

Assuming that you converge all fabrics (including iSCSI and that may require DCB for NICs and physical switches) then my recent work in the lab has found me another reason to like converged fabrics, beyond using fewer NICs.

If I am binding roles (parent, live migration, etc) to physical NICs then any host networking configuration scripts that I write must determine what NIC is correct.  That would not be easy and would be subject to human cabling error, especially if hardware configurations change.

If however, I bind all my NICs into a team, and then build a converged fabric on that team, I have completely abstracted the physical networks from the logical connections.  Virtual management OS NICs and trunking/VLAN bindings mean I don’t care any more … I just need 2 or 4 NICs in my team and connected to my switch.

Now that physical bindings don’t matter, I have simplified my configuration and I can script my deployments and configuration to my heart’s content!

The only question that remains … do I really converge my iSCSI connections?  More to come …

2012
05.30

A man’s mind can wander when he’s driving for endless hours over boring featureless motorways.  So I got to thinking: how would I put together a Windows 8/Server 2012 launch event?

Even before things kick off, I’d cozy up to the press.  That’ll be two streams of work.  The tech press are more aware and will have different questions.  The ordinary press need lots of love – many of them only report tech news around the time of Apple launches.  Get them onside, start setting the agenda, and get the public aware that something is coming.  Use the press to get news of the public launch details out.  Then on to the actual launch events.

There will be one multi-track launch event in every country, followed by smaller single track events on a regional basis.

The big event: To get things warmed up you need some music.  Not some of that custom bought “Kenny G would cringe” muzak that’s usually used right before an opening.  No, I want Bat Out Of Hell, and I want it LOUD … louder than everything else.

Then on to the keynote.  This has to be the most senior MSFT person in the country.  For a global launch, it has to be Ballmer.  In regional launch events, it has to be the local MD.  People need to see how important this thing is.  It’s not “just another version of Windows”. 

Then bring out the BGs or whatever for Windows 8 and Windows Server & Tools to talk about their respective products.  This will be level 100 intro stuff, to highlight important features and why people should be excited.  Important to remember that 99.99% of people don’t stay informed like us nerds.

Then the event breaks into 3 tracks.  Each track has an associated sponsor who gets their own time slot.  Each track has multiple sessions, with maybe 5/10 minute breaks to allow people to move around.  The tracks are:

  • Consumer – with a consumer related sponsor (preferably some large retail chain that’ll be selling Windows 8 devices on the GA day)
  • Business – this is the IT pro and decision maker track.  There are sessions on Windows Server, Windows 8, and the “better together” story.
  • Developer – What good is an OS without apps, and devs will need to be educated about the new app model and the Store.

Regional events would also need to be done.  It would be impossible to do the entire multi-track thing for them, so I’d go with a “best of” road trip, with maybe a single sponsor.  I liked how MSFT Ireland did the Windows 7/Server 2008 R2 road show: business side of things in the day, and consumer in the evening.

That’s my five cents and the ramblings of a person who has spent too many hours on the M1/M4/M6/N20/M8/M7.

2012
05.29

My lesson from the lab is this …  If you are implementing WS2012 Hyper-V hosts with converged fabrics then you need to realise that all of your NICs for RDP access will be committed to the NIC team and Hyper-V switch.  That means that while implementing or troubleshooting the switch and converged fabrics you will need some alternative to RDP/Remote Desktop.  And I’m not talking VNC/TeamViewer/etc.

In my lab, I have LOTS of spare NICs.  That won’t be true of a field implementation.  I temporarily fired up an “RDP” NIC, configured my team, switch, and a virtual NIC for the Parent.  Then I RDPd into the Parent virtual NIC and disabled the “RDP” NIC.

In the field, I strongly advise using the baseboard management controller (BMC) to remotely log into the host while implementing, re-configuring or troubleshooting the converged fabrics setup.  Why?  Because you’ll be constantly interrupted if relying on RDP into one of the converged or virtual NICs.  You may even find NICs switching from static to DHCP addressing and it’ll take time to figure out what their new IPs are.

You’ll be saving money by converging fabrics.  Go ahead and cough up the few extra quid to get a BMC such as Dell DRAC or HP iLO fully configured and onto the network so you can reliably log into the server.  Plus it gives you other features like power control, remote OS installation, and so on.

2012
05.28

I’ve since posted a more complete script for a Hyper-V cluster that’s using SMB 3.0 storage.

I am creating and destroying Hyper-V clusters like crazy in the lab at the moment.  And that means I need to script; I don’t want to waste valuable time repeating the same thing over and over in the GUI.  Assuming your networking is completed (more to come on scripting that!) and your disk is provisioned/formatted, then the following script will build a cluster for you:

New-Cluster -Name demo-hvc1 -StaticAddress 192.168.1.61 -Node demo-host1, demo-host2

Get-ClusterResource | Where-Object {$_.OwnerGroup -eq "Available Storage"} | Add-ClusterSharedVolume

(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

Get-ClusterSharedVolume * | Set-ClusterParameter CsvEnableBlockCache 1

Get-ClusterSharedVolume | Stop-ClusterResource

Get-ClusterSharedVolume | Start-ClusterResource

What does the script do?

  1. It creates a new cluster called demo-hvc1 with an IP address of 192.168.1.61, using demo-host1 and demo-host2 as the nodes.
  2. It finds all available disks and converts them to CSV volumes.
  3. Then it configures the CSV cache to use 512 MB of RAM.
  4. Every CSV is configured to use the CSV cache.
  5. The CSVs are stopped.
  6. The CSVs are restarted so they can avail of the CSV cache.

The script doesn’t do a validation.  My setup is pretty static so no validation is required.  BTW, for the VMLimited fanboys out there who moan about time to deploy Hyper-V, my process (networking included) builds the cluster in probably around 30-40 seconds.
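
If you do want to run validation (recommended outside of a static lab), one extra line before New-Cluster covers it – the node names below just follow the example above:

Test-Cluster -Node demo-host1, demo-host2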

2012
05.28

We continue further down the road of understanding converged fabrics in WS2012 Hyper-V.  The following diagram illustrates a possible design goal:

[diagram: converged fabric design for a clustered host with two teamed 10 GbE NICs]

Let’s go through the diagram of this clustered Windows Server 2012 Hyper-V host:

  • In case you’re wondering, this example is using SAS or FC attached storage so it doesn’t require Ethernet NICs for iSCSI.  Don’t worry iSCSI fans – I’ll come to that topic in another post.
  • There are two 10 GbE NICs in a NIC team.  We covered that already.
  • There is a Hyper-V Extensible Switch that is connected to the NIC team.  OK.
  • Two VMs are connected to the virtual switch.  Nothing unexpected there!
  • Huh!  The host, or the parent partition, has 3 NICs for cluster communications/CSV, management, and live migration.  But … they’re connected to the Hyper-V Extensible Switch?!?!?  That’s new!  They used to require physical NICs.

In Windows Server 2008 a host with this storage would require the following NICs as a minimum:

  • Parent (Management)
  • VM (for the Virtual Network, prior to the Virtual Switch)
  • Cluster Communications/CSV
  • Live Migration

That’s 4 NICs per host … and that’s before NIC teaming, which doubles the count.  All that accumulation of NICs wasn’t a matter of bandwidth.  What we really care about in clustering is quality of service: bandwidth when we need it and low latency.  Converged fabrics assume we can guarantee those things.  If we have those SLA features available to us (more in later posts) then 2 * 10 GbE physical NICs in each clustered host might be enough, depending on the business and technology requirements of the site.

The number of NICs goes up.  The number of switch ports goes up.  The wasted rack space cost goes up.  The power bill for all that goes up.  The support cost for your network goes up.  In truth, the complexity goes up.

NICs aren’t important.  Quality communications channels are important.

In this WS2012 converged fabrics design, we can create virtual NICs that attach to the Virtual Switch.  That’s done by using the Add-VMNetworkAdapter PowerShell cmdlet, for example:

Add-VMNetworkAdapter -ManagementOS -Name "Manage" -SwitchName External1

… where Manage will be the name of the new NIC and External1 is the name of the Virtual Switch.  The -ManagementOS parameter tells the cmdlet that the new vNIC is for the parent partition (the host OS).
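
A quick sanity check after adding vNICs – this lists the virtual NICs that now exist in the management OS:

Get-VMNetworkAdapter -ManagementOS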

You can then configure that virtual NIC just as you would a physical one: give it a minimum bandwidth weight, bind it to a VLAN, and assign it an IP configuration.

I think configuring the VLAN binding of these NICs with port trunking (or whatever) would be the right way to go with this.  That will further isolate the traffic on the physical network.  Please bear in mind that we’re still in the beta days and I haven’t had a chance to try this architecture yet.

Armed with this knowledge and these cmdlets, we can now create all the NICs we need that connect to our converged physical fabrics.  Next we need to look at securing and guaranteeing quality levels of communications.

2012
05.25

Before we look at this new networking feature of W2012 Hyper-V, let’s look at what we have been using in Windows Server 2008/R2.  Right now, if you create a VM, you give it one or more virtual network cards (vNICs).  Each vNIC is connected to a virtual network (basically a virtual unmanaged switch) and each virtual network is connected to one physical NIC (pNIC) or NIC team in the host.  Time for a visual:

[diagram: W2008 R2 virtual network connecting VMs to a physical NIC]

Think about a typical physical rack server for a moment.  When you connect it to a switch the port is a property of the switch, right?  You can configure properties for that switch port like QoS, VLANs, etc.  But if you move that server to another location, you need to configure a new switch port.  That’s messy and time consuming.

In the above example, there is a switch port.  But Microsoft anticipated the VM mobility and port configuration issue.  Instead of the port being a property of the virtual network, it’s actually a property of the VM.  Move the VM and you move the port, and with it the port settings.  That’s clever; configure the switch port once and now it’s a matter of “where do you want your workload to run today?” with no configuration issues.

OK, now let’s do a few things:

  • Stop calling it a virtual network and now call it a virtual switch.
  • Now you have a manageable layer 2 network device.
  • Introduce lots of new features for configuring ports and doing troubleshooting.
  • Add certified 3rd-party extensibility.

We have different kinds of Virtual Switch like we did before:

  • External – connected to a pNIC or NIC team in the host to allow VM comms on the physical network.
  • Internal – Allows VMs to talk to each other on the virtual switch and with the host parent partition.
  • Private – An isolated network where VMs can talk to each other on the same virtual switch.
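
For reference, here’s a rough PowerShell sketch of creating each kind of switch – the switch and NIC names are only examples:

New-VMSwitch -Name "External1" -NetAdapterName "Ethernet 1" -AllowManagementOS $true
New-VMSwitch -Name "Internal1" -SwitchType Internal
New-VMSwitch -Name "Private1" -SwitchType Private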

Although I’m focusing on the converged fabric side of things at the moment, the extensibility is significant.  Companies like Cisco, NEC, 5nine, and others have announced how they are adding functionality.  NEC are adding their switch technology, 5nine are adding a virtual firewall, and Cisco have SR-IOV functionality and a Cisco Nexus 1000v that pretty much turns the Hyper-V Switch into a Cisco switch with all the manageability from their console.  The subject of extensibility is a whole other set of posts.

With a virtual switch I can do something as basic as this:

[diagram: a basic virtual switch on a single physical NIC]

It should look kind of familiar :)  I’ve already posted about NIC teaming in Windows Server 2012.  Let’s add a team!

[diagram: a virtual switch connected to a NIC team]

With the above configuration, the VMs are now connected to both the NICs in the host.  If one NIC dies, the team fails over and the VMs talk through the other NIC.  Depending on your load distribution setting, your VMs may even use the aggregate bandwidth, e.g. 2 * 10 GbE to get 20 Gbps.

With NIC teaming, we have converged two NICs and used a single pipe for VM communications.  We haven’t converged any fabrics just yet.  There’s a lot more stuff with policies and connections that we can do with the Virtual Switch.  There will be more posts on those topics soon, helping us get to the point where we can look at converging fabrics.

2012
05.24

Every time Microsoft gave us a new version of Hyper-V (including W2008 R2 SP1) we got more features, bringing the solution closer in functionality to the competition.  With the current W2008 R2 SP1 release, I reckon that we have a solution that is superior to most vSphere deployments (think of the features that are actually licensed or used).  Every objection, one after the next, was knocked down: Live Migration, CSV, Dynamic Memory, and so on.  The last objection was NIC teaming … VMware had it but Microsoft didn’t have a supported solution.

True, MSFT hasn’t had NIC teaming and there’s a KB article which says they don’t support it.  NIC teaming is something that the likes of HP, Dell, Intel and Broadcom provided using their software.  If you had a problem, MSFT might ask you to remove it.  And guess what: just about every networking issue I’ve heard of on Hyper-V was driver or NIC teaming related.

As a result, I’ve always recommended against NIC teaming using OEM software.

We want NIC teaming!  That was the cry … every time, every event, every month.  And the usual response from Microsoft is “we heard you but we don’t talk about futures”.  Then Build came along in 2011, and they announced that NIC teaming would be included in W2012 and fully supported for Hyper-V and Failover Clustering.


NIC teaming gives us LBFO: load balancing and failover.  In other words, we can aggregate the bandwidth of NICs and have automatic failover between NICs.  If I had 2 * 10 GbE NICs then I could team them to have a single pipe of 20 Gbps when both NICs are working and connected.  For failover we typically connect the NICs to ports on different access switches.  The result is that if one switch fails, its NIC becomes disconnected, but the other NIC stays connected and the team stays up and running, leaving the dependent services available to the network and their clients.

Here’s a few facts about W2012 NIC teaming:

  • We can connect up to 32 NICs in a single team.  That’s a lot of bandwidth!
  • NICs in a single team can be different models from the same manufacturer or even NICs from different manufacturers.  Seeing as drivers can be troublesome, maybe you want to mix Intel and Broadcom NICs in a team for extreme uptime.  Then a dodgy driver has a lesser chance of bringing down your services.
  • There are multiple teaming modes for a team.  Generic/Static Teaming requires the switches to be configured for the team and isn’t dynamic.  LACP is self-discovering and enables dynamic expansion and reduction of the NICs in the team.  Switch Independent requires no switch configuration – the switches have no knowledge of the team, so the team members can even be connected to different switches.
  • There are two hashing algorithms for traffic distribution in the NIC team.  With Hyper-V switch port mode, a VM’s traffic is limited to a single NIC.  On lightly loaded hosts, this might not distribute the network load across the team.  Apparently it can work well on heavily loaded hosts with VMQ enabled.  Address hashing uses a hashing algorithm to spread the load across NICs.  There is 4-tuple hashing (great distribution) but it doesn’t work with “hidden” protocols such as IPsec, and it falls back to 2-tuple hashing.

    NIC teaming is easy to set up.  You can use Server Manager (under Local Server) to create a team.  This GUI is similar to what I’ve seen from OEMs in the past. 

[screenshot: creating a NIC team in Server Manager]

You can also use PowerShell cmdlets such as New-NetLbfoTeam and Set-NetLbfoTeam.
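
For example, a sketch of creating a two-member LACP team with address hashing – the team and NIC names are examples only:

New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts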

One of the cool things about a NIC team is that, just like with the OEM versions, you can create virtual networks/connections (team interfaces) on a team.  Each of those connections has its own IP stack, its own policies, and its own VLAN binding.

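A team interface bound to a VLAN can be added like this (again, a sketch – the team name and VLAN ID are examples):

Add-NetLbfoTeamNic -Team "Team1" -VlanID 101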

    In the Hyper-V world, we can use NIC teams to do LBFO for important connections.  We can also use it for creating converged fabrics.  For example, I can take a 2U server with 2 * 10 GbE connections and use that team for all traffic.  I will need some more control … but that’s another blog post.

    2012
    05.23

    I just saw this tweet by Damian Flynn, regarding the book Microsoft Private Cloud Computing (Sybex, 2012):

    #MsftPrivateCloud And it is done, that final edits have being submitted and the printer takes ownership tomorrow!

Hans, Patrick and Damian did an incredible amount of work on this book.  In fact, Damian went the extra mile *twice* (or was it three times? :)) to make sure the reader got the very best and latest information on this solution (it’s hard writing a book on something before it RTMs).  Gentlemen, I salute you!

[image: Microsoft Private Cloud Computing book cover]

    Amazon has a date of July 3rd posted.  That’s not always accurate.  And yes, there will be ebook versions, such as Kindle.  Don’t ask me when – you’ll know before I do.

    2012
    05.23

    DCB is a feature that is new to Windows Server 2012 networking and we can take advantage of this in creating converged fabrics in Hyper-V, private and public clouds.  According to Microsoft:

    IEEE 802.1 Data Center Bridging (DCB) is a collection of standards that defines a unified 802.3 Ethernet media interface, or fabric, for local area network (LAN) and storage area network (SAN) technologies. DCB extends the current 802.1 bridging specification to support the coexistence of LAN-based and SAN-based applications over the same networking fabric within a data center. DCB also supports technologies, such as Fibre Channel over Ethernet (FCoE) and iSCSI, by defining link-level policies that prevent packet loss.

    According to Wikipedia:

    Specifically, DCB goals are, for selected traffic, to eliminate loss due to queue overflow and to be able to allocate bandwidth on links. Essentially, DCB enables, to some extent, the treatment of different priorities as if they were different pipes. The primary motivation was the sensitivity of Fibre Channel over Ethernet to frame loss. The higher level goal is to use a single set of Ethernet physical devices or adapters for computers to talk to a Storage Area Network, Local Area network and InfiniBand fabric.

    Long story short: DCB is a set of Ethernet standards that leverage special functionality in a NIC to allow us to converge mixed classes of traffic onto that NIC such as SAN and LAN, which we would normally keep isolated.  If your host’s NIC has DCB functionality then W2012 can take advantage of it to converge your fabrics.
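
To give a flavour of it, here’s a rough sketch using the Windows QoS cmdlets to tag and reserve bandwidth for iSCSI traffic on a DCB-capable NIC – the priority, percentage and NIC name are illustrative only, not a recommendation:

# Tag iSCSI traffic with 802.1p priority 4
New-NetQosPolicy -Name "iSCSI" -iSCSI -PriorityValue8021Action 4
# Reserve 40% of the link for that priority using ETS
New-NetQosTrafficClass -Name "iSCSI" -Priority 4 -BandwidthPercentage 40 -Algorithm ETS
# Enable priority flow control for that priority and DCB on the NIC
Enable-NetQosFlowControl -Priority 4
Enable-NetAdapterQos -Name "NIC1"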


    2012
    05.22

    I think my demo at the Windows Server 2012 Rocks events is cool but I have bigger ambitions …

Imagine that Hyper-V Replica has replicated from the private cloud to the public cloud.  Using Kinect for Windows, I select my VM, move my hand through the air, and cause it to Live Migrate from private to public, with the storage migration leveraging the Hyper-V Replica content in the DR site.

Credit to Dave Northey (MSFT IE DPE) for the Replica concept which he dreamed up this morning over coffee.  Maybe we’ll get it, and admin by Kinect, in vNext, vNext+1 or vNext+2 :)

    2012
    05.22

If you wanted to build a clustered Windows Server 2008 R2 host, how many NICs would you need?  With iSCSI, the answer would be 6 – and that’s without any NIC teaming for the parent, cluster, or VM comms.  That’s a lot of NICs.  Adding 4 ports to a host is going to cost hundreds of euros/dollars/pounds/etc.  But the real cost is in the physical network.  All those switch ports add up: you double the number of switches for NIC teaming, those things aren’t free, and they suck up power too.  We’re all about consolidation when we do virtualisation.

    Why do we have all those NICs in a W2008 R2 Hyper-V cluster?  The primary driver isn’t bandwidth.  The primary reason is to guarantee a level of service. 

    What if we had servers that came with 2 * 10 GbE NICs?  What if they could support not only 256 GB RAM, but 768 GB RAM?  That’s the kind of spec that Dell and HP are shipping now with their R720 and HP DL380 Gen8.  What if we had VM loads to justify these servers, then we needed 10 GbE for the Live Migration and backup loads?  What if there was a way to implement these servers with fewer network ports, that could take advantage of the cumulative 20 Gbps of bandwidth but with a guaranteed level of service?  Windows Server 2012 can do that!

    My goal with the next few posts is to describe the technologies that allow us to converge fabrics and use fewer network interfaces and switch ports.  Fabrics, what are they?  Fabric is a cloud term … you have a compute cluster (the hosts), a storage fabric (the storage area network, e.g. iSCSI or SMB 3.0), and fabrics for management, backup, VM networking and so on.  By converging fabrics, we use fewer NICs and fewer switch ports.

    There is no one right design.  In fact, at Build, the presenters showed lots of designs.  In recent weeks and months, MSFT bloggers have even shown a number of designs.  Where there was a “single” right way to do things in W2008 R2/SP1, there are a number of ways in W2012.  W2012 gives us options, and options are good.  It’s all a matter of trading off on tech requirements, business requirements, complexity, and budget.

    Watch out for the posts in the coming days.

    2012
    05.21

If you were to wander down to ZDNet today, you’d be in for a surprise.  There, on Mary Jo Foley’s All About Microsoft blog, you’ll find a guest article by me, talking about Windows Server 2012 Hyper-V Replica (HVR).

Mary Jo is on vacation and, when planning for it, she asked a few people to write guest articles for her absence.  You may have noticed that I’m a HVR fan, so I suggested this topic.  I wrote the post, Ben Armstrong (aka The Virtual PC Guy) was kind enough to check my work, and I submitted it to Mary Jo.

    Two other posts that I’ve written on the subject might interest you:

    • One from last year from when we didn’t have the techie details where I look at different scenarios.
    • And a post I wrote after the release of the beta when we MVPs were cleared to talk about the techie details.
    • And of course, don’t forget the guest post I did for Mary Jo.

    Thanks to Ben for checking my article, and thanks to Mary Jo for the chance to post on her blog!

    2012
    05.20

In the USA, it seems that if you subscribe to Netflix then you probably also buy a Roku.  I knew about the devices from a few years ago, when a friend introduced me to Netflix and Roku while I was visiting him in NC, USA.  Netflix came to Ireland early this year and, thanks to my employers (a distributor), Roku is now available in Irish retail outlets too.

    I made sure to put my name down for one once they came into stock.  That was a few weeks ago but I’ve been out of the office for a while.  I finally got my one on Friday and set it up that night.

    Here in Ireland (and the UK) the Roku comes in two models, the LT and the higher spec 2 XS.  I went for the latter model.

The device is tiny, about 3 inches square and about 1 inch tall, taking up no space at all under the TV, and is totally silent.  It has an HDMI output and a composite output.  There is a USB port and a micro SD slot.  It can use wifi or a classic wired network connection (always preferred for streaming media).

    Setting it up was easy:

    • Cable it up – power and TV connection (HDMI for me)
    • Configure the wifi connection
    • Allow the automatic software update & reboot
    • Set the time zone
    • Log into http://roku.com/link with an activation code
    • Create Roku account and activate the device
    • Create a payment method for any future purchases, just like with iTunes
    • Select apps/channels, e.g. free Netflix or TWiT
    • The Roku downloads apps automatically right there

    At that point the machine is ready to rock and roll.  The Roku is a great way to watch Netflix on your TV.  I went into the settings and configured it for 1080p instead of the default 720p.  Then I fired up the Netflix channel, logged in (required once only) and started browsing and watching.  I also tried out the TWiT channel and started watching an archived episode of Windows Weekly.

The other big reason to have a device like a Roku is to play media.  Apparently you can do this with USB, and I guess the micro SD card.  But I prefer to use the network for this.  I keep content on my Windows Home Server.  I was told that a free download called Plex could be installed on a Windows machine, so that’s what I did, turning my WHS into a Plex media server.  The Plex server is configured using a web portal, where you can add channels for TV, Movies, and Music, pointing to the folders that contain the content.  I browsed the available channels on the Roku and installed the Plex client (channel).  When I started it, it automatically discovered my WHS.  I browsed my content and found that Plex had also downloaded metadata for some content from the web, making it easier to browse.

    The Roku is a nice device.  The lower end model is pretty cheap, making it one of those things that you could quite happily pick up without a big decision.  I’m liking it so far.

    2012
    05.18

    On Monday I’ll be in Belfast and on Tuesday I’ll be in Dublin presenting at the Windows Server 2012 Rocks community events.  My topics for next week are Hyper-V and Networking.  Assuming the Internet connectivity works, I’ve got a very cool demo to show of some of the capabilities of Windows Server 2012 featuring:

    • Some of the great open source work by Microsoft
    • PowerShell scripting
    • New networking features
    • Virtualisation mobility

Not to mention a bunch of other demos all pushing the HP ProLiant lab that I have at work.  The other demos are canned … experience has taught me that I can’t rely on hotel Internet … but this one is not recorded, just so I can have something special for a live “will it break?” demo.

    If you’ve registered (click on the event to register), then don’t miss out.  And if you haven’t registered yet, then what are you waiting for?

    EDIT:

The demo won’t break :)

    2012
    05.18

Microsoft recently announced that support for FreeBSD 8.2 and 8.3 as a guest operating system (VOSE) will be coming to Hyper-V.  This is being accomplished with the help of development partners NetApp and Citrix, and the FreeBSD community.

    Soon the list of non-Microsoft operating systems that are supported (not only work, but have been tested and you can call for assistance with) will be:

    • FreeBSD 8.2
    • FreeBSD 8.3
    • CentOS 5.2
    • CentOS 5.3
    • CentOS 5.4
    • CentOS 5.5
    • CentOS 5.6
    • Red Hat Enterprise Linux 5.5
    • Red Hat Enterprise Linux 5.6
    • Red Hat Enterprise Linux 5.4
    • Red Hat Enterprise Linux 5.3
    • Red Hat Enterprise Linux 5.2
    • SUSE Linux Enterprise Server 11 with Service Pack 1
    • SUSE Linux Enterprise Server 10 with Service Pack 4

    In addition to this, the Hyper-V integration components are included in Linux Kernel 3.3 and later, and Ubuntu 12.04 runs natively without any work from you on Hyper-V.  I’ve got it running in my lab and can use it just like other guest OSs, e.g. run a clean shutdown from the Hyper-V Manager console.
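
For example, a clean shutdown of that guest from PowerShell (the VM name is just an example) is simply:

Stop-VM -Name "Ubuntu1204"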

    2012
    05.18

Microsoft has released an optional hotfix (hotfixes are never in Windows Update/WSUS/etc) for a scenario where a virtual machine is incorrectly restored in the Saved state on a Hyper-V server that is running Windows Server 2008 R2:

    Consider the following scenario:

    • You have two Hyper-V servers that are running Windows Server 2008 R2.
    • On one Hyper-V server, you perform a redirected restore operation to restore a Hyper-V virtual machine that is located on the other Hyper-V server.
    • The Hyper-V Integration component that is installed on the guest operating system is incompatible with the target Hyper-V server.

    In this scenario, the virtual machine is restored in the Saved state. Additionally, you must delete the saved state file before you use this virtual machine.

This issue occurs because the Volume Shadow Copy Service backup requester copies a corrupted .vsv file during the restore operation.

    A supported hotfix is available from Microsoft.

    2012
    05.17

    I mentioned a little while ago that there was going to be a community event in Belfast and Dublin next week (still some places left so register now if you are interested in learning about Windows Server 2012 and want to attend).   I want to be sure that you also know that the show is coming to London (June 14th) and Edinburgh (June 15th).

    The following topics will be presented by MVPs (including me):

    Manageability

    • Simplifies configuration processes
    • Improved management of multi-server environments
    • Role-centric dashboard and integrated console
    • Simplifies administration process of multi-server environments with Windows PowerShell 3.0

Virtualization – I’m doing this one :)  I’m trying to put the final pieces together for a very cool PowerShell demo. Even without this, I have some cool demos ready.

    • More secure multi-tenancy
    • Flexible infrastructure, when and where you need it
    • Scale, performance, and density
    • High availability

    Storage and Availability

    • Reduces planned maintenance downtime
    • Addresses the causes of unplanned downtime
    • Increases availability for services and applications
  • Increases operational efficiency and lowers costs

    Networking

    • Manage private clouds more efficiently
    • Link private clouds with public cloud services
    • Connect users more easily to IT resources

    I think my demos are done.  The slides are nearly there.  Final polish and rehearsals tomorrow and this weekend.  This is a big brain dump that we’ll be dropping on people.  I’d certainly attend if I wanted to get my career ahead of the pack and be ready for the most important Server release since Windows 2000.

    2012
    05.17

… is the sheer amount of information that it provides.  I previously talked about the monitoring.  That’s great for the reactive side of things.  When I managed infrastructures, I liked to take some time to see how things were trending so I could plan.  That’s where reports come in handy, and there’s no shortage of those in this management pack:

[screenshot: the report library in the management pack]

On my client’s site, we had an alert about latency on a HBA in one of the hosts.  I wanted to give the client some useful information to plan VM placement using affinity rules, to avoid this from happening again.  One of the cool reports allows you to create a top-to-bottom chart of VMs based on a specific performance metric.  The below report was created with the VMGUEST IOPS metric and shows the top 25 VMs by disk activity.

[screenshot: top 25 VMs by IOPS report]

As usual with OpsMgr, the report could be scheduled for a time period, and/or saved as a web archive, PDF, Word file, etc.  I like this management pack.  Sure, it is pricey (I was told over EUR400 per monitored host socket), but it’s good.  BTW, Veeam did release a 10 socket (enough for 5 hosts with 2 CPUs each) management pack for free, which is available to you under two conditions:

  1. Be a new customer to Veeam AND
  2. Be a SCOM 2012 customer (not SCOM 2007)

    2012
    05.16

If you’re considering installing Windows Server 2012 Hyper-V, or if you’re considering moving from vSphere to Windows Server 2012 Hyper-V, then I have one very important question to ask you:

    Do you want the project to succeed?

    If the answer is yes, then go get your hands on the free Microsoft Assessment and Planning Toolkit (MAP) 7.0, which just went into beta and will probably RTM when Windows Server 2012 does.

I’ve come to the conclusion that there is a direct correlation between the success of a virtualisation project and a pre-design assessment.  Why?  Because every time I’m asked in, and this only happens when things go bad, I ask for the assessment reports and I’m told that there are no reports.  I dig a little further and I find that there were mistakes in the design that some due process might have eliminated.

    Key features and benefits of MAP 7.0 Beta help you:

    • Determine your readiness for Windows Server 2012 Beta and Windows 8
    • Virtualize your Linux servers on Hyper-V
    • Migrate your VMware-based virtual machines to Hyper-V
    • Size your server environment for desktop virtualization
    • Simplify migration to SQL Server 2012
    • Evaluate your licensing needs for Lync 2010
    • Determine active users and devices

    It’s free folks, so cop on!  Spend half a day installing it, doing the discovery, and starting the measurement, and 1 week later come back and run some sizings against different infrastructure specs.  Run some reports and you have a scientifically sized infrastructure.  Surely that’s better than the guesswork that you would have done instead?  Oh you must be the exception because you know your customer’s requirements.  If I had a Euro for every time I’ve heard that one …

    If you can’t guess, this stuff makes me angry.  But never mind me; you probably know better than me, Microsoft, real VMware experts, etc.  If I had another Euro for every time I’ve heard that one …
