2011
07.29

Virtual Machine Manager (VMM/SCVMM) 2012 adds something that was lacking in VMM 2007/2008/2008 R2: clustered VMM servers.  VMM 2012 is the gateway to the private cloud and you want that gateway to be fault tolerant at the hardware, OS, and service level.  If you want to have a clustered VMM server then you will need to get to grips with some new concepts.

The VMM database contains a lot of information.  Some of that information can be sensitive, such as product keys or administrator passwords.  You don’t want just anyone getting a copy of that database (from offsite stored backup tapes, for example [which should be encrypted anyway]) and figuring out a way to gain administrative rights to your network.  For this reason, VMM uses encryption to protect the contents of this database.

By default the decryption keys for accessing the encrypted data are stored on the VMM server.  Now imagine you have set up a clustered VMM server and those keys are stored locally, as seen below.

image

The first node, with the local keys, would encrypt the SQL data and access it with no issue at all.  But what happens after a failover of the VMM service from Node 1 to Node 2?  The decryption keys are unavailable (they are still on Node 1), and Node 2 has no way to read the encrypted data in clear text.  There goes the uptime of your cloud!

image

That’s why we have a new concept called Distributed Key Management (DKM) in VMM 2012.  Instead of storing the decryption keys on the server, they’re stored in a specially created container in Active Directory.  This means that the decryption keys can be accessed by both of the VMM cluster nodes, and either node can read the encrypted data in clear text.

You can configure the option to enable DKM when you install the first member of the VMM cluster.  You can optionally do this even if you’re setting up a non-clustered VMM server.  It’ll mean the keys are safe in AD, and it gives you the flexibility to easily set up a cluster without too much mucking around.

When you enable the option to use DKM, you have two choices:

  • Installing as a Domain Administrator: You can enter the LDAP path (e.g. CN=VMMDKM,CN=System,DC=demo,DC=local) and the installer will use your rights to create the VMM container inside the default System container.
  • Not Installing as a Domain Administrator: You can get a domain admin to create the container for you, ensuring that your new user account has Read, Write, and Create All Child Objects permissions on it.  You then enter the LDAP path (as above) that is provided by the domain administrator.  A rough sketch of scripting that pre-creation is below.
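
If the domain admin wants to script the pre-creation, something along these lines should do it.  This is my own rough sketch rather than official guidance; the container path and the demo\VMMSvc account are just the example names from above, and the dsacls rights string is an assumption on my part, so verify it with dsacls /? before running anything:

  # Run as a domain admin on a machine with the AD PowerShell module (W2008 R2 DC or RSAT)
  Import-Module ActiveDirectory

  # Create the DKM container in the default System container
  New-ADObject -Name "VMMDKM" -Type container -Path "CN=System,DC=demo,DC=local"

  # Delegate Read Property, Write Property, and Create Child rights to the VMM install account
  # (RPWPCC is my assumption for those rights – check dsacls /? on your system)
  dsacls.exe "CN=VMMDKM,CN=System,DC=demo,DC=local" /G "demo\VMMSvc:RPWPCC"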

I like System\VMMDKM for two reasons:

  1. ConfigMgr uses System\Systems Management for its advanced client objects
  2. VMMDKM is quite descriptive. 

Now Node 1 of the VMM server cluster will use the DKM/AD-stored decryption keys to access the secured data in the SQL Server database, instead of storing the keys locally.

image

After a failover, Node 2 can also read those DKM/AD-stored decryption keys to access the encrypted data successfully:

image

Decryption keys; I bet your security officer is concerned about that!  I haven’t mentioned the protection of these keys yet.  Note how we didn’t do anything to lock down that container?  Normally, Authenticated Users will have read permissions.  We sure don’t want them reading those decryption keys!  Don’t worry; the VMM product group has you covered.

In the new container, you will find an object called DC Manager <unique GUID>.  This is a container that DKM has created and contains the protected keys for the VMM server/cluster you just set up.

clip_image002

It is protected using traditional AD permissions.  VMM is granted rights based on what account is running VMM.  I prefer to install VMM using a domain user account, e.g. demo\VMMSvc.  That account was granted full control over the container object and all descendent (contained) objects:

clip_image001

Note that Authenticated Users is not present.  In fact, what you will find is the following (you can dump the ACL yourself with dsacls, as shown after the list):

  • Self: Inherited with apparently no rights
  • System: Full Control on the container object only
  • Enterprise Domain Controllers: Read tokenGroups (Descendent User Objects), Read tokenGroups (Descendent Group Objects), Read tokenGroups (Descendent Computer Objects)
  • Enterprise Admins: Full Control on this and descendent objects
  • Domain Admins: Full Control on this and descendent objects
  • Administrators: The list is long, but it is essentially everything short of Full Control, with no delete rights, on this and descendent objects
  • Administrator: Full Control on this and descendent objects
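
If you want to compare your own container against that list, dsacls (part of the AD DS tools) will dump the ACL for you.  The DN below is just the example path from earlier:

  dsacls "CN=VMMDKM,CN=System,DC=demo,DC=local"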

In other words, VMM 2012 DKM is a pretty sure way to:

  • Enable a SQL database to securely store sensitive data for a highly available VMM cluster running across multiple servers
  • Allow those nodes of a highly available VMM cluster to share a single set of decryption keys to access the encrypted data in the SQL database

Now you have some very special data in your AD – like you didn’t already!  But if you’re “just” a virtualisation administrator/engineer or a consultant, you better make sure that someone is backing up AD.  Lose your AD (and those DKM keys), and you lose access to that sensitive data in the SQL database.  While you’re verifying the existence of a working AD backup (a System State backup of a few DCs, maybe), make sure that the backup is secure in terms of access rights and encryption.  You’ve got sensitive encryption keys in there, after all.

2011
07.29

The official TechNet content is a bit scattered about, so I thought I’d reorganise and consolidate it to make stuff easier to find.  The software requirements of Virtual Machine Manager (VMM/SCVMM) 2012 are easy:

  • Windows Server 2008 R2 Standard, Enterprise or Datacenter with SP1
  • Windows Remote Management (WinRM) 2.0 – a part of W2008 R2
  • .NET 3.5 with SP1 (a feature in W2008 R2)
  • Windows Automated Installation Kit (WAIK) for Windows 7

There’s a significant change for the database.  SQL Express is no longer supported.  You will need to migrate the VMM database to one of the supported versions/editions:

  • SQL Server 2008 R2 Enterprise/Standard x86/x64 (no news of support for the recent SP1 yet)
  • SQL Server 2008 Enterprise/Standard x86/x64 with Service Pack 2

Here are the system requirements for VMM 2012:

Manage Up To 150 Hosts

Let’s be honest; how many of us really have anything close to 150 hosts to manage with VMM?  Hell; how many of us have 15 hosts to manage?  Anyway, here are the system requirements and basic architecture for this scale of deployment.

image

You can run all of the VMM roles on a single server with the following hardware configuration:

Component | Minimum | Recommended
CPU | Pentium 4, 2 GHz (x64) | Dual-Processor, Dual-Core, 2.8 GHz (x64) or greater
Memory | 2 GB | 4 GB
Disk space (no local DB) | 2 GB | 40 GB
Disk space (local DB) | 80 GB | 150 GB

Although you can run all the components on a single server, you may want to split them out onto different servers if you need VMM role fault tolerance.  You’re looking at something like this if that’s what you want to do:

image

A dedicated SQL server will require:

Component | Minimum | Recommended
CPU | Pentium 4, 2.8 GHz (x64) | Dual-Processor, Dual-Core, 2 GHz (x64) or greater
Memory | 2 GB | 4 GB
Disk space | 80 GB | 150 GB

A dedicated library server will require:

Component | Minimum | Recommended
CPU | Pentium 4, 2.8 GHz (x64) | Dual-Processor, Dual-Core, 3.2 GHz (x64) or greater
Memory | 2 GB | 2 GB
Disk space | Depends on what you store in it | Depends on what you store in it

A dedicated Self-Service Portal server will require:

Component | Minimum | Recommended
CPU | Pentium 4, 2.8 GHz (x64) | Dual-Processor, Dual-Core, 2.8 GHz (x64) or greater
Memory | 2 GB | 2 GB
Disk space | 512 MB | 20 GB

If all you want is hardware fault tolerance for VMM, then the simple solution is to run VMM in a highly available virtual machine.  I don’t like System Center being part of a general production Hyper-V cluster, because you create a chicken/egg situation with fault monitoring/responding.  If you want to virtualise System Center then consider setting up a dedicated host or cluster for the VMM, OpsMgr, and ConfigMgr VMs.  DPM is realistically going to remain physical because of its disk requirements.

Manage More Than 150 Hosts

It is recommended that you:

  • Do not use the VMM server to host your library.  Set the library up on a dedicated server/cluster.
  • Install SQL Server on a dedicated server/cluster.

The VMM server requirements are:

Component | Minimum | Recommended
CPU | Pentium 4, 2.8 GHz (x64) | Dual-Processor, Dual-Core, 3.6 GHz (x64) or greater
Memory | 4 GB | 8 GB
Disk space (no local DB) | 10 GB | 50 GB

The database server requirements are:

Component | Minimum | Recommended
CPU | Pentium 4, 2 GHz (x64) | Dual-Processor, Dual-Core, 2.8 GHz (x64) or greater
Memory | 4 GB | 8 GB
Disk space | 150 GB | 200 GB

A dedicated library server will require:

Component | Minimum | Recommended
CPU | Pentium 4, 2.8 GHz (x64) | Dual-Processor, Dual-Core, 3.2 GHz (x64) or greater
Memory | 2 GB | 2 GB
Disk space | Depends on what you store in it | Depends on what you store in it

A dedicated Self-Service Portal server will require:

Component | Minimum | Recommended
CPU | Pentium 4, 2.8 GHz (x64) | Dual-Processor, Dual-Core, 3.2 GHz (x64) or greater
Memory | 2 GB | 8 GB
Disk space | 10 GB | 40 GB

VMM Console

The software requirements are:

  • Either Windows 7 with SP1 or Windows Server 2008 R2 with SP1
  • PowerShell 2.0 (included in the OS)
  • .NET 3.5 SP1 (installed by default in Windows 7 and a feature in W2008 R2 – VMM setup will enable it for you)

Managing up to 150 hosts will require:

Component | Minimum | Recommended
CPU | Pentium 4, 550 MHz | Pentium 4, 1 GHz or more
Memory | 512 MB | 1 GB
Disk space | 512 MB | 2 GB

Managing over 150 hosts will require:

Component | Minimum | Recommended
CPU | Pentium 4, 1 GHz | Pentium 4, 2 GHz or more
Memory | 1 GB | 2 GB
Disk space | 512 MB | 4 GB

Managed Hosts

Supported Hyper-V hosts are listed below.

Parent OS | Edition | Service Pack
Windows Server 2008 R2 (Full or Server Core) | Enterprise or Datacenter | Service Pack 1 or earlier
Hyper-V Server 2008 R2 | – | –
Windows Server 2008 (Full or Server Core) | Enterprise or Datacenter | Service Pack 1 or earlier

Please note that the following are not listed as supported:

  • Hyper-V Server 2008
  • Windows Server 2008 R2 Standard edition
  • Windows Server 2008 Standard edition

In the beta, Windows Server 2008 is not supported.

Supported VMware hosts are listed below.  They must be managed by vCenter Server 4.1.

  • ESXi 4.1
  • ESX 4.1
  • ESXi 3.5
  • ESX 3.5

There is no mention of vSphere/ESXi 5 at the moment.  That’s understandable – both VMM and the VMware v5 product set were being developed at the same time.  Maybe support for v5 will appear later.

Citrix XenServer 5.6 FP1 can also be managed as standalone hosts or as Resource Pools if you deploy the Microsoft SCVMM XenServer Integration Suite to your hosts.

Bare Metal Host Deployment

The requirements for being able to use VMM 2012 to deploy Hyper-V hosts to bare metal machines are:

Item | Notes
Windows Server 2008 R2 Windows Deployment Services (WDS) | A PXE server to boot the bare metal machine up on the network.  No other PXE service is supported.
Baseboard Management Controller (BMC) | A server management card supporting one of: Intelligent Platform Management Interface (IPMI) version 1.5 or 2.0, Data Center Management Interface (DCMI) version 1.0, Hewlett-Packard Integrated Lights-Out (iLO) 2, or System Management Architecture for Server Hardware (SMASH) version 1.0 over WS-Management (WS-Man).
VHD image | A Windows Server 2008 R2 host OS captured as a generalized VHD image.  Have a look into WIM2VHD or maybe use a VM to create this.
Host hardware drivers | NIC, storage, etc.

Update Management

You need a dedicated WSUS root server running WSUS 3.0 SP2.  It cannot be a downstream server because that is not supported.  There will be a lot of updates to process, so this may require a dedicated server (possibly a VM).  If your VMM server is a cluster, then you must install the WSUS Administrator Console on each node in that cluster.

2011
07.29

Today is SysAdminDay!

Here’s hoping that this afternoon does not feature a sev 1 call at 16:30 or a public cloud going *bang* somewhere, that the awful user who complains the most gets fired, and that you’re not called back into the office over the weekend.

2011
07.28

See my more recent post which talks in great detail about how Hyper-V Replica works and how to use it.

At WPC11, Microsoft introduced (at a very high level) a new feature of Windows 8 (2012?) Server called Hyper-V Replica.  This came up in conversation in meetings yesterday and I immediately thought that customers in the SMB space, and even those in the corporate branch/regional office would want to jump all over this – and need the upgrade rights.

Let’s look at the DR options that you can use right now.

Backup Replication

One of the cheapest options around, and great for the SMB, is replication by System Center Data Protection Manager 2010.  With this solution you are leveraging the disk-to-disk functionality of your backup solution.  The primary site DPM server backs up your virtual machines.  The DR site DPM server replicates the backed up data and its metadata to the DR site.  During the invocation of the DR plan, virtual machines can be restored to an alternative (and completely different) Hyper-V host or cluster.

image

Using DPM is cost effective and, thanks to throttling, it is light on bandwidth and has none of the latency (distance) concerns of higher-end replication solutions.  It is a bit more time consuming to invoke.

This is a nice economic way for an SMB or a branch/regional office to do DR.  It does require some work during invocation: that’s the price you pay for a budget friendly solution that kills two marketing people with one stone – Hey; I like birds but I don’t like marke …Moving on …

Third-Party Software Based Replication

The next solution up the ladder is a 3rd party software replication solution.  At a high level there are a few types:

  • Host based solution: 1 host replicates to another host.  These are often non-clustered hosts.  This works out being quite expensive.
  • Simulated cluster solution: This is where 1 host replicates to another.  It can integrate with Windows Failover Clustering, or it may use its own high availability solution.  Again, this can be expensive, and solutions that feature their own high availability mechanism can be flaky, maybe even subject to split-brain active-active failures when the WAN link fails.
  • Software based iSCSI storage: Some companies produce an iSCSI storage solution that you can install on a storage server.  This gives you a budget SAN for clustering.  Some of these solutions can include synchronous or asynchronous replication to a DR site.  This can be much cheaper than a (hardware) SAN with the same features.  Beware of using storage level backup with these … you need to know if VSS will create the volume snapshot within the volume that’s being replicated.  If it does, then you’ll have your WAN link flooded with unnecessary snapshot replication to the DR site every time you run that backup job.

image

This solution gives you live replication from the production site to the DR site.  In theory, all you need to do to recover from a site failure is power up the VMs in the DR site.  Some solutions may do this automatically (beware of split-brain active-active if the WAN link and heartbeat fail).  You only need to touch backup during this invocation if the disaster introduced some corruption.

Your WAN requirements can also be quite flexible with these solutions:

  • Bandwidth: You will need at least 1 Gbps for Live Migration between sites.  100 Mbps will suffice for Quick Migration (it still has a use!).  Beyond that, you need enough bandwidth to handle data throughput for replication and that depends on change to your VMs/replicated storage.  Your backup logs may help with that analysis.
  • Latency: Synchronous replication will require very low latency, e.g. <2 ms.  Check with the vendor.  Asynchronous replication is much better at handling long distance and high latency connections.  You may lose a few seconds of data during the disaster, but it’ll cost you a lot less to maintain.

I am not a fan of this type of solution.  I’ve been burned by this type of software with file/SQL server replication in the past.  I’ve also seen it used with Hyper-V where compromises on backup had to be made.

SAN Replication

This is the most expensive solution, and it is where the SAN does the replication at the physical storage layer.  It is probably the simplest to invoke in an emergency and, depending on the solution, it can allow you to create multi-site clusters, sometimes with CSVs that span the sites (and you need to plan very carefully if doing that).  For this type of solution you need:

  • Quite an expensive SAN.  That expense varies wildly.  Some SANs include replication, and some really high end SANs require an additional replication license(s) to be purchased.
  • Lots of high quality, and probably ultra low latency, WAN pipe.  Synchronous replication will need a lot of bandwidth and very low latency connections.  The benefit is (in theory) zero data loss during an invocation.  When a write happens in site A on the SAN, then it happens in site B.  Check with the manufacturer and/or an expert in this technology (not honest Bob, the PC salesman, or even honest Janet, the person you buy your servers from).

image

This is the Maybach of DR solutions for virtualisation, and is priced as such.  It is therefore well outside the reach of the SMB.  The latency limitations with some solutions can eliminate some of the benefits.  And it does require identical storage in both sites.  That can be an issue with branch/regional office to head office replication strategies, or using hosting company rental solutions.

Now let’s consider what 2012 may bring us, based purely on the couple of minutes presentation of Hyper-V replica that was at WPC11.

Hyper-V Replica Solution

I previously blogged about the little bit of technology that was on show at WPC 2011, with a couple of screenshots that revealed functionality.

Hyper-V Replica appears (in the demonstrated pre-beta build and things are subject to change) to offer:

  • Scheduled replication, which can be based on VSS to maintain application/database consistency (SQL, Exchange, etc).  You can schedule the replication for outside core hours, minimizing the impact on your Internet link on normal business operations.
  • Asynchronous replication.  This is perfect for the SMB or the distant/small regional/branch office because it allows the use of lower priced connections, and allows replication over longer distances, e.g. cross-continent.
  • You appear to be able to maintain several snapshots at the destination site.  This could possibly cover you in the corruption scenario.
  • The choice of authentication between replicating hosts appeared to allow Kerberos (in the same forest) and X.509 certificates.  Maybe this would allow replication to a different forest: in other words a service provider where equipment or space would be rented?

What Hyper-V Replica will give us is the ability to replicate VMs (and all their contents) from one site to another in a reliable and economic manner.  It is asynchronous and that won’t suit everyone … but those few who really need synchronous replication (NASDAQ and the like) don’t have an issue buying two or three Hitachi SANs, or similar, at a time.

image

I reckon DPM and DPM replication still have a role in the Hyper-V Replica (or any replication) scenario.  If we do have the ability to keep snapshots, we’ll only have a few of them.  What do you do if you invoke your DR after losing the primary site (flood, fire, etc) and someone needs to restore a production database, or a file with important decision/contract data?  Are you going to call in your tapes from last week?  Hah!  I bet that courier is getting themselves and their family to safety, stuck in traffic (see post-9/11 bridge closures or the state of the roads in the New Orleans floods), busy handling lots of similar requests, or worse (it was a disaster).  Replicating your backups to the secondary site will allow you to restore data (that is still on the disk store) where required, without relying on external services.

Some people actually send their tapes to be stored at their DR site as their offsite archival.  That would also help.  However, remember you are invoking a DR plan because of an unexpected emergency or disaster.  Things will not be going smoothly.  Expect it to be the worst day of your career.  I bet you’ve had a few bad ones where things don’t go well.  Are you going to rely entirely on tape during this time frame?  Your day will only get worse if you do: tapes are notoriously unreliable, especially when you need them most.  Tapes are slow, and you may find a director impatiently mouth-breathing behind you as the tape catalogues on the backup server.  And how often do you use that tape library in the DR site?

To me, it seems like the best backup solution, in addition to Hyper-V Replica (a normal feature of the new version of Hyper-V that I cannot wait to start selling), is to combine quick/reliable disk-disk-disk backup/replication for short term backup along with tape for archival.

That’s my thinking now, after seeing just a few minutes of a pre-beta demo on a webcast.  As I said, it’s subject to change.  We’ll learn more at/after Build in September and as we progress from beta to RC to RTM.  Until then, these are musings, and not something to start strategising on.

2011
07.28

Then make sure you read this:

“To take advantage of the Windows 8’s side-by-side window view, that requirement rises to 1366x768”

Windows 8 will run on Windows Vista/7 hardware but you need to be aware of that graphics requirement for side-by-side.  Check the screen resolution of any slate PC you were considering to ensure that you will be able to upgrade to Windows 8 next year.

For example, have a look at these current machines:

Machine | Maximum Screen Resolution | Windows 8 Side-by-Side?
Asus Eee Slate EP121 | 1280 x 800 | No
Toshiba WT310 | 1366 x 768 | Yes, but … *1
Gigabyte S1080 | 1024 x 600 | Not a chance!
HP Slate 500 | 1024 x 600 or 1024 x 768 | You must be kidding?
Acer Iconia Tab W500 | 1280 x 800 | Snowball’s chance in hell
Lenovo IdeaPad P1 | 1280 x 800 (not RTM yet) | The computer says “no”
Fujitsu STYLISTIC Q550 | 1920 x 1080 (480p, 720p and 1080i) | Impressive! *2

That Acer is nearly €500 from a UK online reseller, and is around $549 in the USA.  I’d sure want to know that I could install Windows 8 on it next year if I spent money on it now.  I’m actually quite stunned that of the big names I found with Windows 7 slates, only Toshiba and Fujitsu have one with the graphics requirement for Windows 8 side-by-side. 

*1 It’s been rumoured that, since the announcement of the device, Toshiba did a 180 and cancelled their slate PC plans.  Oh well!  Kinda makes sense, considering the hardware would have had limited sales with true Windows 8 tablets anticipated in 2012 and more suitable hardware with alternative OSs available right now.

My advice:

  1. If you can wait, then wait until Windows 8 is released and ARM (system on a chip) tablets are released.  You’ll get great battery life, a lighter machine, Windows 8 support, and your lap won’t be burned by a flat PC.
  2. If you want the true tablet experience now then compare the iPad 2 (I have the iPad 1 and love it for watching movies on the go or reading from Kindle) with one of the many Android devices such as the Motorola Xoom (my cousin bought one recently and he raves about it, including the ability to insert and use an SD card with media on it).  The expected Amazon Android device will also be rather interesting (October allegedly).
  3. If you really want to buy a Windows 7 slate PC (not a true tablet IMO) then check the screen resolution first and make sure it supports 1366 x 768.  That’s not looking so good.  If you’re using it for business and want to enable BitLocker then you’ll really want a TPM 1.2 (or later) chip.  But that’s a whole other conversation …

*2 By the way, Fujitsu’s machine also has a TPM chip option.  I guess that makes the Fujitsu STYLISTIC Q550 the best option as an enterprise slate PC if you really must buy one now.
2011
07.28

This was quick!

“On System Center Virtual Machine Manager 2008, R2, and R2 SP1 (SCVMM), the Virtual Machine Manager Service (vmmservice.exe) crashes unexpectedly and the VM Manager event log shows Event ID 19999 and 1”.

  • VM Manager 19999: Virtual Machine Manager (vmmservice:368) has encountered an error and needed to exit the process. Windows generated an error report with the following parameters.
  • VM Manager 1: System.ArgumentException: Version string portion was too short or too long.  at System.Version..ctor(String version).

Apparently this is because the kernel version string returned by Key Value Pair exchange (KVP – a new feature in the Linux ICs) is longer than expected.

The workaround at the moment is to run this command in the Linux guest OS as root:

/sbin/chkconfig --level 35 hv_kvp_daemon off

“This will prevent the KVP service from auto starting while retaining all other functionality of hv_utils. hv_utils provides integrated shutdown, key value pair data exchange, and heartbeat features”.

2011
07.28

If you’re a service provider (engineering/consulting/etc) and you’re involved in Hyper-V and/or System Center, or if you’re a customer that is currently buying those technologies, then pay attention.  2011/2012 is a time of interesting change and there are opportunities for customers and for service providers.

System Center

Everything in the System Center circle (a graphic used by MSFT in just about every System Center presentation) is going through an upgrade during the next 12 months:

If you’re doing a deployment of any of these products in the coming months then you really want to make sure that you or your customer will have the rights to do an upgrade next year.  Do you wait until the RTM of the new products?  Probably not; the reason you’re installing now is that there are technical or business issues that need solutions.  For the business’s sake, you solve that issue now, and upgrade later. 

Each of these new versions includes a bunch of new features or improves functionality.  That means there are more gains for the customer, and more service opportunities for the consultant.

Note that some licensing actually includes upgrade rights, e.g. OVS or OV, and System Center Management Suite.  Don’t forget that the SQL database on the back of these servers may also need an upgrade to 2008 R2 or even “Denali”, so protect them too.  And don’t forget the management licenses or CALs.

Hyper-V

If you are licensing Windows Server VMs on any virtualisation platform (Hyper-V, VMware, Xen) correctly, then you are licensing the hosts with Enterprise or Datacenter edition and availing of the free licensing benefits for VMs on those licensed hosts.  That alone can save you a fortune.  Bundle in Software Assurance for those host OSs and the savings grow even further.  Why would you want to do this?  Windows 8, of course!!!

Yes, you will find yourself needing/wanting to deploy Windows 8 Server virtual machines next year after the RTM.  You’ll need to license your hosts with Windows 8 to do that.  Software Assurance or upgrade rights (OVS) on your existing hosts will cover you for that.

We already know of two Hyper-V features that will bring technology and business benefits:

  • More vCPUs: We’re getting support for at least 16 virtual CPUs per virtual machine.  That’s 16 threads of execution, meaning more powerful scale-up VMs, meaning more of the server farm can be virtualised.
  • Hyper-V Replica: Small-Medium Businesses (SMBs) struggle with implementing disaster recovery (DR) or business continuity.  This is an example of where technology and budget have an impact on business.  Get it right, and the business gains – I’m told insurance costs can go down.  Get it wrong, and the business … well … you may need to update your CV/résumé.  For a service provider, there is likely going to be a fantastic service opportunity to implement scheduled, asynchronous DR for customers in the SMB space, with modest bandwidth, and without expensive third party software or crazy costing storage solutions.

We’ll probably learn much more about Windows 8 Server at/after the Build conference in September (13-16).

Don’t forget the CALs either!  Something like the Core CAL Suite under OVS covers the end machine/user for a lot of products with Software Assurance.

Recommendation

My recommendations are:

  • Don’t “wait for 8”
  • Look at what Windows 8 and System Center can bring to your business next year, and figure out if you want that to solve your or your customers’ technology or business issues.  If so, make sure your licensing sales/purchases include rights to upgrade.
2011
07.27

It’s clear from Hyper-V’s Linux support developments over the last year that Microsoft is serious about supporting and managing Linux.  The ICs were submitted to the Linux kernel, making Microsoft a top 5 contributor.  Then we had CentOS distro support – making a lot of people very happy.  And now we have a new 3.1 version of the ICs that adds newer OS version support and more Hyper-V features.

Over in OpsMgr world, guidance for installing Linux agents is placed right up there with guidance for installing Windows agents.  I’ve made it no secret that I actually like how the OpsMgr team did OpsMgr 2007 Linux agents (self-serviced cross certification) way more than how they did Windows workgroup agents (flaky MOMCERTIMPORT based on custom x.509 certificate templates).

Microsoft are really taking cross-platform or heterogeneous environments seriously.

Here’s hoping for a Microsoft-written DPM agent for LAMP, and maybe a Microsoft-written ConfigMgr client/agents for Linux too!  That would complete the stack and probably help System Center Management Suite sales in those beloved Fortune 1000’s.

2011
07.27

I had a mad busy day with meetings at customer sites today, and that’s when this great news broke.  Microsoft has released version 3.1 of the Linux Integration Components (or Services) for Hyper-V.

The supported operating systems for 3.1 are:

  • “Red Hat Enterprise Linux (RHEL) 6.0 and 6.1 x86 and x64 (Up to 4 vCPU)
  • CentOS 6.0 x86 and x64 (Up to 4 vCPU)”

SLES 10 SP3 and 11, and RHEL 5.2/5.3/5.4/5.5, are still supported using version 2.1 of the Integration Services for Hyper-V.

Supported host OSs include:

  • “Windows Server 2008 Standard, Windows Server 2008 Enterprise, and Windows Server 2008 Datacenter (64-bit versions only)
  • Microsoft Hyper-V Server 2008
  • Windows Server 2008 R2 Standard, Windows Server 2008 R2 Enterprise, and Windows Server 2008 R2 Datacenter
  • Microsoft Hyper-V Server 2008 R2”

Service Packs 1 or 2 of those host OSs are supported too.

The features of V3.1 of the Linux Integration Services are:

  • “Driver support: Linux Integration Services supports the network controller and the IDE and
    SCSI storage controllers that were developed specifically for Hyper-V.
  • Fastpath Boot Support for Hyper-V: Boot devices now take advantage of the block
    Virtualization Service Client (VSC) to provide enhanced performance.
  • Timesync: The clock inside the virtual machine will remain synchronized with the clock on
    the virtualization server with the help of the pluggable time source device.
  • Integrated Shutdown: Virtual machines running Linux can be shut down from either Hyper-V
    Manager or System Center Virtual Machine Manager by using the “Shut Down” command.
  • Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VP) per virtual machine.  SMP support is not available for 32-bit Linux guest operating systems running on Windows Server 2008 Hyper-V or Microsoft Hyper-V Server 2008.
  • Heartbeat: Allows the virtualization server to detect whether the virtual machine is running
    and responsive.
  • KVP (Key Value Pair) Exchange: Information about the running Linux virtual machine can
    be obtained by using the Key Value Pair exchange functionality on the Windows Server 2008
    virtualization server”.

The really big news about the new Integration Components is that they now install using rpm, making the installation much easier (Windows admins thank you!). 

You should really take a look at the KVP feature in the PDF (on the download page).  There’s some interesting information and links on how to use it to get information, such as the Linux IC version from the VMs on your hosts using PowerShell.
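
Just to illustrate what’s in there, here’s a rough sketch (my own, not the script from that PDF) of reading a VM’s KVP data from the host with PowerShell.  It assumes the W2008 R2 Hyper-V WMI provider (the root\virtualization namespace) and a VM called MyLinuxVM, which is just an example name:

  # Find the VM and its KVP exchange component on the local host
  $vm  = Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem -Filter "ElementName='MyLinuxVM'"
  $kvp = $vm.GetRelated("Msvm_KvpExchangeComponent") | Select-Object -First 1

  # Each guest item is a small XML fragment with Name/Data properties (e.g. OSName, OSVersion)
  foreach ($item in $kvp.GuestIntrinsicExchangeItems) {
      $xml  = [xml]$item
      $name = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq "Name" }).VALUE
      $data = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq "Data" }).VALUE
      "{0} = {1}" -f $name, $data
  }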

2011
07.26

In the PowerPoint that I posted yesterday, I mentioned that you should not go overboard with creating CSVs (Cluster Shared Volumes).  In the last two weeks, I’ve heard of several people who have.  I’m not going to play blame game.  Let’s dig into the technical side of things and figure out what should be done.

In Windows Server 2008 Hyper-V clustering, we did not have a shared disk mechanism like CSV.  Every disk in the cluster was single owner/operator.  Realistically (and required by VMM 2008) we had to have 1 LUN/cluster disk for each VM.

That went away with CSV in Windows Server 2008 R2.  We can size our storage (IOPS from MAP) and plan our storage (DR replication, backup policy, fault tolerance) accordingly.  The result is you can have lots of VMs and virtual hard disks (VHDs) on a single LUN.  But for some reason, some people are still putting 1 VM, and even 1 VHD, on a CSV.

An example: someone is worried about disk performance, so they spread the VHDs of a single VM across 3 CSVs on the SAN.  What does that gain them?  In reality: nothing.  It is actually a negative.  Let’s look at the first issue:

SAN Disk Grouping is not like Your Daddy’s Server Storage

If you read some of the product guidance on the big software publishers’ support sites, you can tell that there is still some confusion out there.  I’m going to use HP EVA lingo because it’s what I know.

If I had a server with internal disks, and wanted to create three RAID 10 LUNs, then I would need 6 disks.

image

The first pair would be grouped together to make LUN1 at a desired RAID level.  The second pair would be grouped together to make the second LUN, and so on.  This means that LUN1 is on a completely separate set of spindles to LUN2 and LUN3.  They may or may not share a storage controller.

A lot of software documentation assumes that this is the sort of storage that you’ll be using.  But that’s not the case with a cluster on a hardware SAN.  You need to use the storage it provides, and it’s usually nothing like the storage in a server.

By the way, I’m really happy that Hans Vredevoort is away on vacation and will probably miss this post.  He’d pick it to shreds :)

Things are kind of reversed.  You start off by creating a disk group (HP lingo!).  This is a set of disks that will work as a team, and there is often a minimum number required.

image

From there you will create a virtual disk (not a VHD – it’s HP lingo for a LUN in this type of environment).  This is the LUN that you will create your CSV volume on.  The interesting thing is that each virtual disk in the disk group spans every disk in the disk group.  How that spanning is done depends on the desired RAID level.  RAID 10 will stripe using pairs of disks, and RAID 5 will stripe using all of the disks.  That gives you the usual expected performance hits/benefits of those RAID levels and the expected amount of usable space.

In the image below, you can see that two virtual disks (LUNs) have been created in the disk group.  The benefit of this approach is that the virtual disks can benefit from having many more spindles to use.  The sales pitch is that you are getting much better performance than with the alternative of internal server storage.  Compare LUN1 from above (2 spindles) with vDisk1 below (6 spindles).  More spindles = more speed.

I did say it was a sales pitch.  You’ve got other factors like SAN latency, controller cache/latency, vDisks competing for disk I/O, etc.  But most often, the sales pitch holds fairly true.

image

If you think about it, a CSV spread across a lot of disk spindles will have a lot of horsepower.  It should provide excellent storage performance for a VM with multiple VHDs.

A MAP (Microsoft Assessment and Planning Toolkit) assessment is critical.  I’ve also pointed out in that PowerPoint that customers/implementers are not doing this.  It is the only true way to plan storage and decide between VHD and passthrough disk.  Gut feeling, “experience”, and “knowledge of your network” are a bunch of BS.  If I hear someone saying “I just know I need multiple physical disks or passthrough disks” then my BS-ometer starts sending alerts to OpsMgr – can anyone write that management pack for me?

Long story short: a CSV on a SAN with this type of storage offers a lot of I/O horsepower.  Don’t think old school because that’s how you’ve always thought.  Run a MAP assessment to figure out what you really need.

Persistent Reservations

Windows Server 2008 and 2008 R2 Failover Clustering use SCSI-3 persistent reservations (PRs) to access storage.  Each SAN solution has a limit on how many PRs it can support.  You can roughly calculate what you need using:

PRs = Number of Hosts * Number of Storage Channels per Host * Number of CSVs

Let’s do an example.  We have 2 hosts, with 2 iSCSI connections each, with 4 CSVs.  That works out as:

2 [hosts] * 2 [channels] * 4 [CSVs] = 16 PRs

OK; Things get more complicated with some storage solutions, especially modular ones.  Here you really need to consult an expert (and I don’t mean Honest Bob who once sold you a couple of PCs at a nice price).  The key piece may end up being the number of storage channels.  For example, each host may have 2 iSCSI channels, but it maintains connections to each module in the SAN.

Here’s another example.  There is an iSCSI SAN with 2 storage modules.  Once again, we have 2 hosts, with 2 iSCSI connections each, with 4 CSVs.  This now works out as:

2 [hosts] * 4 [channels –> 2 modules * 2 iSCSI connections] * 4 [CSVs] = 32 PRs

Add 2 more storage modules and double the number of CSVs to 8 and suddenly:

2 [hosts] * 8 [channels –> 4 modules * 2 iSCSI connections] * 8 [CSVs] = 128 PRs
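
If you want to play with those numbers, a trivial PowerShell helper does the job.  This is just the rough formula from above wrapped in a function; remember that your storage vendor may use a more demanding calculation:

  function Get-EstimatedPRs {
      param (
          [int]$Hosts,
          [int]$ChannelsPerHost,   # storage modules x iSCSI/FC connections per host
          [int]$CSVs
      )
      # Rough estimate only – check your SAN vendor's own formula and PR limit
      $Hosts * $ChannelsPerHost * $CSVs
  }

  Get-EstimatedPRs -Hosts 2 -ChannelsPerHost 8 -CSVs 8    # returns 128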

Your storage solution may actually calculate PRs using a formula with higher demands.  But the question is: how many PRs can your storage solution handle?  Deploy too many CSVs and/or storage modules and you may find that you have disks disappearing from your cluster.  And that leads to very bad circumstances.

You may find that a storage firmware update increases the number of required PRs.  But eventually you reach a limit that is set by the storage manufacturer.  They obviously cripple the firmware to create a reason to buy the next higher up model.  But that’s not something you want to hear after spending €50K or €100K on a new SAN.

The way to limit your PR requirement is to deploy only the CSVs you need.

Undoing The Damage

If you find yourself in the situation with way too many CSVs then you can use SCVMM Quick Storage Migration to move VMs onto fewer, larger CSVs, and then remove the empty CSVs.

Recommendations

Slow down to hurry up.  You MUST run an assessment of your pre-virtualisation environment to understand what storage to buy.  You should also use this data as a factor when planning CSV design and virtual machine/VHD placement.  Like my old woodwork teacher used to say: “measure twice and cut once”.

Take that performance requirement information and combine it with backup policy (1 CSV backup policy = 1 or more CSVs, 2 CSV backup policies = 2 or more CSVs, etc), fault tolerance (place clustered or load balanced VMs on different CSVs), and DR policy (different storage level VM replication policies requires different CSVs).

2011
07.26

With this post, I’m going to try to explain why I recommend against using Dynamic VHD in production.

What is Dynamic VHD?

There are two types of VHD you may use in production:

  • Fixed: This is where all of the allocated storage is consumed at once.  For example, if you want 60 GB of virtual machine storage, a VHD file of around 60 GB is created, consuming all of that storage immediately.
  • Dynamic: This is where the VHD will only consume as much as is required, plus a little buffer space.  If you allocate 60 GB of storage, a tiny VHD is created.  It will grow in small chunks to accommodate new data, always leaving a small amount of free space.  It kind of works like a SQL Server database/log file.  Eventually the VHD will reach 60 GB and you’ll run out of space in the virtual disk.  (There’s a quick sketch of creating each type after this list.)
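
For the curious, here’s a minimal sketch of creating one of each on a W2008 R2 host using the Hyper-V WMI provider (there is no built-in New-VHD cmdlet in that release).  The paths and the 60 GB size are examples only, and each call returns a job that you would normally wait on:

  # Image management service in the root\virtualization namespace (W2008 R2 Hyper-V)
  $ims  = Get-WmiObject -Namespace root\virtualization -Class Msvm_ImageManagementService
  $size = 60GB

  $ims.CreateFixedVirtualHardDisk("D:\VHDs\Fixed.vhd", $size)      # allocates all 60 GB up front
  $ims.CreateDynamicVirtualHardDisk("D:\VHDs\Dynamic.vhd", $size)  # starts tiny and grows on demand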

With Windows Server 2008 we knew that Dynamic VHD was just too slow for production.  The VHD would grow in very small amounts, and often lots of growth was required at once, creating storage write latency.

Windows Server 2008 R2

We were told that was all fixed when Windows Server 2008 R2 was announced.  Trustworthy names stood in front of large crowds and told us how Dynamic VHD would nearly match Fixed VHD in performance.  The solution was to increase the size of the chunks that were added to the Dynamic VHD.  After RTM there were performance reports that showed us how good Dynamic VHD was.  And sure enough, this was all true … in the perfect, clean, short-lived, lab.

For now, let’s assume that the W2008 R2 Dynamic VHD can grow fast enough to meet write activity demand, and focus on the other performance negatives.

Fragmentation

Let’s imagine a CSV with 2 Dynamic VHDs on it.  Both start out as small files:

image

Over time, both VHDs will grow.  Notice that the growth is fragmenting the VHDs.  That’s going to impact reads and overwrites.

image

And over the long term, it doesn’t get any better.

image

Now imagine that with dozens of VMs, all with one or more Dynamic VHDs, all getting fragmented.

The only thing you can do to combat this is to run a defrag operation on the CSV volume.  Realistically, you’d have to run the defrag at least once per day. Defrag is an example of an operation that’s going to kick in Redirected Mode (or Redirected Access).  And unlike backup, it cannot make use of a Hardware VSS Provider to limit the impact of that operation.  Big and busy CSVs will take quite a while to defrag, and you’re going to impact on the performance of production systems.  And you really need to be aware of what that impact would be on multi-site clusters, especially those that are active(site)-active(site).

Odds are you probably should be doing the occasional CSV defrag even if you use Fixed VHD.  Stuff gets messed up over time on any file system.
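
For what it’s worth, the defrag itself is nothing fancy.  A simple example, using the built-in defrag.exe from an elevated prompt on the node that currently owns (coordinates) the CSV, against the default mount point path (an example path; your volume will differ), and scheduled for a quiet period because the volume will be in Redirected Mode for the duration:

  defrag.exe C:\ClusterStorage\Volume1 /U /V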

Storage Controllers

I am not a storage expert.  But I talked with some Hyper-V engineers yesterday who are.  They told me that they’re seeing SAN storage controllers that really aren’t dealing well with Dynamic VHD, especially if LUN thin provisioning is enabled.  Storage operations are being queued up, leading to latency issues.  Sure, Dynamic VHD and thin provisioning may reduce the amount of disk you need, but at what cost to the performance/stability of your LOB applications, operations, and processes?

CSV and Dynamic VHD

I became aware of this one a while back thanks to my fellow Hyper-V MVPs.  It never occurred to me at all – but it does make sense.

In scenario 1 (below) the CSV1 coordinator role is on Host1.  A VM is running on Host1, and it has Dynamic VHDs on CSV1.  When that Dynamic VHD needs to expand, Host1 can take care of it without any fuss.

image

In scenario 2 (below) things are a little different.  The CSV1 coordinator role is still on Host1, but the VM is now on Host3.  Now when the Dynamic VHD needs to expand, we see something different happen.

image

Redirected Mode/Access kicks in so that the CSV coordinator for CSV1 (Host1) can expand the Dynamic VHD of the VM running on Host3.  That means all storage operations for that CSV on Hosts 2-3 must traverse the CSV network (maybe 1 Gbps) to Host1, and then go through its iSCSI or Fibre Channel link.  This may be a very brief operation, but it’s still something that has a cumulative effect on latency, with potential storage I/O bottlenecks in the CSV network, Host1, Host1’s HBA, or Host1’s SAN connection.

image

Now take a moment to think bigger:

  • Imagine lots of VMs, all with Dynamic VHDs, all growing at once.  Will the CSV ever not be in Redirected Mode? 
  • Now imagine there are lots of CSVs with lots of Dynamic VHDs on each.
  • When you’re done with that, now imagine that this is a multi-site cluster with a WAN connection adding bandwidth and latency limitations for Redirected Mode/Access storage I/O traffic from the cluster nodes to the CSV coordinator.
  • And then imagine that you’re using something like a HP P4000/LeftHand where each host must write to each node in the storage cluster, and that redirected storage traffic is going back across that WAN link!

Is your mind boggled yet?  OK, now add in the usual backup operations, and defrag operations (to handle Dynamic VHD fragmentation) into that thought!

You could try to keep the VMs on CSV1 running on Host1.  That’ll eliminate the need for Redirected Mode.  But things like PRO, and Dynamic Optimization of SCVMM 2012 will play havoc with that, moving VMs all over the place if they are enabled – and I’d argue that they should be enabled because they increase service uptime, reliability, and performance.

We need an alternative!

Sometimes Mentioned Solution

I’ve seen some people say that they use Fixed VHD for data drives, where there will be the most impact.  That’s a good start, but I’d argue that you need to think about those system VHDs (the ones with the OS) too.  Those VMs will get patched.  Odds are that will happen at the same time, and you could have a sustained level of Redirected Mode while Dynamic VHDs expand to handle the new files.  And think of the fragmentation!  Applications will be installed/upgraded, often during production hours.  And what about Dynamic Memory?  The VM’s paging file will increase, thus expanding the size of the VHD: more Redirected I/O and fragmentation.  Fixed VHD seems to be the way to go for me.

My Experience

Not long after the release of Windows Server 2008 R2, a friend of mine deployed a Hyper-V cluster for a business here in Ireland.  They had a LOB application based on SQL Server.  The performance of that application went through the floor.  After some analysis, it was found that the W2008 R2 Dynamic VHDs were to blame.  They were converted to Fixed VHD and the problem went away.

I also went through a similar thing in a hosting environment.  A customer complained about poor performance of a SQL VM.  This was for read activity – fragmentation would cause the disk heads to bounce and increase latency.  I converted the VHDs to fixed and the run time for reports was immediately improved by 25%.

SCVMM Doesn’t Help

I love the role of the library in SCVMM. It makes life so much easier when it comes to deploying VMs, and SCVMM 2012 expands that exponentially with the deployment of a service.

If you are running a larger environment, or a public/private cloud, with SCVMM then you will need to maintain a large number of VM templates (VHDs in MSFT lingo but the rest of the world has been calling them templates for quite a long time). You may have Windows Server 2008 R2 with SP1 Datacenter, Enterprise, and Standard. You may have Windows Server 2008 R2 Datacenter, Enterprise, and Standard. You may have W2008 with SP1 x64 Datacenter, Enterprise, and Standard. You may have W2008 with SP1 x86 Datacenter, Enterprise, and Standard. You get the idea. Lots of VHDs.

Now you get that I prefer Fixed VHDs.  If I build a VM with Fixed VHDs and then create a template from it, then I’m going to eat up disk space in the library.  Now, it appears that some believe that disk is cheap.  Yes, I can get 1 TB of disk for €80.  But that’s a dumb, slow, USB 2.0 drive.  That’s not exactly the sort of thing I’d use for my SCVMM library, let alone put in a server or a datacenter.  Server/SAN storage is expensive, and it’s hard to justify 40 GB or more for each template that I’ll store in the library.

The alternative is to store Dynamic VHDs in the library.  But SCVMM does not convert them to Fixed VHD on deployment.  That’s a manual process – and that’s one that is not suitable for the self-service nature of a cloud.  The same applies to storing a VM in the library; it seems pointless to store Fixed VHDs for an offline VM, but there’s a manual conversion process to convert the stored VMs to Dynamic VHD.

It seems to me that:

  • If you’re running a cloud, then you realistically have to use Fixed VHDs for your library templates (library VHDs in Microsoft lingo).
  • If you’re a traditional IT-centric deploy/manage environment, then store Dynamic VHD templates, deploy the VM, and then convert from Dynamic VHD to Fixed VHD before you power up the VM.  A rough sketch of that conversion follows this list.
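
As a very hedged sketch of that second option, something like this should work with the VMM 2008 R2 PowerShell snap-in; the server and VM names are made up, the VM must still be powered off, and you should check Get-Help Convert-VirtualDiskDrive in your own environment because the exact parameters here are an assumption on my part:

  Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
  Get-VMMServer -ComputerName "vmm01.demo.local" | Out-Null

  # Convert every dynamic VHD attached to the freshly deployed (and stopped) VM to fixed
  $vm = Get-VM -Name "NewVM01"
  Get-VirtualDiskDrive -VM $vm | ForEach-Object {
      Convert-VirtualDiskDrive -VirtualDiskDrive $_ -Fixed
  }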

What Do The Microsoft Product Groups Say?

Exchange: “Virtual disks that dynamically expand are not supported by Exchange”.

Dynamics CRM: “Create separate fixed-size virtual disks for Microsoft Dynamics CRM databases and log files”.

SQL Server: "Dynamic VHDs are not recommended for performance reasons”.

That seems to cover most of the foundations for LOB applications in a MSFT centric network.

Recommendation

Don’t use Dynamic VHD in production environments.  Use Fixed VHD instead (and passthrough in those rare occasions where required).  Yes, you will use more disk for Fixed VHD for all that white space, but you’ll get the best possible performance while using flexible and more manageable virtual disks. 

If you have implemented Dynamic VHD:

  • Convert to Fixed VHD (requires a VM shut down) if you can.  Defrag the CSV, and set up a less frequent defrag job.
  • If you cannot convert, then figure out when you can run frequent defrag jobs.  Try to control VM placement relative to the CSV coordinator roles to minimize the impact; the script will need to figure out the coordinator node for the relevant CSV (because it can fail over), and Live Migrate the VMs on that CSV to the CSV coordinator, assuming that there is sufficient resource and performance capacity on that host.  There’s a rough sketch of that idea below.  Yes, the Fixed VHD option looks much more attractive!
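
As a starting point for that script, here’s a rough sketch using the W2008 R2 FailoverClusters module.  The CSV and VM group names are just examples, and working out which VMs actually have VHDs on that CSV (and whether the coordinator has capacity for them) is left as an exercise:

  Import-Module FailoverClusters

  # Find the node that currently owns (coordinates) the CSV
  $csv         = Get-ClusterSharedVolume -Name "Cluster Disk 1"
  $coordinator = $csv.OwnerNode.Name

  # Live migrate a VM's cluster group (example name) over to the coordinator node
  Move-ClusterVirtualMachineRole -Name "VM01" -Node $coordinator
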
2011
07.26

You can find a list of updates that are recommended for System Center Virtual Machine Manager (SCVMM/VMM) 2008 R2 SP1 and any hosts it manages on the Microsoft support site.

2011
07.25

Waiver: What you do after reading this post is up to you.

After my earlier post on “Top Hyper-V Implementation Issues” I had some feedback on my preference to keep antivirus (AV) off of the Hyper-V hosts.

The configuration that you should have is in KB961804.  That article also says what can happen if you do install AV on your hosts, don’t follow that guidance, and scan everything.  One day you’ll end up with nasty errors such as 0x800704C8, 0x80070037 or 0x800703E3 and find that lots of VMs (with their business apps and data) have:

  • Disappeared from your Hyper-V console
  • Disappeared from your VMM console
  • Stopped running

The files are still there but, damn, the VMs will not start up or appear in a management tool.  That’s because AV has gotten in the way and screwed things up.  I blogged about this back during the W2008 Hyper-V beta (can’t find the post now) in early 2008.  It happened to me.  I was unlucky; I set the required exclusions and restarted the host in question (a lab machine), but my VM configuration files were corrupted.  The solution was to recreate the VMs and point them at the existing VHDs containing the safe OS, programs, and data.  Time consuming – and how many people document/remember their VM configurations?  And come to think of it, how many businesses would be OK with their LOB applications being offline for half a day or more while admins do this?

I learned something in 2004.  There is a balancing act between security and business.  Sometimes it has to swing one way, sometimes another.  This is one of those cases.

I do not trust any antivirus product completely.  They are stupid assassins.  They are given rules of engagement, get a target list, and they attack.  But all too often, program updates, definition file updates, or dumb human operator error cause mistakes.  It is not unknown for one of these to reset the exception list.  Yes; it has happened – and even happened recently.  Do you really want one of these things to undo the necessary configuration of your Hyper-V cluster – a thing that is effectively a mainframe running many/most/all of your LOB applications – and put them at risk?

So I say: do not install AV on the parent partition or host OS.  Sure, go ahead and install it in the VMs.  If you can, choose an AV product that is aware of things like virtualisation and minimises redundant scanning.  On the host, make sure you apply security fixes.  Keep the service pack up to date.  And keep the Windows Firewall running.  Finally, restrict who has logon rights to the hosts.  If you can, prevent the hosts from having proxy/web access.  People should never browse from a server but I just don’t trust human nature.  All that should secure the parent pretty well.

Now let’s get back to why you’re installing AV on the parent partition.  Odds are there is a security officer who has a list of things that [booming voice] “must be done to all Windows computers” [/booming voice].  And if you do not do these things you will be fired!   One of them is: “you must install anti virus and scan everything because Windows is a threat to life itself”.  Hmm, someone’s been reading the SANS website again!  I hate checklist security experts.

Here’s my response to that person:

  • I’d point them to KB961804.  In fact, you might even want to show them the Microsoft required exceptions list.  It says “recommended” in the title but try having that argument with a MSFT support engineer when your SYSVOL is corrupted!
  • If they insist, then say you’ll comply but you have one requirement.  Never say “no” because that’s career suicide.  Give them a waiver form.  This form will clearly state that you the operator/administrator/engineer/consultant will not be held responsible for any corruption or loss of virtual machines because of the mandate to scan all things on the Hyper-V hosts.  All responsibility will lie with the undersigned security officer – and demand their signature.  Then keep a copy for yourself, give one to your boss, and one to the CIO.  At least then you know who will get fired when incorrectly configured AV causes your VMs to disappear.

It’s funny; security officers are usually career politicians.  And politicians do not like being nailed down to something like that.  Taking responsibility is not in a politician’s nature.  I bet you get your way after that.

Maybe as a compromise, you might offer to take a host offline once in a while to perform a complete system scan of the C: drive.

Anyway, that’s my opinion on the matter.

2011
07.25

Going to BUILD

Assuming the USA lets me in, I’ll be going to the BUILD conference in September.  This is where Microsoft will be opening the taps on Windows 8 information.  It’s mainly aimed at developers and hardware manufacturers but I’m pretty sure there’ll be lots more information.  With no TechEd Europe this Autumn/Winter, I guess this’ll be our only event full of info this side of the new year.

I’ll try to live blog the good stuff, where possible, like I did at TechEd 2008 in Barcelona.  We were given a monstrous amount of info about Windows 7 & Server 2008 R2 back then.

2011
07.25

When I set up my Windows Home Server, I configured the normal Windows Server Backup task to back up the server folders to a USB disk.  That’s nice for normal backup/recovery.  But that doesn’t protect my data (documents, books, whitepapers, and thousands of photos) against fire and theft.  Sure, I could probably swap disks and store them offsite.  But I know how poor my discipline with doing that has been in the past.  I need something automated for off-site backup.

So I decided to try Carbonite.  It’s one of the few online personal backup solutions that will work on WHS.  There’s a 15 day free trial so I signed up for that, and I added the offer code from the TWiT Security Now podcast – that gives you an extra 2 months free in addition to your 12 month subscription (unlimited storage for less than $60/year!!!!).

The install was easy.  The configuration wizard walks you through the few steps.  You’re warned that files like video will not be backed up.  I’m OK with that – I have no personal/holiday videos because I’m a still photo man.  Targeting a folder is easy – use Windows Explorer, right-click, and select the add to backup option.  I had two schedule choices: constantly back up changes, or back up to a schedule.  I went for the first option.

OK, the flaw: I have a 20GB per month limit and I’m on ADSL.  It’s going to take a very long time to get all of my photo collection backed up to the cloud.  I’ve been incrementally adding folders, starting with My Documents, and then I added some of my older photo folders to test.  All worked well.  I’ll continue testing, and then decide next week if I’ll pay for the service.

2011
07.25

It used to be that we had an official page on TechNet for updates for Windows Server 2008 R2 Hyper-V.  It has since been decided to move the Windows Server 2008 R2 Service Pack 1 Hyper-V recommended updates list over to the TechNet wiki where it is community driven.

2011
07.25

I gave a presentation earlier today on the subject of issues I’ve encountered, been asked about, or read about with Hyper-V implementations.  Just about all of them are related to operators or consultants not knowing any better.  Sometimes that’s caused by lack of education and sometimes it’s lack of documentation.  And sometimes … I am left exasperated!

2011
07.24

The story of Daemon  is that a games development genius dies, but that doesn’t stop him from wreaking havoc on the world.  Before he dies, he uses the AI from his games to create a distributed network to enact his will.

This book has what Zero Day didn’t: a hook, something to keep you turning the pages.  In fact, I found it quite addictive.  I was reading it before work, at lunch, and going to bed early to read more.  I finished it this morning and immediately ordered/downloaded the sequel, Freedom.

Whereas Zero Day featured an extremely believable scenario, Daemon goes a little bit more into the sci-fi end of things to add an element of danger.  However, it is still rooted in the believable.  I can’t watch a movie or read a book that features “go hack now” scenarios.  But this book was based on things like trojans, in-game AI, RSS feeds, GPS, and so on.  It just stretched what we know a little to enable the plot, but kept things within an acceptable limit for me.

Over and over, in this book, you’ll see how hacks take advantage of poor patch control.  Spotting a trend?

I reckon that if you work in IT, or find computers interesting, then there’s a really good chance that you’ll like Daemon.  This book can be ordered on Amazon.com.

2011
07.24

Here’s a new book by Mark Russinovich and Aaron Margosis that you can order on Amazon.com.  If you’re a Windows admin, and find yourself needing to troubleshoot difficult issues, then this is essential reading.

“Get in-depth guidance—and inside insights—for using the Windows Sysinternals tools available from Microsoft TechNet. Guided by Sysinternals creator Mark Russinovich and Windows expert Aaron Margosis, you’ll drill into the features and functions of dozens of free file, disk, process, security, and Windows management tools. And you’ll learn how to apply the book’s best practices to help resolve your own technical issues the way the experts do.

Diagnose. Troubleshoot. Optimize.

  • Analyze CPU spikes, memory leaks, and other system problems
  • Get a comprehensive view of file, disk, registry, process/thread, and network activity
  • Diagnose and troubleshoot issues with Active Directory®
  • Easily scan, disable, and remove autostart applications and components
  • Monitor application debug output
  • Generate trigger-based memory dumps for application troubleshooting
  • Audit and analyze file digital signatures, permissions, and other security information
  • Execute Sysinternals management tools on one or more remote computers
  • Master Process Explorer, Process Monitor, and Autoruns”
2011
07.22

Another common question is popping up in my day job, so I reckon it’s another subject that I need to blog about.

Microsoft partners are consumers of the technology too.  They face all the same challenges as their customers: money is tight and software can be expensive.  Good news: you can get it cheaply or even for free.  What you get, and how much you get, all depends on what type of partner you are and what grade and type of competency you have as a Microsoft partner company.

Piracy

A lot of Microsoft partners are using Microsoft software illegally.  That is a fact, and I suspect that it is quite common in the smaller/medium sized partner companies.  They can get a certain allocation of software, but often it is not enough. 

What is it that they are doing to be illegal?  They get their MSDN or TechNet subscription for a handful of users and start using it to deploy production desktops, applications, and servers all over the shop.  MSDN and TechNet have explicit usage rights, and they do not include widespread production usage, e.g your domain controller, file server, everyone’s PC/Office, etc.  The directors may not know this is happening, they may turn a blind eye to it (sticking fingers in ears and repeatedly shouting LAH-LAH-LAH-LAH when the sys-admin tells them the truth – been there), or they may even instruct it to happen (been there too, many years ago).

So how can you, as a Microsoft partner company, get a chunk of software legally for next to nothing?

Microsoft Partner Action Pack

This is an excellent bundle for small companies, even those at the most basic level in the Microsoft Partner Network: a registered partner.  In fact, you cannot have a silver or gold competency and subscribe to this pack!  The eligibility requirements are online.  The Irish rate (per year) is €289 and that includes a big list of software, really aimed at a partner with up to 10 users.  Highlights include:

  • Office Professional Plus (10) + Project (5) + Visio Professional (10)
  • Exchange Standard: 1 server + 10 CALs
  • SQL Enterprise: 1 server + 10 CALs
  • Windows Server: Enterprise (1), CALs (10), Storage Server Essentials (1), SBS Standard (1), SBS CALs (10)
  • Windows 7: Pro (10), Ultimate (1)

A handful of Office on OVS will cost more than all that!

Silver and Gold Competency Holders

These folks tend to be bigger companies; the Partner Action Pack is not suitable for them, nor are they eligible for it.  But don’t worry if you’re here, you get a much bigger allocation of software.  If you qualify for a competency, then you get an allocation of software that you are free to download and use.  What you get will depend on:

  • The competency: developers will get more relevant stuff for them, and systems management people will get more relevant stuff for them.
  • The grade: The gold competency rewards you with more software than the silver one.

Microsoft could have published a nasty matrix.  Instead, there is a simple graphical calculator that allows you to punch in the competencies that your company has, as well as the grades, and it tells you what you are eligible to download and use.

For example, a company with Silver Systems Management and Silver Virtualisation competencies gets stuff including:

  • 2 Exchange Enterprise + 25 CALs + 25 ForeFront for Exchange (and SharePoint)
  • 25 Windows 7 Enterprise + 25 AD RMS + 25 Office Professional Plus
  • 2 Windows Server Datacenter + 4 Windows Server Standard/Enterprise
  • 15 Visio Professional + 5 Project Professional
  • All the System Center stuff
  • And LOTS more

Go Gold with those competencies and you get 100 copies of Office Pro Plus and Windows 7 Enterprise.  There is work involved in becoming a partner, but you can see there is money to be saved.

2011
07.22

Ever wonder what happened to those people that stuck to their horses (quite literally) in the early 1890’s and refused to admit that the automobile was replacing their horse & cart construction biz?

I am getting LOTS of emails from businesses from around the world who are looking for Hyper-V consulting. I’m not really in that business so I cannot help – I work in the “channel” now, working with those companies that do the actual implementation work.

This surge in interest and emails to me had me thinking overnight … there must be a real shortage of quality Hyper-V/System Center expertise around the world. The demand is out there, boosted by certain announcements last week, and it seems like some folks want to stick to making carriages while their customers are looking for some V6 goodness. The customer wants what they want, so they’ll go looking for it, and the local carpenter goes without work.

One of the things that many of these consulting companies miss out on is the potential of a Hyper-V sale. They make the mistake of comparing it to a VMware sale. If you sell VMware virtualisation, you go in, install it, do some P2V, leave, and maybe come back in 2-3 years to get a license renewal. If you sell Hyper-V + System Center Management Suite (often the most economical way to buy/sell SysCtr) then the customer has rights to all of System Center across all of their VMs. You might implement VMM, DPM and some of OpsMgr initially. But after that, you can easily go back to the customer to talk about future possibilities, and find yourself involved in every IT project that happens in that site, even if it is outside of your core skills, e.g. you implemented backup/monitoring, they hire someone else to do CRM, and that project needs … backup/monitoring! Or you install ConfigMgr for the servers, and can now expand it to the desktops, then add on Forefront Endpoint Protection services, and then find yourself doing more and more higher-value security work for that client.

If you are a consulting practice, what do you make your best margin on?

  • Hardware? You’re lucky to make 9-13% in this competitive environment.
  • Software? Hah! If you sell 4 VMware hosts at $40K you might make $4,000 in margin? Maybe VMware will throw you a bone as a finder’s fee? And then that’s the end of your consulting for that virtualisation-only deal with the customer. You’ve also blown that customer’s budget for the year. Whoops!
  • Services? Ah here we go! This is where you are making between $1,000 and $1,800 (if not more) per day from the customer for each person-day on their site – with very large margins. Take that $40K of VMware sales, and call it around $26K in System Center sales (I’ve already shown in the last few days that Hyper-V is free). After the virtualisation project, you’ve left the customer with $14K more in their budget (versus the VMware job). And you’ve left them with licensing for System Center. What do sales people like? They love having reasons to talk to their customer – and now they do because the customer has licensing and budget to deal with technology and business issues and you can target that $14K with services.

If you find yourself being that carpenter, and want to be the money-making Hyper-V/System Center consulting practice, then here are a few ideas:

  • No business has ever made a cent without investment. Despite what you may think, you cannot become an expert in virtualisation and systems management overnight based on some experience with 1980’s email technology. Your staff have to be given the time and the budget to learn. You cannot get anywhere without this real business investment.
  • Anyone fighting the business plan needs to be dealt with. It’s one thing to speak about a strategy out of one side of your mouth; it’s another to actually do what’s required.
  • Sales & marketing staff must be trained. They are not too busy. Are you more concerned about selling horse carts in the next few weeks or having a sustainable business over the next 5+ years?
  • You cannot expect all consultants to become all things to all people. Divide them up and train each person on one or two things. For example, person A might learn Hyper-V and DPM. Person B might learn DPM and VMM. Person C might learn VMM and OpsMgr. Person D might learn OpsMgr and Hyper-V. You’ve spread the skills, allowing everyone time to learn, and given coverage to products in case someone is unavailable. Let them develop those skills on courses, in labs, and in certifications.
  • You will need to hire in skills. Someone has to have an overall view of the technologies.
  • Start the path of obtaining virtualisation and systems management competencies through the Microsoft Partner Network. This requires effort from consultants and Sales. You will not get a competency overnight – you do need past experience with customer satisfaction surveys.
  • Sales and marketing need to promote the service. The work is out there, but sales do not normally come knocking on your door. Here’s where you need to stretch. You may have a core market that you’ve sold to up to now, but the fact that they’ve been happy buying ancient crap from you up to now should tell you something. Find a new customer base. That requires some of that investment and buy-in from the relevant sales/marketing staff.
  • You may have to start small to prove yourself and develop a reputation. You may have to challenge old decision making rules. You may need to reach out to new strategic business partners to add expertise that is outside of your core business.

My inbox proves the work is out there. The ability to penetrate a customer site with virtualisation, and then expand into systems management and security beyond virtualisation, seems like an obvious benefit of doing Hyper-V based services. By selling Hyper-V/System Center versus the alternative, you are also changing how the customer spends their budget with you: instead of selling lots of low-margin software, you are selling less low-margin software and more high-value services. Finally: you’ll also have a business.

2011
07.21

I am not the person to approach if you have questions on Exchange Server or Lync Server.  But Nathan Winters is.  Nathan was an Exchange MVP until he “went blue” (had his firmware changed [some say upgraded] by Redmond) and has been doing large deployments of Exchange and OCS for years in the UK.  And it is good news for those wanting to learn Lync Server 2010 that Nathan is currently slaving away on writing Mastering Lync Server 2010 – in fact I believe the writing phase is nearly over and RTM will be before the end of the year (if not much sooner).  Both authors (and the tech reviewer too AFAIK) are insiders and you can be sure that this read will be as accurate and informative as it can get.  And who knows – the Core CAL Suite will include Lync licensing from August 2011, making this communications tool (one that can eliminate travel and make home working possible) even more economical.

2011
07.21

I am not a licensing expert (and hence my lawyer says you should consult a real one for your requirements), but I do work with a team of them, and every day I learn something.  Over the past few months, I’ve had a lot of conversations about virtualisation (XenServer, Hyper-V, and VMware) and licensing with various users/implementers of the technology.  And I’m finding that two mistakes are being commonly made … and putting those organisations into an illegal situation.

Let’s get to the first one … and it’s one that is common in VMware houses and in organisations that have P2V’d.

Using Windows Server Standard Edition to License Migrating VMs on a Virtual Cluster

I am assuming that you already know that you cannot legally reuse a P2V’d OEM license because that license is tied to the tin it was originally installed on or bought with.  That’s why it was so cheap.

A lot of organisations are licensing their virtual machines with Windows Server Standard, one at a time.  It’s fine to install that edition of Windows Server on a VM.  And there is no issue with using it … as long as that virtual machine does not move from physical host to physical host more than once every 90 days.  I also believe that there is a geographic distance limitation on legally moving that VM (and that one depends on what region you are in AFAIK).  In other words, if you build a virtualised cluster and are VMotion-ing or Live Migrating VMs (each licensed with individual copies of Windows Server Standard) around (manually, PRO, DRS) more than once every 90 days then you are breaking the licensing rules of Windows Server Standard edition and are subject to punishment.
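To make that 90-day rule a bit more concrete, here’s a minimal sketch – my own illustration, not any official tool – that takes a record of when each Standard licence was reassigned between hosts and flags anything that happened within 90 days of the previous assignment.  The assignment data below is made up.

# Sketch: flag Windows Server Standard licence reassignments that break the
# "no more than once every 90 days" rule.  The assignment data is made up.
$assignments = @(
    @{ Licence = 'STD-001'; TargetHost = 'HostA'; Date = [datetime]'2011-01-10' },
    @{ Licence = 'STD-001'; TargetHost = 'HostB'; Date = [datetime]'2011-02-20' },   # 41 days later - problem
    @{ Licence = 'STD-002'; TargetHost = 'HostA'; Date = [datetime]'2011-01-10' },
    @{ Licence = 'STD-002'; TargetHost = 'HostB'; Date = [datetime]'2011-05-01' }    # 111 days later - fine
)

$assignments | Group-Object { $_.Licence } | ForEach-Object {
    $moves = $_.Group | Sort-Object { $_.Date }
    for ($i = 1; $i -lt $moves.Count; $i++) {
        $gap = ($moves[$i].Date - $moves[$i - 1].Date).Days
        if ($gap -lt 90) {
            Write-Output ("{0}: moved {1} -> {2} after only {3} days" -f $_.Name, $moves[$i - 1].TargetHost, $moves[$i].TargetHost, $gap)
        }
    }
}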

A really common instance of this mistake is a VMware house.  They don’t realise or haven’t been educated by their VMware reseller/implementer about correct (and cheaper in dense environments!) Windows licensing in a virtualised environment.  The implementer either mistakenly sees it as irrelevant in VMware-world or is just plain uneducated.

Here’s the truth: ever since 2004 or 2005 (I can’t remember when and am too lazy to google it) we can license Windows as follows in a virtualised environment:

  • Windows Server Standard: Assign 1 license to the host (used for the host OS if it runs Hyper-V, or left unused if the host runs Xen or VMware) and get 1 free license for a VM on that host.
  • Windows Server Enterprise: Assign 1 license to the host (same as Standard) and get up to 4 free licenses (with downgrade rights) for VMs on that host.
  • Windows Server Datacenter: Assign per-processor licenses to the host (socket, not core; minimum of 2 per host) and get unlimited free licenses (with downgrade rights) for VMs on that host.

It feels silly that I’m rehashing this.  This should be common knowledge, just like knowing that you need to plug a power cable into a computer to start it up.  But it just does not seem all that common.

Have you made this “licensing with individual copies of Windows Server Standard” mistake?  Think you’ll get away with it?  Hah!  Your Microsoft reseller has records, their distributor(s) have records, and Microsoft has records.  And those records get looked at every quarter or half year.  It’s easy to see who has what, and these days it is assumed that virtualisation is being used.  For example, if someone looks at a customer’s records and sees 40 copies of Windows Server Standard, they may assume that Windows Server Standard has been deployed on a reasonably sized virtualisation farm and that DRS/VMotion/Live Migration is enabled.  That customer is possibly using those licenses illegally, and their name is added to the audit list of someone like the Business Software Alliance (BSA).

Under-Licensing a Virtualisation Cluster with Windows Server Enterprise

This one is common in small/medium companies.  A customer wants/deploys a virtualisation cluster (Xen, VMware or Hyper-V) with two hosts and between 5 and 8 virtual machines.  The virtualisation cluster will be active-active and virtual machines will be balanced across both hosts.

image

Each host is licensed with Windows Server Enterprise edition.  That provides up to 4 free copies of Windows Server for VMs running on those hosts.  Sweet; everything is licensed pretty economically because it works out cheaper than buying lots of copies of Standard edition, even if using XenServer or VMware for the hosts.  It’s an active-active cluster.  So from time to time VMs might move around for performance load balancing (DRS or PRO).  That might mean there could be 5 VMs on one host and 3 on the other.  Or there could be a host failure/maintenance window and that would mean host A could have 8 VMs and host B would have 0.

image

Remember that Windows Server Enterprise gives you up to 4 free licenses for VMs on the host the license is assigned to.  In this case, 1 license is assigned to Host A and 1 license is assigned to Host B.  This customer is now illegally licensed because they have 8 VMs on Host A running Windows Server, but are only covered for 4.  It doesn’t matter if it’s a temporary thing.  It is illegal.  And this is quite common.

The correct way to license this is to either:

  1. Purchase 2 copies of Windows Server Enterprise for each host, allowing up to 8 VMs per host for those DRS/PRO/failover situations.  Remember that each host will then be legally limited to a maximum of 8 Windows Server VMs, even in emergencies (the sketch after this list shows the arithmetic).
  2. Purchase Windows Server Datacenter per processor for each host (minimum of 2 licenses per host), allowing unlimited VMs per host, thus making it the most flexible option.
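If you want to sanity-check a design before (or after) it gets signed off, the arithmetic is simple enough to script.  Here’s a minimal sketch – my own illustration with made-up numbers – that compares the Windows Server VMs on each host against what the assigned licences actually entitle you to; the two-host Enterprise scenario above fails it the moment a failover piles all 8 VMs onto Host A.

# Sketch: per-host virtualisation rights - Standard = 1 VM, Enterprise = 4 VMs,
# Datacenter = unlimited, per assigned licence.  Host and VM numbers are made up.
$vmRightsPerLicence = @{ Standard = 1; Enterprise = 4; Datacenter = [int]::MaxValue }

$hosts = @(
    @{ Name = 'HostA'; Edition = 'Enterprise'; Licences = 1; WindowsVMs = 8 },   # after a failover
    @{ Name = 'HostB'; Edition = 'Enterprise'; Licences = 1; WindowsVMs = 0 }
)

foreach ($h in $hosts) {
    $covered = $vmRightsPerLicence[$h.Edition] * $h.Licences
    if ($h.WindowsVMs -gt $covered) {
        Write-Output ("{0}: {1} Windows Server VMs but only {2} covered - under-licensed" -f $h.Name, $h.WindowsVMs, $covered)
    } else {
        Write-Output ("{0}: OK ({1} of {2} covered VMs in use)" -f $h.Name, $h.WindowsVMs, $covered)
    }
}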

Summary

You need to understand how Standard/Enterprise/Datacenter licensing works in virtualisation, just like you need to know that you have to buy a copy of Office for every one you install.  For each deployment, you need to understand:

a) Will there be VMotion/Live Migration/DRS/Dynamic Optimization/Power Optimisation or whatever where the VMs will move around more than once every 90 days?

b) If you license VMs at the host level with Enterprise, will the number of VMs ever exceed the licensed number for that host, even if just for a very short period of time?

If you are at all confused, then call a real licensing expert, and not just your virtualisation reseller/implementer.

I know VMware marketing are reading this blog and try to misquote it or make smart comments here from time to time.  Everything here applies to the legal licensing of VMs, no matter what virtualisation is used.  In fact, license your host with Enterprise or Datacenter (getting licensing for your VMs in the process) and a fully featured Hyper-V is just a tick box and 2 reboots away, saving you on that ever increasing vTax.  So take that, stuff it in your pipe, and smoke it :)

2011
07.20

Yesterday I wrapped up a proof-of-concept deployment of Office 2010 with SP1 via System Center Configuration Manager 2007 R3.  It was a nice one: branch distribution points, client deployment in a mature XP network, etc.

Here’s a rough idea of what I did:

  • Install a site server in the central site.  Local SQL installation to make backup/recovery more manageable via the ConfigMgr backup task.  Boundaries were defined (the IP subnets in the ConfigMgr site).  Enable auto discovery from AD every hour.  Small network (by ConfigMgr standards) and it’s good to get changes frequently if using groups for collections.
  • Deployed branch distribution point in the local site.  I set the sample one up as a protected BDP.  This associates the subnets of the branch office with the BDP, restricting access to clients in that site.
  • Deployed some ConfigMgr clients to test machines by hand.  I did not enable client push installation (proof of concept).
  • Packaged Office 2010 using setup /admin.  Note I used SETUP_REBOOT in the setup properties (Office Customization Tool) and set it to Never.  This prevents Office 2010 setup from rebooting the machine if previous versions of Office are running during setup.  Without it, Office 2010 setup would reboot the PC with no notice to the user – bad!  Instead, I configured the package program to let ConfigMgr reboot the PC (no matter what – probably not a bad thing anyway).
  • Slipstreamed Office 2010 Service Pack 1 into the package.
  • Distributed the package to the Site Server’s distribution point and to the BDP.  Forced the BDP to download the package by running the BDP maintenance task in the BDP server’s Configuration Manager client (Control Panel).
  • Set up a proof-of-concept collection.
  • Advertised the package setup program to the collection.  Forced policy refresh on the test machines by running the machine policy refresh in the ConfigMgr client (Control Panel) – a remote way of doing the same thing is sketched after this list.
  • Sat back and watched the goodness.
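Clicking through the Control Panel applet is fine for a couple of test machines, but the same machine policy cycle can also be kicked remotely over WMI.  A quick sketch – the computer names are placeholders, and the GUID is the well-known Machine Policy Retrieval & Evaluation Cycle schedule ID:

# Sketch: trigger the ConfigMgr client's Machine Policy Retrieval & Evaluation Cycle
# remotely instead of opening the Control Panel applet on each test PC.
$testMachines = 'TESTPC01', 'TESTPC02'   # placeholder names
foreach ($pc in $testMachines) {
    Invoke-WmiMethod -ComputerName $pc -Namespace 'root\ccm' -Class 'SMS_Client' -Name 'TriggerSchedule' -ArgumentList '{00000000-0000-0000-0000-000000000021}'
}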

For production deployment:

  • We wanted to restrict client deployment impact on the network.  I copied the client setup files into SYSVOL and created a .bat script to run CCMSETUP with the property that defines the site code (that script is sketched a little further down).  Copying into SYSVOL means the ConfigMgr client setup files replicate to DCs in every site.  I set up a GPO to run a startup script that would execute this .bat file.  That GPO could be linked to appropriate objects in AD to force setup of the client on machines.  They’d install from the local SYSVOL and eliminate any WAN impact.  Eventually, the GPO can be removed/unlinked, and client push installation can be enabled, thus hitting those last few machines that haven’t rebooted (to get the startup script to run) or any new machines that are added to the domain.  I also find that this scripted solution tends to get me better results in a mature XP network.
  • Office 2010 is to be deployed one site at a time.  The AD sites/OUs don’t match the physical sites (not all that unusual) so I set up a collection definition where: (system role = workstation) AND (network configuration IP address = 192.168.1.% OR network configuration IP address = 192.168.2.%).  This will include all XP (or later) PCs on the site’s subnets in the collection, and exclude server machines.  A rough WQL version follows this list.
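That collection definition translates roughly into the following WQL.  It’s a sketch only – I’ve filtered on the OS name rather than the system role attribute, and the class/attribute names are from memory – so verify it in the query designer before using it.

# Sketch of the per-site collection query as WQL.  Verify class/attribute names
# in the ConfigMgr query designer; the subnets are the example ones from above.
$collectionQuery = @"
select SMS_R_System.ResourceId
from SMS_R_System
where SMS_R_System.OperatingSystemNameandVersion like "%Workstation%"
  and (SMS_R_System.IPAddresses like "192.168.1.%"
    or SMS_R_System.IPAddresses like "192.168.2.%")
"@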

From there, a new advertisement can be created to run the Office 2010 SP1 install at a pre-scheduled time.  ConfigMgr reports can be monitored to see which exceptions (problems) need to be dealt with.  The clients in the site will install from the local BDP.
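Going back to the client rollout for a second: the startup script itself only needs a few lines.  I used a .bat file, but here’s the same logic sketched in PowerShell – the ABC site code and the SYSVOL path are placeholders, and the only check is whether the CcmExec (SMS Agent Host) service is already present.

# Sketch of the startup script logic: install the ConfigMgr client from SYSVOL
# if it isn't already present.  Site code and path are placeholders.
$siteCode = 'ABC'
$source   = '\\yourdomain.local\SYSVOL\yourdomain.local\scripts\ConfigMgrClient'   # where the ccmsetup files were copied

if (-not (Get-Service -Name 'CcmExec' -ErrorAction SilentlyContinue)) {
    & "$source\ccmsetup.exe" "SMSSITECODE=$siteCode"
}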

For following sites, one at a time:

  • Add the branch office subnets to the ConfigMgr site boundaries.
  • Install a BDP and protect it with the site’s subnets from the boundaries list.
  • Distribute the Office 2010 package to the BDP.
  • Create a new collection specifying the subnets with the % wildcard.
  • Advertise the Office 2010 package program.

For something like this, you need to test, test, test.  You cannot test enough.  Sounds like a lot of work, but your up front time investment saves a bunch of time and money on the back end, versus a manual install to hundreds or thousands of PCs.  This works out being not so bad if you license intelligently too: ConfigMgr + SQL combined with a (desktop) Core CAL Suite (includes a bunch of CALs and a ConfigMgr management license).  And after that, you have a fine solution in ConfigMgr to manage the entire life cycle of the PCs you manage:

  • Zero touch OS image deployment
  • Software deployment
  • Patching (MSFT and third party)
  • Desired configuration management (2012 adds auto rectify)
  • Software/hardware auditing
  • License auditing/usage measurement
  • Power monitoring/policy enforcement (saving money!)
  • 2012 also adds “user centric computing” and Android/iOS device management
  • Reporting on more than you could dream of … all the way to identifying those machines that you need to replace.
  • And Dell/HP are fully invested in it as a solution, recognising the power it adds for their customers.

Jeez, I’ve totally gone over to the dark side of sales :) Despite that, I love ConfigMgr; it allows me to play out my megalomania fantasies, even if they are limited to absolutely everything in the AD forest that I can get a ConfigMgr client onto.

2011
07.20

Odds are you’ve already read about this on Ben Armstrong’s blog (I’ve been engaged on an intense deployment project with little time to keep the blog up to date), but it’s worth me posting just in case.  Microsoft showed off Windows Server 8 (2012? MSFT have a thing against the number 13 so I doubt it will be Windows Server 2013) for the first time and featured Hyper-V.  Hyper-V Replica was on show, allowing a VM to be replicated to another (possibly remote) Hyper-V host.

The video is online and you can jump to around the 37 minute mark to start hearing about Windows 8.

It seems we have two authentication methods for the replication:

  • HTTP aka Windows authentication: Probably for hosts inside the same forest.
  • HTTPS aka certificates: Maybe for hosts in different forests?  Could be great for replicating to a “public cloud”?  Pure guessing. 

File this under “we’ll learn more at/after the Build Conference in September”.

An interesting screen shot is this one:

image

Cool: we can optionally keep a history of replicas!  Maybe a VM’s OS or application corrupts in site A, but we can restore a previous version in site B from before the corruption started?  And it appears to allow us to use VSS to take snaps every X hours to get consistent replicas.  That’s critical for things like Exchange or SQL Server.

A big challenge for replication is getting that first big block of data over the WAN.  MSFT has thought of that, as you can see below.  We can schedule it for out of hours, export to removable media and import on the destination host, or use backup/restore (apparently).

image

This is just a simple wizard to get something complex (under the hood) to work.  And it’s software based so it should work with any Hyper-V supported hardware.

This was a very early build on show at WPC11 so things are subject to change.  We’ll learn more, I guess, at/after the Build conference.  Until then, everything is speculation so don’t plan your deployments until at least the RC release next year!

Jeff Woolsey says in the video that this will be an alternative to those very expensive hardware replication mechanisms that are the only option right now.  Yup.  It’s also an alternative to VMware’s vTax-laden option, because Hyper-V Replica will be a built-in feature at no extra cost.
