2014
08.14

There’s a new craze out there with famous people called the Ice Bucket Challenge. A person is dared to take a bucket of ice water over the head (and post the video online) or donate to charity, all in the name of “raising awareness” of a disease called ALS. Nadella and Zuckerberg have done it. Gates has been challenged.

2014
08.13

I’ve recently started doing lots of presentations on Azure thanks to the release of Azure via Open licensing. People wonder what the scenarios are where an SME would deploy machines in Azure and on premises. Here’s one I came up with this morning (an evolution of one I’d looked at before).

I was chatting with one of my colleagues about a scenario where a customer was looking at deploying ADFS to provide Office 365 authentication for a medium-sized multinational company. I wondered why they didn’t look at using Azure. Here’s what I came up with.

Note: I know SFA about ADFS. My searches make me believe that deploying a stretch ADFS cluster with a mirrored SQL backend is supported.

image

The company has two on-premises networks, one in Ireland and one in the USA. We’ll assume that there is some WAN connection between the two networks with a single AD domain. They have users in Ireland, the USA, and roaming. They want ADFS for single sign-on and they need it to be HA.

This is where companies normally think about deploying ADFS on-premises. Two issues here:

  • You need local infrastructure: Not so bad if you have spare licensing and hardware capacity on your hosts, but that’s not a given in an SME.
  • Your ISP becomes a risk: You will place ADFS on premises. Your office has a single Internet connection. A stray digger or ISP issue can put the entire business (not just that office) out of action because ADFS won’t be there for roaming/remote users to authenticate with O365.

So my original design was to stretch the network into Azure. Create a virtual network in an Azure region that is local to your Office 365 account (for example, an Irish O365 customer would deploy a virtual network in Azure Europe North). Create a site-to-site VPN network to connect the on-premises network to the Azure VNet. Then deploy an additional DC, in the same domain as on-premises, in the Azure VNet. And now you can create an ADFS cluster in that site. All good … but what about the above multi-national scenario? I want HA and DR.
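For that single-region design, the gist in the (2014-era) Azure PowerShell Service Management module looks something like the sketch below. Treat it as a rough outline rather than a recipe — the network configuration file, subscription name, VNet name, and local site name are all placeholders, and building the DC and ADFS cluster inside the VMs is a separate job.

```powershell
# Sketch only: names and paths are placeholders.
# Assumes the Azure PowerShell (Service Management) module is installed and
# a NetworkConfig.xml describing the VNet, DNS, local site and gateway subnet.

Add-AzureAccount
Select-AzureSubscription -SubscriptionName "MySubscription"

# Upload the virtual network configuration (VNet, DNS, local network site)
Set-AzureVNetConfig -ConfigurationPath "C:\Azure\NetworkConfig.xml"

# Create the gateway for the site-to-site VPN
New-AzureVNetGateway -VNetName "VNet-EuropeNorth" -GatewayType StaticRouting

# Get the gateway IP and shared key needed to configure the on-premises VPN device
Get-AzureVNetGateway -VNetName "VNet-EuropeNorth"
Get-AzureVNetGatewayKey -VNetName "VNet-EuropeNorth" -LocalNetworkSiteName "OnPrem-Ireland"
```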

Deploy an Azure VNet for the Ireland office (Azure Europe North) and one for the USA office (Azure USA East) and place virtual DCs in both. Connect both VNets using a VPN. And connect both on-premises networks to both VNets via site-to-site VPNs. Then create an ADFS stretch cluster (mirrored SQL cluster) that resides in both VNets. Now the company’s users (local, roaming and remote) have the ability to authenticate against O365 using ADFS if:

  • Either or both on-premises networks go offline
  • Either Azure region goes offline

As I said, I am not an ADFS person, so I’ll be interested in hearing what those who know ADFS think of this potential solution.

2014
08.13

Overnight, Microsoft released the August 2014 Update Rollup for WS2012 R2 and Windows 8. Lots of hotfixes!

2014
08.13

Microsoft released a hotfix that includes a microcode update for Intel processors to improve the reliability of Windows Server. It affects Windows Server 2012 R2, Windows Server 2012 and Windows Server 2008 R2 Service Pack 1 (SP1). The fix also solves a reliability problem for Hyper-V running on Ivy Bridge, Ivy Town, and Haswell processors.

A supported hotfix is available from Microsoft.

Note: the hotfix for Windows Server 2008 R2 SP1 will be available in September 2014.

This update reminds me of a similar update that was released soon after the RTM of W2008 R2 to deal with issues in the Nehalem CPU. Without the fix, there were random BSODs. I got tired of telling people, so-called expert consultants, to install the fix. Make a note of this fix: test it if you want to deploy immediately, or wait one month and then install it. But make sure you install it – set something in your calendar NOW to remind yourself.

2014
08.13

A new KB by Microsoft covers a scenario where you get an “Access denied” error when the Hyper-V Replica Broker goes online in a Windows Server 2012 or Windows Server 2012 R2 cluster.

Symptoms

Consider the following scenario:

  • You have a Windows Server 2012 R2 or Windows Server 2012 failover cluster that is in a domain, and the domain has a disjoint namespace. 
  • You set the primary Domain Name Service (DNS) suffix of the Windows Server 2012 failover cluster to the disjoint domain name.
  • You create a Hyper-V Replica Broker in the failover cluster, and then you bring the Hyper-V Replica Broker online.

In this scenario, this issue occurs, and an error message that resembles the following is logged in the cluster log:

Virtual Machine Replication Broker <Hyper-V Replica Broker BROKER>: ‘Hyper-V Replica Broker BROKER’ failed to register the service principal name: General access denied error.

The fix is included in the August 2014 update rollup.

2014
08.13

This KB informs us that Microsoft added much-needed performance counters to Windows Server 2012 R2 for monitoring tiered Storage Spaces. You can find more details here. The new perfmon metrics are:

  • Avg. Tier Bytes/Transfer
  • Tier Transfer Bytes/sec
  • Avg. Tier Queue Length
  • Avg. Tier sec/Transfer
  • Tier Transfers/sec
  • Current Tier Queue Length
  • Avg. Tier Bytes/Write
  • Tier Write Bytes/sec
  • Avg. Tier Write Queue Length
  • Avg. Tier sec/Write
  • Tier Writes/sec
  • Avg. Tier Bytes/Read
  • Tier Read Bytes/sec
  • Avg. Tier Read Queue Length
  • Avg. Tier sec/Read
  • Tier Reads/sec
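If you want to sample these from PowerShell rather than Perfmon, something like the following should do it. Note that I’m assuming the counter set is exposed under a name like “Storage Spaces Tier” – check Get-Counter -ListSet on your own server for the exact name before relying on it.

```powershell
# List the tiering-related counter sets available on this server
Get-Counter -ListSet "*Tier*" | Select-Object CounterSetName, Counter

# Sample a few of the new tier counters every 5 seconds
# ("Storage Spaces Tier" is an assumed set name - verify with the command above)
Get-Counter -Counter @(
    "\Storage Spaces Tier(*)\Tier Transfers/sec",
    "\Storage Spaces Tier(*)\Avg. Tier sec/Transfer",
    "\Storage Spaces Tier(*)\Current Tier Queue Length"
) -SampleInterval 5 -MaxSamples 12
```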
2014
08.12

Welcome to the SMB 3.02 edition of this update. Jose Barreto has been very busy!

Nanu nanu!

2014
08.11

I think we can call today’s issue “What’s New in Azure”:

2014
08.08

The San Francisco 49ers (an NFL or American Football team) are based in Santa Clara, California. Nearby you will find Cupertino, the HQ location of Apple. Also nearby, you will find Mountain View, the HQ location of Google.

image

What tablet did I see the 49ers using on the sideline in a preseason game against the Ravens last night?

image

Let’s take a closer look:

image

Hmm, that’s not the Apple square button and it sure ain’t Android. The announcers went on to mention that the NFL has a sponsorship agreement with Microsoft Surface. Note the stylus? I reckon that’s a Surface Pro (not the 3, based on the shape). Apparently the league only allows sideline tech such as this for analysing still pictures (a full field shot is taken just before and after a play starts for later analysis by coaches and players).

Previously, a junior staff member printed out booklets of black and white photos and ran them to the coaches/players on the sideline. That took at least 30 seconds, and the booklets must have been a mess to use and keep organised. Now colour images (see above) are transmitted straight to the Windows tablets and presented in a tiled touch interface. You can see below that some coaches like the new system, and some do not:

image

It’s interesting to see a team such as the Niners, who have just built the most technology-centric stadium on the planet in the shadows of Apple and Google, using Windows and the Surface.

2014
08.08

I read a comment today that Storage Spaces was great for small/medium deployments. And yup, it is. I use Storage Spaces to store my invaluable photo library at home (a pair of Toshiba USB 3.0 3 TB drives). At work, we use a single DataOn Storage DNS-1640 24 x slot JBOD that is dual SAS attached to a pair of 2U servers to create an economical Hyper-V cluster. And we have sold 2U DataOn Storage CiB-9220 “Cluster in a Box” units for similar deployments in SMEs.

But most of our sales of JBODs have actually been for larger deployments. Let me give you an example of scalability using an image from my software-defined storage slide decks:

image

In the above diagram there are 4 x DataOn Storage DNS-1660 JBODs. Each has 60 x 3.5” disk slots. Using 6 TB drives (recently certified by DataOn) that gives you up to 1440 TB or just over 1.4 petabytes of raw storage. That’s with 7200 RPM drives and that just won’t do. We can mix in some dual channel SAS SSDs (using 3.5” to 2.5” adapters) to offer peak performance (read and write).

In the above design there are 4 SOFS cluster nodes, each having 2 x direct SAS connections to each JBOD – 4 JBODs, therefore 8 SAS connections in each server. Remember that each SAS cable carries 4 SAS lanes, so a 6 Gb SAS cable actually offers 24 Gbps of throughput.
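The capacity and bandwidth figures above are easy to sanity-check. Here’s the arithmetic as a quick PowerShell scratchpad (figures taken from the post; adjust for your own drive and cable counts):

```powershell
# Raw capacity: 4 JBODs x 60 slots x 6 TB drives
$jbods = 4; $slotsPerJbod = 60; $driveTB = 6
$rawTB = $jbods * $slotsPerJbod * $driveTB      # 1440 TB, just over 1.4 PB

# SAS throughput per cable: 4 lanes x 6 Gbps = 24 Gbps
$lanesPerCable = 4; $gbpsPerLane = 6
$gbpsPerCable = $lanesPerCable * $gbpsPerLane    # 24 Gbps

# Each SOFS node: 2 cables to each of the 4 JBODs = 8 cables
$cablesPerNode = 2 * $jbods                      # 8
$nodeGbps = $cablesPerNode * $gbpsPerCable       # 192 Gbps of raw SAS bandwidth per node

"{0} TB raw, {1} Gbps SAS per node" -f $rawTB, $nodeGbps
# -> 1440 TB raw, 192 Gbps SAS per node
```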

Tip from DataOn: If you’re using more than 48 drives then opt for 12 Gb SAS cards, even if your JBOD runs at 6 Gb; the higher-spec cards’ circuitry performs better even with the lower speed SAS disks/JBODs.

Now this is where you say that this is all great in theory but surely no one is doing this. And there you would be wrong. Very wrong. MVP Carsten Rachfahl has been deploying large installations since late 201 in Germany. The same is also true of MVPs Thomas Maurer and Michael Rüefli in Switzerland. At my job, we’ve been selling quite a few JBODs. In fact, most of those have been to replace more expensive SAN installations from legacy vendors. This week I took this photo of the JBODs in the above architecture while they were passing through our warehouse:

Yup, that’s potentially over 1 PB of raw storage in 16U of rack space sitting on one shipping pallet. The new owner of that equipment is building a SAS solution that will run on Hyper-V and use SMB 3.0 storage. They’ll scale out bigger and cheaper than they would have done with their incumbent legacy storage vendor – and that’s why they’re planning on buying much more of this kind of storage.

2014
08.08

It looks like you will have to use the latest version of IE to be supported after January 2016. That’ll go down like the Hindenburg in businesses.

2014
08.07

Very little happening. These quiet times are great for rumours.

Oh – and don’t use Generation 2 virtual machines on WS2012 R2 Hyper-V.

2014
08.06

I’ve done photography in some of the most rural parts of the world, but I’ve never been without phone or Internet for 3 days before. *exaggeration alert*  Being in a dark valley in Scotland over a long weekend was like having an arm removed. Anywho, here’s the news from the last few days. Note that there is an “August Update for …” Windows 8.1 and Windows Server 2012 R2 coming out next week, which the media will probably call “Update 2 for …”.

2014
08.01

Talk about crappy timing. A federal court in the USA has determined that emails are business records rather than private communications, and therefore Microsoft must turn over emails stored on servers in the Dublin region to the FBI. One must wonder why the FBI didn’t contact the Irish authorities, who would have jumped at once if the case was legitimate and issued an order locally. Maybe the case is not actually legitimate?

On the eve of Azure going big through Open licensing, a federal judge has stuck a stake through the heart of the American IT industry – this is much bigger than Microsoft, affecting Google, Apple, Oracle, IBM, HP, Dell, and more. Microsoft has already lodged an appeal.

2014
08.01

It is August 1st, and today is the very first day that you can buy credit for usage on Azure through Open Licensing. This includes Open, OV, and OVS, as well as educational and government schemes.

How Does It Work?

The process is:

  1. A customer asks to buy X amount of credit from a reseller – the next bit of stuff is normal licensing operations that the customer does not see.
  2. The reseller orders it from a distributor.
  3. The distributor orders the credit from Microsoft.
  4. A notification email is sent out to the customer with a notification to download an OSA (online services activation) key from their VLSC account (used to manage their Open volume licensing). The customer is back in the process at this point.
  5. The customer/partner enters the OSA key in the Azure Account Portal.
  6. The customer/partner configures Azure administrator accounts and credit alerts.

Credit is purchased in blocks of $100. I believe that it is blocks of €75 in the Euro zone. So a customer can request $5000 in credit. They don’t get 50 OSA keys: they get one OSA key with a value of $5000.

Whose Account Should We Use?

If you are a customer and the MSFT partner wants to set you up under their Azure account, tell them to frak right off. The VMs will be THEIR property. The data will be THEIR property. We have seen this situation with Office 365. Customers have lost access to data for months while MSFT’s legal people try to determine who really owns the data. It is MESSY.

The MSFT partner should always set up the customer’s Azure deployment using a Microsoft Account that is owned by the customer. Additional administrators can be configured. Up to 5 alerts can be configured to send notifications to the reseller and the customer.

Using Credit

“How much will doing X in Azure cost?” and “How much Azure credit do I need to buy?” will be the two most common questions we distributors will hear in the next 12 months. Ask me and I’ll respond with one of two answers:

  • If I’m in a good mood I’ll tell a consultant to go do some frakking consulting. How the frak am I meant to know what your customer’s needs are? And that’s if I’m in a good mood :)
  • If I’m in a bad mood I might award you with a LMGTFY award and make you famous :D

The answer is based on how credit is used. You buy credit, and everything you do in Azure “burns” that credit. It’s like having credit on a pay-as-you-go (aka “burner”) phone. If you do A then it costs X per minute. If you do B it costs Y per month. Go look at the Azure pricing calculator.

Not all “Azure” services can be purchased via credit. Examples include Azure AD Premium and AD RMS that are licensed via other means, i.e. SaaS like Office 365. Their branding under the Azure banner confuses things.

Credit Time Limits

Your credit in Azure will last for 12 months. It will not roll over. There are no cash-backs. Use it or lose it.

My advice is that you start off by being conservative with your purchasing, determine your burn rate and purchase for X months, rather than for Y years.
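To put rough numbers on that advice, here’s a trivial sketch of the sums a reseller or customer will do. The monthly burn figure is whatever the customer’s usage history or the Azure pricing calculator tells you – the values below are made up for illustration:

```powershell
# Made-up example figures: replace with the customer's measured burn rate
$monthlyBurn = 420          # dollars of credit consumed per month
$monthsToCover = 4          # buy for a few months, not a year
$blockSize = 100            # credit is sold in $100 blocks

$needed = $monthlyBurn * $monthsToCover
$blocks = [math]::Ceiling($needed / $blockSize)
"Buy {0} blocks (`${1}) to cover {2} months at `${3}/month" -f $blocks, ($blocks * $blockSize), $monthsToCover, $monthlyBurn
# -> Buy 17 blocks ($1700) to cover 4 months at $420/month
```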

Topping Up Credit

You should have configured the email alerts for when credit runs low. If credit runs out then your services shut down. I hope you reserved VIP and server IP addresses!

When you get an alert you have two options:

  • Normal procedure will be to purchase additional credit via the above reseller model. With alerts, the MSFT partner can initiate the conversation with their customer. Obviously this takes a little while – hours/days  (I have no idea because I’m outside of the logistics of licensing).
  • If the customer runs out of credit and the reseller process will take too long or it’s a weekend, the customer can use a credit card to top up their account in the Azure Account Portal. This should be an emergency operation, adding enough credit for the time it will take to top up via the reseller.

Note that old credit is used first, to limit wastage because of the 12 month life of credit.

The Benefits of Open

For the customer, they can use Azure in a controlled manner. You don’t have to buy thousands of dollars of credit through a large enterprise EA license program. You don’t have unmanageable payment via a credit card. You buy up front, see how much it costs, and deploy/budget accordingly.

For the partner it opens up a new world of business opportunities. Resellers have a reason to care about Azure now, just like they did with Office 365 when it went to Open (and that business blew up overnight). They can offer the right solution for customers, private (virtual or cloud), hybrid cloud or public cloud. And they can build a managed services business where they manage the customers’ Azure installations via the Azure Management Portal.

Distributors also win under this scheme by having another product to distribute and build services around.

And, of course, Microsoft wins because they have a larger market that they can sell to. MSFT only sells direct to the largest customers. They rely on partners to sell to the “breadth market”, and adding Azure to Open gives a reason for those partners to resell Azure on Microsoft’s behalf.

2014
07.31

Microsoft published a KB article to help you when the Hyper-V Best Practice Analyzer (BPA) does not exit or appears to hang/crash.

Symptoms

Hyper-V Best Practice Analyzer (BPA) does not exit under the following conditions:

  • A virtual machine already exists.
  • The virtual machine is connected to a vhd or vhdx as its hard disk drive; however, the vhd or vhdx file has been renamed or deleted, and no longer exists on disk.

Cause

The PowerShell script as seen here runs internally when running the Hyper-V BPA:

C:\Windows\System32\BestPractices\v1.0\Models\Microsoft\Windows\Hyper-V\Hyper-V.ps1

However, due to a defect in the script, the information retrieval process goes into a loop, and the BPA does not exit until timeout.

Workaround

You need to delete the non-existing vhd or vhdx from the virtual machine settings, and then rerun BPA for Hyper-V by following these steps:

  1. Start Hyper-V Manager.
  2. Select the virtual machine that is connected to a non-existing vhd or vhdx, then right-click and open Settings.
  3. From the virtual machine settings window, click on the non-existing hard drive, and then click Delete.
  4. Click OK to close the virtual machine setting window.
  5. Rerun BPA for Hyper-V from Server Manager.
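The GUI steps above have an obvious PowerShell equivalent. This sketch finds and removes any virtual hard disk entries whose file no longer exists on disk – it only checks VMs on the local host, and it keeps -WhatIf in place so you can review before committing:

```powershell
# Find VM hard disk entries that point at missing .vhd/.vhdx files
$stale = Get-VM | Get-VMHardDiskDrive | Where-Object { -not (Test-Path $_.Path) }

# Review what would be removed
$stale | Format-Table VMName, ControllerType, ControllerNumber, ControllerLocation, Path

# Remove the stale entries (drop -WhatIf when you are happy with the list)
$stale | Remove-VMHardDiskDrive -WhatIf
```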

The article claims to apply to Windows Server 2012 (WS2012).

2014
07.31

Very quiet 24 hours in the Microsoft world. The only bit of news I have for you is Microsoft’s newest (48 hours old) statements regarding the US government trying to spy on non-USA located emails.

2014
07.30

The big news here for MSFT techies are the releases of update rollups for SysCtr 2012 SP1 and SysCtr 2012 R2. Please wait 1 month before deploying to avoid the inevitable issues (history indicates that I am probably right) and use that time to carefully review the installation instructions.

2014
07.30

I do not know what the root cause of my location-specific outage last Friday was. I know that my Vodafone Ireland broadband at home was affected. I also know that Sky Ireland broadband was affected. But others internationally and the ISPs at work had no issues. It was all very strange … and the problem appears to have sorted itself out today (the following Wednesday).

Anywho, business (and sarcy posts) as normal!

2014
07.29

Another slow 24 hours:

2014
07.28

It was a quiet weekend. Note a useful script for health checking a Scale-Out File Server (SOFS) by Jose Barreto.

2014
07.28

If you’re affected by this issue then you should read this post. Microsoft posted a KB article for when virtual machines lose network connectivity when you use Broadcom NetXtreme 1-gigabit network adapters on Windows Server 2012 Hyper-V or Windows Server 2012 R2 Hyper-V.

Symptoms

When you have Hyper-V running on Microsoft Windows Server 2012 or Windows Server 2012 R2 together with Broadcom NetXtreme 1-gigabit network adapters (but not NetXtreme II network adapters), you may notice one or more of the following symptoms:

  • Virtual machines may randomly lose network connectivity. The network adapter seems to be working in the virtual machine. However, you cannot ping or access network resources from the virtual machine. Restarting the virtual machine does not resolve the issue.
  • You cannot ping or connect to a virtual machine from a remote computer.

These symptoms may occur on some or all virtual machines on the server that is running Hyper-V. Restarting the server immediately resolves network connectivity to all the virtual machines.

Cause

This is a known issue with Broadcom NetXtreme 1-gigabit network adapters that use the b57nd60a.sys driver when VMQ is enabled on the network adapter. (By default, VMQ is enabled.)

The latest versions of the driver are 16.2 and 16.4, depending on which OEM version that you are using or whether you are using the Broadcom driver version. Broadcom designates these driver versions as 57xx-based chipsets. They include 5714, 5715, 5717, 5718, 5719, 5720, 5721, 5722, 5723, and 5780.

These adapters are also sold under different model numbers by some server OEMs; HP sells them under model numbers NC1xx, NC3xx, and NC7xx.

Workaround

Broadcom is aware of this issue and will release a driver update to resolve the issue. In the meantime, you can work around the issue by disabling VMQ on each affected Broadcom network adapter by using the Set-NetAdapterVmq Windows PowerShell command. For example, if you have a dual-port network adapter, and if the ports are named NIC 1 and NIC 2 in Windows, you would disable VMQ on each adapter by using the following commands:

Set-NetAdapterVmq -Name "NIC 1" -Enabled $False
Set-NetAdapterVmq -Name "NIC 2" -Enabled $False

You can confirm that VMQ is disabled on the correct network adapters by using the Get-NetAdapterVmq Windows PowerShell command.

Note: By default, VMQ is disabled on the Hyper-V virtual switch for virtual machines that are using 1-gigabit network adapters. VMQ is enabled on a Hyper-V virtual switch only when the system is using 10-gigabit or faster network adapters. This means that by disabling VMQ on the Broadcom network adapter, you are not losing network performance or any other benefits because this is the default. However, you have to work around the driver issue.

Get-NetAdapterVmqQueue shows the virtual machine queues (VMQs) that are allocated on network adapters. You will not see any virtual machine queues that are allocated to 1-gigabit network adapters by default.
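If you have several affected ports, you can disable VMQ on every Broadcom NetXtreme adapter in one go rather than naming each NIC. A sketch – the InterfaceDescription match is an assumption (check what Get-NetAdapter reports on your hardware), and note the filter deliberately excludes NetXtreme II adapters, which are not affected:

```powershell
# List adapters and their current VMQ state first
Get-NetAdapterVmq | Format-Table Name, InterfaceDescription, Enabled

# Disable VMQ on NetXtreme (but not NetXtreme II) adapters
Get-NetAdapter |
    Where-Object { $_.InterfaceDescription -like "*NetXtreme*" -and
                   $_.InterfaceDescription -notlike "*NetXtreme II*" } |
    Set-NetAdapterVmq -Enabled $False

# Confirm the change
Get-NetAdapterVmq | Format-Table Name, Enabled
```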

Sigh. I hope Broadcom are quicker about releasing a fix than Emulex (customers are waiting 10 or 11 months now?).

2014
07.25

My site is hosted on Azure in the Dublin (Europe North) region. On Friday morning, I was checking something when I saw that my site was not loading correctly – it was either offline or VERY slow. So I checked the Azure status and saw it was offline. I restarted the application pool and the problem remained. I rebooted. MySQL took an age to load, but the site was still not loading … from home.

I have endpoint monitoring configured, and it showed that Amsterdam had an issue while Chicago did not. Strange, eh? I’ve worked in hosting and I know how localised these problems can be. So it was time to start digging.

I asked online and people in Denmark were OK. Folks in Belfast and Netherlands had connection problems. Later, Denmark went offline and Amsterdam came back!

image

 

From Home (Vodafone Ireland – very slow/no access) I ran a tracert:

image

From the lab at work (Magnet ISP – access OK) I had different results:

image

From a VM with an ISP (Blacknight – access OK) I had different results again:

image

It was very odd. Nothing was red on the Azure status site. I’m guessing there was a localized issue within Azure that affected just a subset of us, or there was an external routing issue that affected some ISPs.

It’s still like this as I post … in other words, the site is fine for some and offline for others.

EDIT (30/7/2014):

I came home today to find that my site was once again available via my ISP.

 

2014
07.25

image

Well done, Simon! You win this award because:

  • You asked someone else to do your searching even when the answer is easy to find.
  • Even when I responded with a LMGTFY link where the first 5 results gave you your answer, you still wanted me to do the clicking and reading for you.
  • And then you got uppity about it :)

Heck, 2 of the links were written by Microsoft, one by me, one on Hyper-V.nu and one by Thomas Maurer. We community contributors spend a lot of time writing this stuff. Please don’t expect us to read it to you too.

You can lead a horse to water but you cannot make him drink.

2014
07.24

Congratulations to my MVP colleagues, Alessandro Cardoso and Benedict Berger on the recent publication of their respective books.

System Center 2012 R2 Virtual Machine Manager Cookbook, 2nd Edition

Alessandro wrote this second edition book to focus on SCVMM 2012 R2, available from Amazon:

Overview

  • Create, deploy, and manage datacenters and private and hybrid clouds with hybrid hypervisors using VMM 2012 R2
  • Integrate and manage fabric (compute, storages, gateways, and networking), services and resources, and deploy clusters from bare metal servers
  • Explore VMM 2012 R2 features such as Windows 2012 R2 and SQL 2012 support, converged networks, network virtualization, live migration, Linux VMs, and resource throttling and availability

What you will learn from this book

  • Plan and design a VMM architecture for real-world deployment
  • Configure network virtualization, gateway integration, storage integration, resource throttling, and availability options
  • Integrate SC Operations Manager (SCOM) with VMM to monitor your infrastructure
  • Integrate SC APP Controller (SCAC) with VMM to manage private and public clouds (Azure)
  • Deploy clusters with VMM Bare Metal
  • Create and deploy virtual machines from templates
  • Deploy a highly available VMM Management server
  • Manage Hyper-V, VMware, and Citrix from VMM
  • Upgrade from previous VMM versions

Hyper-V Best Practices

image

Benedict wrote this book, available from Amazon:

This is a step-by-step guide to implement best practice configurations from real-world scenarios, enhanced with practical examples, screenshots, and step-by-step explanations. This book is intended for those who already have some basic experience with Hyper-V and now want to gain additional capabilities and knowledge of Hyper-V. If you have used Hyper-V in a lab environment before and now want to close the knowledge gap to transfer your Hyper-V environment to production, this is the book for you!

Congratulations to both authors!

Before anyone asks – no, I am not planning an update to the WS2012 Hyper-V book. It’s too much work for too little return in too small a window (Windows Server vNext Preview will be announced in October, 12 months after the RTM of WS2012 R2).
