2014
08.20

Speaking at TechEd has been one of my career ambitions for years – it is the pinnacle of speaking in the Microsoft world. I started off presenting at MSFT community events and had no such goal. But eventually I reached the point with my knowledge of Hyper-V where I felt I could contribute and wanted to speak on the bigger stage; certainly, presenting one of the sessions at the WS2012 launch in London (1,000 attendees in the room) fired me up even more. I submitted sessions to TechEd, but never got anywhere. I gave up on my goal last year.

Then things fell into place at TechEd North America. I wasn’t going to do Speaker Idol. But when I was asked, I had an idea and said to myself “frak it, do it! It’ll be fun to do”. And I ended up winning a speaking slot at TechEd in the USA next year. I also talked to some folks and they gave me advice about submitting sessions for TEE14. I submitted one session and …

Getting good news is always a nice way to finish the day. Early yesterday evening I received an email informing me that Microsoft had picked their sessions/speakers for TEE14. I followed the link to check the status of my submission and there it said:

Approval Status: Approved

Yes; I did my happy dance :D My guess is that we cannot talk about our sessions yet, but you can safely guess that I’ll be talking about Hyper-V.

Hopefully I’ll see some of you there when I present … at TechEd!

2014
08.19

I know there’s a risk in telling you to delay deploying updates for 1 month. Some think that means switching to manual approval – and that is self-defeating, because manual approval rarely happens in practice. No; I would rather see large enterprises use a model that automatically deploys updates after delaying them for 1 month, just as you can do with System Center 2012 (R2) Configuration Manager (SCCM).

I’m going to refer you to the excellent guides by SCCM MVP Niall C. Brady. SCCM uses WSUS to download the update catalogue. When I configure SCCM, I configure WSUS to automatically sync and to automatically supersede updates. That means that if Microsoft releases a replacement update, the old version is automatically replaced. That’s important, so keep it in mind when reading the rest of the solution.

I will configure automatic deployment rules (ADRs) for each product. The ADR will be set up as follows:

  • Software Available Time: Set this to something like 21 days. That means that SCCM will hold back any applicable update for 3 weeks. That gives Microsoft lots of time to fix an update and the replacement will supersede the dodgy update.
  • Installation Deadline: With this set to 7 days, we have 4 weeks before updates are pushed out … and that’s assuming that we haven’t applied maintenance windows to any collections (servers, VMs, call centre PCs, etc.) that might further delay the deployment.


With the above configuration, the dodgy August updates would not have been deployed to the PCs or servers on your network. Instead, a tested and fixed replacement would be released, SCCM would sit on it, and it would be approved automatically at a later date.

BTW, I do a similar thing with Endpoint Protection updates by delaying approval for 4 hours with immediate deployment.

I don’t know of a method for accomplishing this in Windows Intune – I’d like to see one. The same goes for WSUS, but a commenter suggested using the cmdlets from this site to write a script for WSUS; I’d rather see a clean solution from Microsoft, similar to what we have in ConfigMgr but less granular.
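For what it’s worth, here is a minimal sketch of what that kind of WSUS script could look like, using the WSUS cmdlets that ship with WS2012/WS2012 R2 and run as a daily scheduled task on the WSUS server. The target group name and the 28-day delay are my assumptions – test it in a lab before trusting it:

# Approve only updates that have sat in the catalogue for 4+ weeks and have not
# been superseded by a re-released fix in the meantime (a sketch, not production code).
$cutOff = (Get-Date).AddDays(-28)

Get-WsusUpdate -Classification All -Approval Unapproved -Status Needed |
    Where-Object { $_.Update.ArrivalDate -lt $cutOff -and -not $_.Update.IsSuperseded } |
    ForEach-Object {
        # "All Computers" is an assumed target group - swap in your own groups.
        Approve-WsusUpdate -Update $_ -Action Install -TargetGroupName "All Computers"
    }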

2014
08.19

Does “fail fast” = “fail predictably often”? Automated testing of software for cloud services needs to be investigated and questioned. First we had the clusterfrak August updates for Windows. Then a significant chunk of Azure went offline.


2014
08.19

How I laughed back in 2003 when I read that Munich was “dumping” Windows to migrate all servers, desktops, and productivity software to Linux and open source. At the time I was deploying an XP and Windows Server 2003 network in a German group, headquartered in Munich. I saw, up close, how dumb some local IT people could be (hello Marco of HVB and Hypo Real Estate IT! – another case of “I told you so” muppetry).

You see, the Munich city government decided to dump all Microsoft software. Everyone, other than penguin huggers, told them that they were nuts. If you value productivity and collaboration, you go with Microsoft. Even a college student, educated with an open mind instead of brainwashed by a “son of Linus”, can tell you that off-the-shelf software that you pay for is cheaper to buy and own than free software that you have to customise and maintain.

And that’s the lesson that Munich has learned in the last 10 years.

Firstly, it took from 2003 until 2013 for Munich to complete the migration. Sounds mad, right? The whole story is mired in secrecy, political rhetoric, and bullshi1t marketing. What we do know is that employees are complaining that they cannot get work done. They can’t figure out the Linux workstations. Their productivity software is inferior to Office. And what they produce is incompatible with their customers/suppliers/partners.

Oh well! I guess Munich can find some open source scheiße to use over the next 10 years to migrate back to Microsoft. Or maybe they can hire a giant consulting firm that will cost too much.

2014
08.18

The big news this morning is that Microsoft has had to withdraw 4 of last week’s automatic updates. But in other news:

2014
08.18

I’m sick of this BS.

Microsoft is investigating behavior in which systems may crash with a 0x50 Stop error message (bugcheck) after any of the following updates are installed:

2982791 MS14-045: Description of the security update for kernel-mode drivers: August 12, 2014
2970228 Update to support the new currency symbol for the Russian ruble in Windows
2975719 August 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2
2975331 August 2014 update rollup for Windows RT, Windows 8, and Windows Server 2012

This condition may be persistent and may prevent the system from starting correctly.

If you are affected by any of the above then the repair process (see Known Issue 3) is an ungodly nightmare.
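If you want a quick way to see whether a machine has any of those updates installed, a couple of lines of PowerShell will do it. The KB numbers come from the list above, and the wusa.exe line is only a hedged example for when you decide an update has to go:

# Detect the withdrawn August updates on this machine (detection only).
Get-HotFix -Id "KB2982791","KB2970228","KB2975719","KB2975331" -ErrorAction SilentlyContinue |
    Select-Object HotFixID, InstalledOn

# If one is installed and you want it removed, wusa.exe can do it, e.g.:
# wusa.exe /uninstall /kb:2982791 /quiet /norestart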

This is exactly why I tell people to delay deploying updates for 1 month. That’s easy using SCCM (an automatic deployment rule will do the delaying and superseding for you). WSUS – not so easy; that requires manual approval, which, sadly, we know almost never happens.

Feedback from MVPs, private and public, hasn’t worked. Negative press from the tech media hasn’t worked. What will, Microsoft? Nadella oversaw this clusterfrak of un-testing before he was promoted. Is sh1te quality the rule from now on across all of Microsoft? Should we tell our customers to remain un-patched, because catching malware is cheaper than being secure and up to date? Really? Does Microsoft need to be the defendant in a class action suit to wake up and smell the coffee? Microsoft has already lost the consumer war to Android. They’re doing their damnedest to lose the cloud and enterprise market to their competition with this bolloxology.

2014
08.15

Here’s the latest from the last 24 hours:

2014
08.14

There’s a new craze among famous people called the Ice Bucket Challenge. A person is dared to either take a bucket of ice water over the head (and post the video online) or donate to charity, in aid of “raising awareness” of a disease called ALS. Nadella and Zuckerberg have done it. Gates has been challenged.

2014
08.13

I’ve recently started doing lots of presentations on Azure thanks to the release of Azure via Open licensing. People wonder what the scenarios are where an SME would deploy machines in Azure and on premises. Here’s one I came up with this morning (an evolution of one I’d looked at before).

I was chatting with one of my colleagues about a scenario where a customer was looking at deploying ADFS to provide Office 365 authentication for a medium-sized multinational company. I wondered why they didn’t look at using Azure. Here’s what I came up with.

Note: I know SFA about ADFS. My searches make me believe that deploying a stretch ADFS cluster with a mirrored SQL backend is supported.


The company has two on-premises networks, one in Ireland and one in the USA. We’ll assume that there is some WAN connection between the two networks with a single AD domain. They have users in Ireland, the USA, and roaming. They want ADFS for single sign-on and they need it to be HA.

This is where companies normally think about deploying ADFS on-premises. Two issues here:

  • You need local infrastructure: Not so bad if you have spare license and hardware capacity on your hosts, but that’s not a given in an SME.
  • Your ISP becomes a risk: You will place ADFS on premises. Your office has a single Internet connection. A stray digger or ISP issue can put the entire business (not just that office) out of action because ADFS won’t be there for roaming/remote users to authenticate with O365.

So my original design was to stretch the network into Azure. Create a virtual network in an Azure region that is local to your Office 365 account (for example, an Irish O365 customer would deploy a virtual network in Azure Europe North). Create a site-to-site VPN to connect the on-premises network to the Azure VNet. Then deploy an additional DC, in the same domain as on-premises, in the Azure VNet. And now you can create an ADFS cluster in that site. All good … but what about the above multinational scenario? I want HA and DR.
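To make the networking piece a little more concrete, here is a rough sketch using the classic (Service Management) Azure PowerShell module that is current at the time of writing. The subscription name, VNet name, local network site, file path, and shared key are all placeholders I’ve made up for illustration, not a tested configuration:

# Sign in and pick the subscription (names are placeholders).
Add-AzureAccount
Select-AzureSubscription -SubscriptionName "Contoso Azure"

# Upload the network configuration XML that defines the VNet, its subnets, and the
# on-premises "local network site" with the public IP of the office VPN device.
Set-AzureVNetConfig -ConfigurationPath "C:\Azure\NetworkConfig.xml"

# Create the VPN gateway for the VNet, then set the shared key that the on-premises
# VPN device (or RRAS server) will also be configured with.
New-AzureVNetGateway -VNetName "VNet-EuropeNorth"
Set-AzureVNetGatewayKey -VNetName "VNet-EuropeNorth" -LocalNetworkSiteName "Dublin-Office" -SharedKey "REPLACE-WITH-A-REAL-KEY"

# Check that the site-to-site tunnel comes up.
Get-AzureVNetConnection -VNetName "VNet-EuropeNorth"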

Deploy an Azure VNet for the Ireland office (Azure Europe North) and one for the USA office (Azure USA East), and place virtual DCs in both. Connect both VNets using a VPN. And connect both on-premises networks to both VNets via site-to-site VPNs. Then create an ADFS stretch cluster (with a mirrored SQL backend) that resides in both VNets. Now the company’s users (local, roaming, and remote) can still authenticate against O365 using ADFS even if:

  • Either or both on-premises networks go offline
  • Either Azure region goes offline

As I said, I am not an ADFS person, so I’ll be interested in hearing what those who know ADFS think of this potential solution.

2014
08.13

Overnight, Microsoft released the August 2014 update rollup for WS2012 R2 and Windows 8.1. Lots of hotfixes!

2014
08.13

Microsoft released a hotfix that includes a microcode update for Intel processors to improve the reliability of Windows Server. It affects Windows Server 2012 R2, Windows Server 2012 and Windows Server 2008 R2 Service Pack 1 (SP1). The fix also solves a reliability problem for Hyper-V running on Ivy Bridge, Ivy Town, and Haswell processors.

A supported hotfix is available from Microsoft.

Note that the hotfix for Windows Server 2008 R2 SP1 will be available in September 2014.

This update reminds me of a similar update that was released soon after the RTM of W2008 R2 to deal with issues in the Nehalem CPU. Without that fix, there were random BSODs. I got tired of telling people, so-called expert consultants, to install the fix. Note this fix, test it if you want to deploy it immediately, or wait one month and then install it. But make sure you install it – set something in your calendar NOW to remind yourself.

2014
08.13

A new KB article by Microsoft covers a scenario where you get an “Access denied” error when the Hyper-V Replica Broker goes online in a Windows Server 2012 or Windows Server 2012 R2 cluster.

Symptoms

Consider the following scenario:

  • You have a Windows Server 2012 R2 or Windows Server 2012 failover cluster that is in a domain, and the domain has a disjoint namespace. 
  • You set the primary Domain Name Service (DNS) suffix of the Windows Server 2012 failover cluster to the disjoint domain name.
  • You create a Hyper-V Replica Broker in the failover cluster, and then you bring the Hyper-V Replica Broker online.

In this scenario, the issue occurs and an error message that resembles the following is logged in the cluster log:

Virtual Machine Replication Broker <Hyper-V Replica Broker BROKER>: ‘Hyper-V Replica Broker BROKER’ failed to register the service principal name: General access denied error.

The fix is included in the August 2014 update rollup.

2014
08.13

This KB informs us that Microsoft has added much-needed performance counters to Windows Server 2012 R2 for monitoring tiered Storage Spaces. You can find more details here. The new perfmon counters are listed below (a quick Get-Counter sketch follows the list):

  • Avg. Tier Bytes/Transfer
  • Tier Transfer Bytes/sec
  • Avg. Tier Queue Length
  • Avg. Tier sec/Transfer
  • Tier Transfers/sec
  • Current Tier Queue Length
  • Avg. Tier Bytes/Write
  • Tier Write Bytes/sec
  • Avg. Tier Write Queue Length
  • Avg. Tier sec/Write
  • Tier Writes/sec
  • Avg. Tier Bytes/Read
  • Tier Read Bytes/sec
  • Avg. Tier Read Queue Length
  • Avg. Tier sec/Read
  • Tier Reads/sec
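
If you want to poke at the new counters without opening Performance Monitor, Get-Counter will do the job. The counter-set name below is my assumption – list the sets first to confirm what it is called on your build:

# Find the tiering counter set and list its counter paths.
Get-Counter -ListSet "*Tier*" | Select-Object CounterSetName, Paths

# Then sample some counters for a minute (set name assumed to be "Storage Spaces Tier").
Get-Counter -Counter "\Storage Spaces Tier(*)\Tier Transfers/sec" -SampleInterval 5 -MaxSamples 12
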
2014
08.12

Welcome to the SMB 3.02 edition of this update. Jose Barreto has been very busy!

Nanu nanu!

2014
08.11

I think we can call today’s issue “What’s New in Azure”:

2014
08.08

The San Francisco 49ers (an NFL or American Football team) are based in Santa Clara, California. Nearby you will find Cupertino, the HQ location of Apple. Also nearby, you will find Mountain View, the HQ location of Google.


What tablet did I see the 49ers using on the sideline in a preseason game against the Ravens last night?


Let’s take a closer look:


Hmm, that’s not the Apple square button and it sure ain’t Android. The announcers went on to mention that the NFL has a sponsorship agreement with Microsoft Surface. Note the stylus? I reckon that’s a Surface Pro (not the 3, based on the shape). Apparently the league only allows sideline tech such as this for analysing still pictures (a full-field shot is taken just before and after a play starts, for later analysis by coaches and players).

Previously, a junior staff member printed out booklets of black-and-white photos and ran them to the coaches/players on the sideline. That took at least 30 seconds, and the booklets must have been a mess to use and keep organised. Now colour images (see above) are transmitted straight to the Windows tablets and presented in a tiled touch interface. You can see below that some coaches like the new system, and some do not:


It is interesting to see that a team such as the Niners, who have just built the most technology-centric stadium on the planet in the shadows of Apple and Google, are using Windows and the Surface.

2014
08.08

I read a comment today that Storage Spaces was great for small/medium deployments. And yup, it is. I use Storage Spaces to store my invaluable photo library at home (a pair of Toshiba USB 3.0 3 TB drives). At work, we use a single DataOn Storage DNS-1640 24-slot JBOD that is dual-SAS-attached to a pair of 2U servers to create an economical Hyper-V cluster. And we have sold 2U DataOn Storage CiB-9220 “Cluster in a Box” units for similar deployments in SMEs.

But most of our sales of JBODs have actually been for larger deployments. Let me give you an example of scalability using an image from my software-defined storage slide decks:


In the above diagram there are 4 x DataOn Storage DNS-1660 JBODs. Each has 60 x 3.5” disk slots. Using 6 TB drives (recently certified by DataOn), that gives you up to 1,440 TB, or just over 1.4 petabytes, of raw storage. That’s with 7,200 RPM drives, and that just won’t do. We can mix in some dual-channel SAS SSDs (using 3.5” to 2.5” adapters) to offer peak performance (read and write).

In the above design there are 4 SOFS cluster nodes, each having 2 x direct SAS connections to each JBOD – 4 JBODs, therefore 8 SAS connections in each server. Remember that each SAS cable carries 4 SAS channels, so a 6 Gb SAS cable actually offers 24 Gbps of throughput.
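If you want to sanity-check those numbers yourself, the back-of-the-envelope arithmetic is simple enough – the figures below are the assumptions from this example design, not vendor specifications:

# Raw capacity and SAS bandwidth for the example design above.
$jbods = 4; $slotsPerJbod = 60; $driveTB = 6
$rawTB = $jbods * $slotsPerJbod * $driveTB            # 1,440 TB, roughly 1.4 PB raw
$lanesPerCable = 4; $gbpsPerLane = 6
$gbpsPerCable = $lanesPerCable * $gbpsPerLane         # 24 Gbps per 6 Gb SAS cable
$cablesPerNode = $jbods * 2                           # 2 direct connections per JBOD
"{0} TB raw; {1} Gbps per cable; {2} SAS connections per SOFS node" -f $rawTB, $gbpsPerCable, $cablesPerNode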

Tip from DataOn: If you’re using more than 48 drives then opt for 12 Gb SAS cards, even if your JBOD runs at 6 Gb; the higher-spec card’s circuitry performs better, even with the lower-speed SAS disks/JBODs.

Now this is where you say that this is all great in theory but surely no one is doing this. And there you would be wrong. Very wrong. MVP Carsten Rachfahl has been deploying large installations since late 201 in Germany. The same is also true of MVPs Thomas Maurer and Michael Rüefli in Switzerland. At my job, we’ve been selling quite a few JBODs. In fact, most of those have been to replace more expensive SAN installations from legacy vendors. This week I took this photo of the JBODs in the above architecture while they were passing through our warehouse:

Yup, that’s potentially over 1 PB of raw storage in 16U of rack space sitting on one shipping pallet. The new owner of that equipment is building a SaaS solution that will run on Hyper-V and use SMB 3.0 storage. They’ll scale out bigger and cheaper than they would have done with their incumbent legacy storage vendor – and that’s why they’re planning on buying much more of this kind of storage.

2014
08.08

It looks like you will have to use the latest version of IE to be supported after January 2016. That’ll go down like the Hindenburg in businesses.

2014
08.07

Very little happening. These quiet times are great for rumours.

Oh – and don’t use Generation 2 virtual machines on WS2012 R2 Hyper-V.

2014
08.06

I’ve done photography in some of the most rural parts of the world, but I’ve never been without phone or Internet for 3 days before. *exaggeration alert* Being in a dark valley in Scotland over a long weekend was like having an arm removed. Anywho, here’s the news from the last few days. Note that there is an “August Update for …” Windows 8.1 and Windows Server 2012 R2 coming out next week, which the media will probably call “Update 2 for …”.

2014
08.01

Talk about crappy timing. A federal court in the USA has determined that emails are not actually emails but business records, and therefore Microsoft must turn over emails stored on servers in the Dublin region to the FBI. One must wonder why the FBI didn’t contact the Irish authorities, who would have jumped at once and issued an order locally if the case was legitimate. Maybe the case is not actually legitimate?

On the eve of Azure going big through Open licensing, a federal judge has stuck a stake through the heart of the American IT industry – this is much bigger than Microsoft, affecting Google, Apple, Oracle, IBM, HP, Dell, and more. Microsoft has already lodged an appeal.

2014
08.01

It is August 1st, and today is the very first day that you can buy credit for usage on Azure through Open Licensing. This includes Open, OV, and OVS, as well as educational and government schemes.

How Does It Work?

The process is:

  1. A customer asks to buy X amount of credit from a reseller – the next few steps are normal licensing operations that the customer does not see.
  2. The reseller orders it from a distributor.
  3. The distributor orders the credit from Microsoft.
  4. A notification email is sent to the customer, telling them to download an OSA (online services activation) key from their VLSC account (used to manage their Open volume licensing). The customer re-enters the process at this point.
  5. The customer/partner enters the OSA key in the Azure Account Portal.
  6. The customer/partner configures Azure administrator accounts and credit alerts.

Credit is purchased in blocks of $100. I believe that it is blocks of €75 in the Euro zone. So a customer can request $5000 in credit. They don’t get 50 OSA keys: they get one OSA key with a value of $5000.

Whose Account Should We Use?

If you are a customer and the MSFT partner wants to set you up under their Azure account, tell them to frak right off. The VMs will be THEIR property. The data will be THEIR property. We have seen this situation with Office 365. Customers have lost access to data for months while MSFT’s legal people try to determine who really owns the data. It is MESSY.

The MSFT partner should always set up the customer’s Azure deployment using a Microsoft Account that is owned by the customer. Additional administrators can be configured. Up to 5 alerts can be configured to be sent to the reseller and the customer.

Using Credit

“How much will doing X in Azure cost?” and “How much Azure credit do I need to buy?” will be the two most common questions we distributors will hear in the next 12 months. Ask me and I’ll respond with one of two answers:

  • If I’m in a good mood I’ll tell a consultant to go do some frakking consulting. How the frak am I meant to know what your customer’s needs are? And that’s if I’m in a good mood :)
  • If I’m in a bad mood I might award you with a LMGTFY award and make you famous :D

The answer is based on how credit is used. You buy credit, and everything you do in Azure “burns” that credit. It’s like having credit on a pay-as-you-go (aka “burner”) phone. If you do A then it costs X per minute. If you do B then it costs Y per month. Go look at the Azure pricing calculator.

Not all “Azure” services can be purchased via credit. Examples include Azure AD Premium and AD RMS, which are licensed via other means, i.e. as SaaS like Office 365. Their branding under the Azure banner confuses things.

Credit Time Limits

Your credit in Azure will last for 12 months. It will not roll over. There are no cash-backs. Use it or lose it.

My advice is that you start off by being conservative with your purchasing: determine your burn rate and purchase for X months, rather than for Y years.
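To show what I mean by burn rate, here’s a trivial back-of-the-envelope sketch – every rate in it is made up for illustration, so get real numbers from the Azure pricing calculator:

# Rough monthly burn-rate estimate (all rates are assumptions, not Azure price list figures).
$vmPerHour       = 0.09      # assumed hourly rate for a small VM
$vmCount         = 4
$hoursPerMonth   = 744       # 31-day month, running 24x7
$storagePerMonth = 20        # assumed flat figure for storage/egress
$monthlyBurn = ($vmPerHour * $vmCount * $hoursPerMonth) + $storagePerMonth
$creditBlocks = [math]::Ceiling($monthlyBurn / 100)
"Monthly burn ~ {0:N2}; roughly {1} x 100 credit blocks per month" -f $monthlyBurn, $creditBlocks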

Topping Up Credit

You should have configured the email alerts for when credit runs low. If credit runs out then your services shut down. I hope you reserved VIP and server IP addresses!

When you get an alert you have two options:

  • Normal procedure will be to purchase additional credit via the above reseller model. With alerts, the MSFT partner can initiate the conversation with their customer. Obviously this takes a little while – hours/days  (I have no idea because I’m outside of the logistics of licensing).
  • If the customer runs out of credit and the reseller process will take too long or it’s a weekend, the customer can use a credit card to top up their account in the Azure Account Portal. This should be an emergency operation, adding enough credit for the time it will take to top up via the reseller.

Note that the oldest credit is used first, to limit wastage because of the 12-month life of credit.

The Benefits of Open

For the customer, it means they can use Azure in a controlled manner. You don’t have to buy thousands of dollars of credit through a large enterprise EA licensing program. You don’t have unmanageable payments via a credit card. You buy up front, see how much it costs, and deploy/budget accordingly.

For the partner it opens up a new world of business opportunities. Resellers have a reason to care about Azure now, just like they did with Office 365 when it went to Open (and that business blew up overnight). They can offer the right solution for customers: private (virtualisation or cloud), hybrid cloud, or public cloud. And they can build a managed services business where they manage the customers’ Azure installations via the Azure Management Portal.

Distributors also win under this scheme by having another product to distribute and build services around.

And, of course, Microsoft wins because they have a larger market that they can sell to. MSFT only sells direct to the largest customers. They rely on partners to sell to the “breadth market”, and adding Azure to Open gives a reason for those partners to resell Azure on Microsoft’s behalf.

2014
07.31

Microsoft published a KB article to help you when the Hyper-V Best Practice Analyzer (BPA) does not exit or appears to hang/crash.

Symptoms

Hyper-V Best Practice Analyzer (BPA) does not exit under the following conditions:

  • A virtual machine already exists.
  • The virtual machine is connected to a VHD or VHDX as its hard disk drive. However, the VHD or VHDX file itself has been renamed or deleted, and no longer exists.

Cause

The PowerShell script seen here runs internally when the Hyper-V BPA is run:

C:\Windows\System32\BestPractices\v1.0\Models\Microsoft\Windows\Hyper-V\Hyper-V.ps1

However, due to a defect in the script, the information retrieval process goes into a loop, and the BPA does not exit until it times out.

Workaround

You need to delete the missing VHD or VHDX from the virtual machine’s settings and then rerun the BPA for Hyper-V, by following these steps (a PowerShell alternative is sketched after the list):

  1. Start Hyper-V Manager.
  2. Select the virtual machine that is connected to the missing VHD or VHDX, then right-click it and open Settings.
  3. In the virtual machine settings window, click the missing hard drive, and then click Delete.
  4. Click OK to close the virtual machine settings window.
  5. Rerun BPA for Hyper-V from Server Manager.
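
If you would rather not click through the GUI, a hedged alternative is to find any virtual hard drive entries whose file no longer exists and remove them with the Hyper-V PowerShell module. Review the output before removing anything – this is a sketch, not a supported fix:

# Find VM hard drive entries whose VHD/VHDX file is missing, and remove those entries.
Get-VM | Get-VMHardDiskDrive |
    Where-Object { $_.Path -and -not (Test-Path $_.Path) } |
    ForEach-Object {
        Write-Host "Removing missing disk '$($_.Path)' from VM '$($_.VMName)'"
        $_ | Remove-VMHardDiskDrive
    }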

The article claims to apply to Windows Server 2012 (WS2012).

2014
07.31

Very quiet 24 hours in the Microsoft world. The only bit of news I have for you is Microsoft’s newest (48-hours-old) statements regarding the US government trying to spy on emails located outside the USA.

2014
07.30

The big news here for MSFT techies is the release of update rollups for SysCtr 2012 SP1 and SysCtr 2012 R2. Please wait 1 month before deploying them to avoid the inevitable issues (history indicates that I am probably right), and use that time to carefully review the installation instructions.
