2014
07.03

When you replicate a virtual machine from site A to site B, the replica VM in site B is typically powered down. Note that I haven't specified a hypervisor or replication method, so this article applies to both Hyper-V and vSphere, and not just to Hyper-V Replica.

In the past, if you ran SQL Server in a VM in a production site, you could replicate that VM to a secondary site. If the replica VM was powered down, i.e. cold, then you were granted a free license for that cold VM. This has changed with the release of SQL Server 2014, as covered by this post. Now you must have Software Assurance (SA) to cover the cold VM’s license for SQL Server.

This brings SQL Server in line with Windows Server’s SA offsite cold replica benefit.

There are restrictions on failover in the secondary site:

  • You can perform a brief test failover (lasting 1 week) once every 90 days.
  • The production system in the primary site must be powered off to legally perform a failover.
  • You can power up the secondary site VM for a “brief time” during the disaster while the production system is running in the primary site.
2014
07.03

After a month of neglect, I have finally caught up with all of my feeds via various sources. Here are the latest bits of news, mixed up with other Microsoft happenings from the last month.

2014
07.02

It's been a long time since I posted one of these! I've just trawled my feeds for interesting articles and came up with the following. I'll be checking news and Twitter for more.

2014
07.02

Microsoft has released a KB article: Backing up virtual machines fails when using the CSV writer after installation of update 2919355 in Windows.

Symptoms

Assume that you install update 2919355 on a Windows 8.1-based or Windows Server 2012 R2-based computer. When you try to back up some Hyper-V virtual machines that reside on cluster shared volumes, you receive an error message that indicates the backup request has failed.
Here is a sample of the error messages that you may encounter when this issue occurs:

Error(s): vss_e_unexpected_provider_error
Csv writer is in failed state with unexpected error

Note: The error message that an end user sees is surfaced by the backup vendor's product, and therefore it will vary by vendor.

A hotfix is available to resolve this issue.

2014
07.02

It has come to the attention of myself and several other Hyper-V MVPs that people are having a nightmare searching for the download ISO for Hyper-V Server 2012 R2. I've verified the problem on both Bing and Google, and Microsoft are aware of the issue.

In the meantime, here is the download page for Hyper-V Server 2012 R2.

2014
07.02

I will be one of the presenters in a webcast hosted by the Petri IT Knowledgebase and sponsored by Veeam on July 16th at 13:00 EDT (18:00 UK/Irish time). In this presentation I'll be explaining the technologies that enable Windows Server 2012 R2 (WS2012 R2) software-defined storage and Hyper-V over SMB 3.0. Chris Henley from Veeam will also discuss their backup and disaster recovery technology, and then there will be a Q&A session. There will be a moderator, so you can fire in your questions for us to answer.


2014
07.02

My 7th Microsoft MVP Award

Yesterday (July 1st) was the day on which my Microsoft Most Valuable Professional (MVP) award would either expire or be renewed. Thankfully, my status as a Hyper-V MVP was renewed by Microsoft, as confirmed by the below (edited by me) email that arrived yesterday afternoon:

[Image: the MVP renewal email (edited by me)]

A lot of work goes into my efforts, whether here on my blog, writing for the Petri IT Knowledgebase, answering questions on forums, or presenting. This is a nice recognition for those efforts, and quite honestly, it is a career changer thanks to the access to information that we MVPs get … and should share with the community.

My efforts are only made possible thanks to the support of friends and family, the flexibility of my employers at MicroWarehouse, those in Microsoft who value the MVP program, and other community members who give me opportunities in webcasts, podcasts, speaking at events, and so on. Thank you all!

Here’s looking forward to a very interesting and eventful FY2015 (Microsoft financial year runs July to June).

2014
07.01

Storage is the bedrock of all virtualisation. If you get the storage wrong, then you haven't a hope. Unfortunately, I have seen too many installs where the customer/consultant has focused only on capacity and the performance has been dismal; so bad, in fact, that IT are scared to perform otherwise normal operations because the storage system cannot handle the load without impacting production systems.

Introducing AutoCache

I was approached at TechEd NA 2014 by some folks from a company called Proximal Data. Their product, AutoCache, which works with both Hyper-V and vSphere, is designed to improve the read performance of storage systems.


A read cache is created on the hosts. This cache might be an SSD that is plugged into each host. Data is read from the storage, and data deemed hot is cached on the SSD. The next time that data is required, it is read from the SSD, offering some serious speed potential. Cooler data is read from the storage, and writes go directly to the storage.

Installation and management is easy. There’s a tiny agent for each host. In the Hyper-V world, you license AutoCache, configure the cache volume, and monitor performance using System Center Virtual Machine Manager (SCVMM). And that’s it. AutoCache does the rest for you.

So how does it perform?

The Test Lab

I used the test lab at work to see how AutoCache performed. My plan was simple: I created a single Generation 1 virtual machine with a 10 GB fixed VHDX D: drive on the SCSI controller. I installed SQLIO in the virtual machine. I created a simple script to run SQLIO 10 times, one after the other. Each job would perform 120 seconds of random 4K reads. That's 20 minutes of thumping the storage system per benchmark test.
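For anyone wanting to reproduce something similar, here is a minimal sketch of that kind of loop; the SQLIO install path, thread count, queue depth, and test file name are illustrative values rather than my exact settings, so tune them for your own lab:

# Run 10 consecutive SQLIO jobs, each performing 120 seconds of random 4K reads.
# Assumptions: SQLIO's default install path, 4 threads, 16 outstanding IOs, and
# a test file that already exists on the VM's D: drive (the fixed VHDX).
$sqlio = "C:\Program Files (x86)\SQLIO\sqlio.exe"
for ($i = 1; $i -le 10; $i++)
{
    Write-Host "Starting SQLIO run $i of 10"
    & $sqlio -kR -s120 -frandom -b4 -t4 -o16 -LS -BN "D:\testfile.dat" | Out-File -Append "C:\SQLIO-results.txt"
}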

I have two hosts: Dell R420s, each connected to the storage system via dual iWARP (10 GbE RDMA) SFP+ NICs. Each host is running a fully patched WS2012 R2 Hyper-V installation. The hosts are clustered.
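As an aside, if you want to confirm that SMB Direct is actually in use between hosts like these and the SOFS, a couple of quick checks on a host will show it; this is just a suggested sanity check rather than part of the original test:

# Confirm that the iWARP NICs report RDMA as enabled on the Hyper-V host.
Get-NetAdapterRdma | Where-Object { $_.Enabled } | Format-Table Name, Enabled

# Confirm that the SMB Multichannel connections to the SOFS are RDMA capable.
Get-SmbMultichannelConnection | Format-Table ServerName, ClientRdmaCapable, ServerRdmaCapable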

One host, Demo-Host1, had AutoCache installed. I also installed a Toshiba Q Series Pro SATA SSD (554 MB/s read and 512 MB/s write) into this host. I licensed AutoCache in SCVMM and configured a cache drive on the SSD. Note that for each test involving this host, I deleted and recreated the cache to start with a blank slate.

The storage was a Scale-Out File Server (SOFS). Two HP DL360 G7 servers are the nodes, each allowing hosts to connect via dual iWARP NICs. The HP servers are connected to a single DataOn DNS-1640 JBOD. The JBOD contains:

  • 8 x Seagate Savvio® 10K.5 600GB HDDs
  • 4 x SanDisk SDLKAE6M200G5CA1 200 GB SSDs
  • 2 x STEC S842E400M2 SSDs

There is a single storage pool. A 3-column, tiered, 2-way mirrored virtual disk (50 SSD + 550 HDD) was used in the test. To get clean results, I pinned the virtual machine files either to the SSD tier or to the HDD tier; this allowed me to see the clear impact of AutoCache using a local SSD drive as a read cache.
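If you want to build and pin something similar yourself, the Storage Spaces cmdlets below show roughly how it can be done; the pool, tier, and file names, the drive letter, and the tier sizes are examples I have made up for illustration rather than my actual lab values:

# Create SSD and HDD tiers in an existing pool (all names and sizes are examples).
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSD_Tier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDD_Tier" -MediaType HDD

# A 3-column, 2-way mirrored virtual disk built from both tiers.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk1" -StorageTiers $ssdTier, $hddTier -StorageTierSizes 50GB, 550GB -ResiliencySettingName Mirror -NumberOfColumns 3

# Pin a VM's VHDX to the SSD tier and run the tiering optimization job.
Set-FileStorageTier -FilePath "E:\VMs\TestVM\TestVM.vhdx" -DesiredStorageTier $ssdTier
Optimize-Volume -DriveLetter E -TierOptimize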

Tests were run on Demo-Host1, with AutoCache and a cache SSD, and then the virtual machine was live migrated to Demo-Host2, which does not have AutoCache or a cache SSD.

To be clear: I do not have a production workload. I create VMs for labs and tests, that’s it. Yes, the test is unrealistic. I am using a relatively large cache compared to my production storage and storage requirements. But it’s what I have and the results do show what the product can offer. In the end, you should test for your storage system, servers, network, workloads, and work habits.

The Results – Using HDD Storage

My first series of tests on Demo-Host1 and Demo-Host2 was set up with the virtual machine pinned to the HDD tier. This would show the total impact of AutoCache using a single SSD as a cache drive on the host. First I ran the test on Demo-Host2 without AutoCache, and then I ran the test on Demo-Host1 with AutoCache. The results are displayed below:

[Charts: random 4K read IOPS per test run – Demo-Host2 without AutoCache and Demo-Host1 with AutoCache]

We can see that the non-enhanced host offered an average of 4,143 random 4K reads per second, and that varied very little. However, once the virtual machine was on a host with AutoCache, running the tests quickly populated the cache partition and led to increases in read IOPS, eventually averaging around 52,522 IOPS.

IOPS is interesting but I think the DBAs will like to see what happened to read latency:

[Charts: read latency per test run – without AutoCache and with AutoCache]

Read latency averaged 14.4 milliseconds without AutoCache. Adding AutoCache to the game reduced latency almost immediately, eventually settling at a figure so small that SQLIO reported it as zero milliseconds!

So, what does this mean? AutoCache did an incredible job, boosting throughput to around 12 times its original level using a single consumer-grade SSD as the local cache in my test. I think those writing time-sensitive SQL queries will love that latency will be near 0 for hot data.

The Results – Using SSD Storage

I thought it might be interesting to see how AutoCache would perform if I pinned the virtual machine to the SSD tier. Here's why: my SSD tier consists of 6 SSDs (3 columns), and 6 SSDs are faster than 1! The raw data is presented below:

[Charts: random 4K read IOPS per test run with the VM pinned to the SSD tier – without AutoCache and with AutoCache]

Now things get interesting. The SSD tier of my storage system offered up an average of 62,482 random 4K read operations per second without AutoCache. This contrasts with the AutoCache-enabled results, where we got an average of 52,532 IOPS once the cache was populated. What happened? I already alluded to the cause: the SSD tier of my virtual disk offered up more IOPS potential than the single local SSD that AutoCache was using as a cache partition.

So it seems to me that if you have a suitably sized SSD tier in your Storage Spaces, then it will offer superior read performance to AutoCache, and the SSD tier will also give you write performance via the write-back cache.

HOWEVER, I know that:

  • Not everyone is buying SSD for Storage Spaces
  • Not everyone is buying enough SSDs for their working set of data

So there is a market for using a smaller number of SSDs in the hosts as read cache partitions via AutoCache.

What About Other Kinds Of Storage?

From what I can see, AutoCache doesn't care what kind of storage you use for Hyper-V or vSphere. It operates in the host and works by splitting the IO stream. I decided to run some tests using a WS2012 R2 iSCSI target presented directly to my hosts as a CSV, and I moved the VM onto that iSCSI target. Once again, I saw almost immediate boosts in performance. The difference was not as pronounced (around 4x), because of the different nature of the physical storage that the iSCSI target VM was on (20 HDDs offering more IOPS than 8), but it was still impressive.
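For completeness, here is a rough sketch of how a WS2012 R2 iSCSI target like that can be stood up with the iSCSI Target Server cmdlets; the target name, path, size, and initiator IQNs are all example values rather than my actual lab settings:

# On the WS2012 R2 server that will act as the iSCSI target (example values throughout).
Install-WindowsFeature FS-iSCSITarget-Server

# Create a target and restrict it to the two Hyper-V hosts' initiator IQNs.
New-IscsiServerTarget -TargetName "HyperVLab" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:demo-host1", "IQN:iqn.1991-05.com.microsoft:demo-host2"

# Create a VHDX-backed LUN and map it to the target; the hosts then connect to it and add it as a CSV.
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\CSV2.vhdx" -SizeBytes 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVLab" -Path "D:\iSCSIVirtualDisks\CSV2.vhdx"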

Would I Recommend AutoCache?

Right now, I’m saying you should really consider evaluating AutoCache on your systems to see what it can offer.

2014
06.27

This is my presentation from TechCamp 2014 where I showed attendees how to build the Hyper-V on SMB 3.0 storage known as a Scale-Out File Server (SOFS) based on JBODs/Storage Spaces, Windows Server 2012 R2 (WS2012 R2) Failover Clustering, and SMB 3.0 networking.

2014
06.27

This presentation was an introduction for IT pros to deploying hybrid cloud solutions based on Microsoft Azure, in conjunction with on-premises Hyper-V / System Center deployments. Here’s the deck that I presented … and yes … there are LOTS of slides because there is constantly new stuff in Azure.

 

2014
06.16

You might have heard of "The Hyper-V Amigos" podcast – it has a history that runs back quite a while among a number of us European Hyper-V MVPs. Carsten (Rachfahl) and Didier (Van Hoye) asked myself and Hans Vredevoort to join them in their latest show to talk about TechEd North America 2014.

2014
06.05

Assuming that the bronchitis and tonsillitis that I was diagnosed with at 1:15 am this morning clears up, I will be attending the TechEd Europe roundtable meeting in Barcelona on Monday/Tuesday. The Microsoft folks in attendance are some of the planners of this massive event. My role: give feedback and discuss any ideas at the table.

Here’s your opportunity:

Do you have any feedback or ideas that you’d like me to bring to the table for TechEd Europe 2014? If so, post a comment below.

EDIT: Please keep the comments relevant to the TechEd event itself.

2014
06.04

A few bits and pieces from the last 24-48 hours:

2014
06.03

My work laptop for the last 3 years has been a modified HP EliteBook 8740w. Its usefulness shrank pretty quickly as System Center grew bigger and my Hyper-V demos started to require more and more machines, 10 Gbps networking, and JBODs. A lab has been built and I routinely access it remotely – and I've been known to record some demos using Camtasia when Internet access is dodgy.


An opportunity arose to replace my work laptop – I could move from "the beast" to an Ultrabook. This would kill a few birds with one stone:

  • Use a brand of machine in work presentations that my employers actually distribute (Toshiba)
  • Use a lighter machine
  • Donate "the beast" to the lab where it can be reused as a host, maybe as an NVGRE gateway host.

We ordered in some Toshiba KIRAbooks, Toshiba's premium consumer ultrabook. This is a mad laptop; i7-4550U, 8 GB RAM, 256 GB SSD, and … a screen running at 2560 x 1440. It's unusable without Windows 8.1 screen scaling.


First impressions: Very nice (touch) display. Nice functional build. It looks nice on the desk. Good keyboard. Nice big mouse pad. Slim. Obviously lighter than “the beast”. It has 3 x USB, 1 x SD, and 1 x full sized HDMI. Battery is listed at 9.16 hours (probably by using the custom ECO power profile). It came with Windows 8.1 Pro with the April 2014 update. There is no stylus. And yes, I had to uninstall some crapware from MuckAfee, Spotify, and others. I will have to get USB/VGA and RJ45 dongles (I already use those for my personal Lenovo Yoga).

Price-wise, this seems to come in at $1,699.99 on Amazon.com. It has just started shipping in Europe, and I didn't see it on Amazon UK or Germany. AFAIK, Toshiba are selling to consumers via exclusive retailers.

I’ll write up a bit more when I have had time to work with it.

2014
06.03

It was a bank (national) holiday weekend here in Ireland. I was also attending and speaking at E2EVC in Brussels, where I talked about designing and building Hyper-V over SMB 3.0 storage – that seemed to go down well with a full room.

Here's a breakdown of the news from a slow weekend:

2014
05.30

Greetings from Belgium where I will be presenting a Hyper-V over SMB 3.0 session (designing & implementing a SOFS) at E2EVC, a community virtualization conference. Here is the Microsoft news of the last 24 hours. It appears that the momentum to signing up to support and partner with Azure is growing.

 

2014
05.29

I was doing some work with some SSDs yesterday that had previously had some firmware issues. I wanted to verify that everything was OK, so I popped the disks into the DataOn 1640 JBOD that is in the lab at work. The firmware was upgraded, and the disks were eligible to join a storage pool, but they were not reporting a physical location.

A Storage Spaces certified JBOD (there is a special HCL category) must be able to report disk locations using SCSI Enclosure Services (SES). You can see my problem below; 4 SSDs are not reporting their enclosure or slot locations, but the other disks in the JBOD are just fine.

[Screenshot: Server Manager showing four SSDs with no enclosure or slot location reported]
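You can also check this from PowerShell rather than Server Manager; a quick query along these lines will show which disks are missing their location data:

# List each physical disk with the enclosure and slot it reports via SES.
Get-PhysicalDisk | Sort-Object FriendlyName | Format-Table FriendlyName, SerialNumber, EnclosureNumber, SlotNumber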

I contacted the folks in DataOn and had a near instant response. Run the following cmdlet twice:

Update-StorageProviderCache -DiscoveryLevel Full

I did that, refreshed Server Manager and … no change.

Ah … but this isn't a simple Storage Spaces build. This is a clustered Storage Spaces installation. I jumped over to the other node in my SOFS, the "read-write server", and ran the cmdlet (twice) there. One refresh later and everything was as it should be.

[Screenshot: Server Manager showing all disks reporting their enclosure and slot locations]

Now all of the disks are reporting both their enclosure and their slots.
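If you have more than a couple of SOFS nodes, you could also run the refresh on all of them in one go; here is a rough sketch, assuming an example cluster name of Demo-SOFS and that PowerShell remoting is enabled on the nodes:

# Run the full discovery refresh twice on every node of the SOFS cluster.
$nodes = Get-ClusterNode -Cluster "Demo-SOFS" | Select-Object -ExpandProperty Name
Invoke-Command -ComputerName $nodes -ScriptBlock {
    1..2 | ForEach-Object { Update-StorageProviderCache -DiscoveryLevel Full }
}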

Thanks to Rocky in DataOn for his help!

2014
05.29

Not much going on in the last 24 hours:

2014
05.28

I do a lot of messing around with Storage Spaces. This can involve reusing disks that have been used in other pools – I want to erase the disks but I encounter an error:

Error deleting virtual disk: The storage pool could not complete the operation because its configuration is read-only.

This is easy to fix … with PowerShell. Get the name of the Storage Pool, also known as the friendly name – for example Pool1. Then run:

Get-StoragePool -FriendlyName "Pool1" | Set-StoragePool -IsReadOnly $false

Then if you are sure, you can delete the storage pool, thus cleaning the disks for reuse:

Get-StoragePool -FriendlyName "Pool1" | Remove-StoragePool
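If you want to confirm that the disks are clean and available for reuse, a quick check like this (just a suggestion) lists the disks that are free to join a new pool:

# Disks that are no longer part of any pool should report CanPool = True.
Get-PhysicalDisk | Where-Object { $_.CanPool } | Format-Table FriendlyName, SerialNumber, Size, CanPool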

2014
05.28

It’s been a slow few days for news. Here’s what popped up overnight.

2014
05.27

A monumental change is happening in IT right now. You can fight it all you want, but cloud is a disrupting force that will affect our entire environment. IT pros are scared of "the cloud" … but is their fear justified?

This is why a bunch of us are presenting on the IT pro aspects of the Microsoft Cloud OS on June 19th and 20th. It's a 2-day event in Citywest, Dublin, where you can register for the Hybrid Cloud stuff (infrastructure as a service or IaaS) on June 19th, the Office 365/etc. stuff (software as a service or SaaS) on June 20th, or even register for both days.

The content on June 19th will span on-premises IT, building private clouds, automation, and mixing your on-premises infrastructure with Microsoft Azure. On June 20th we move on to SaaS, where there will be lots of Office 365, Windows Intune, and Power BI. All presenters have been instructed to present demo-heavy "here's how to …" technical sessions.

Now is the time to learn and evolve. Don't be a dinosaur; get on board with the cloud now and be the person who is still employable in 5 years' time. You can choose to cover your ears and close your eyes, but you'll be dug up from an IT tar pit in a few million years' time.

[Image: IT pros that ignored the cloud as it made them extinct]

This event WILL NOT BE REPEATED. This is a once-off collection of subject expert speakers. No roadshow, no Microsoft Ireland event, and no partner event will repeat what we’re doing at TechCamp.

And consultants … this message goes double for you.

2014
05.23

Microsoft has just announced the coming of a new edition of Windows 8.1 for low-cost devices called Windows 8.1 with Bing. The goal of this SKU is to reduce the overall cost of low-priced machines, making Windows PCs more accessible.

I talked with one of my colleagues who manages our consumer device market, spanning Windows, Chromebooks, Android, etc. He told me that Chromebooks have 20 to 25 percent of the U.S. market for laptops that cost less than $300 – note that that story was from 2013, so the Google gains might be larger now.

This is an important market – it’s where the education market resides. Ever hear the crude phrase: “get ‘em young and rear them as pets”? That’s what Google is trying to do … get kids into the ecosystem and keep them for life.

Microsoft has no choice but to react; they’re used to owning 90%+ of the PC market so losing an important demographic such as this is not good. Losing a large market to the Google ecosystem at such a young age makes it more difficult to win them back.

Many OEMs take payment to change the browser and search engine to something other than the default Microsoft services. Windows 8.1 with Bing will ship on devices with IE set as the default browser and Bing as the default search engine. In return, we believe that OEMs will get lower cost copies of Windows, and this will allow Windows laptops to compete against Google’s machines … and hopefully (for Microsoft) bring those young users into the Microsoft world of Bing, Outlook, and more.

2014
05.23

Folks of the Bay Area and surrounding counties – if you want to learn about Microsoft commercial technology such as Azure, Lync, Hybrid Cloud, ADFS, OS deployment, and more, then you need to check out TechDays. If I lived in 49er country then I would register.

The speaker list is a who's who of the west coast Microsoft community. The location is easy to find – it's the MSFT office near the terminus of the Powell cable car. There are loads of public transport routes in/out – I know this, and I've only visited the Bay three times from Ireland.

So check out the agenda, register, attend, and learn something to advance your career.

2014
05.23

1,000,000 IOPS from Hyper-V VMs using a SOFS? Talk about nerd-vana!!! Here are the links I found interesting over the last 48 hours:

2014
05.21

I did not expect this announcement until WPC, but it came out today. Microsoft announced, via a video, that Microsoft Azure will be available for resellers to sell, and customers to buy, through Open licensing on August 1st 2014. Yes, Azure is coming to the channel. Previously, Azure was only available direct (credit card) or via Enterprise Agreements.

Phil Sorgen took to the webcam to record this message. A blog post was also written by Josh Waldo, Senior Director, Cloud Partner Strategy. There is also a FAQ for Azure in Open licensing. There will be a "ramp up" online event on Microsoft Azure in Open Licensing on June 4th. Register here.


Sorgen starts off by saying that Microsoft believes in joint success with partners, and in making business with Microsoft easier for partners. These two pillars are central to an exciting new opportunity for partners.

He announces it: Azure will be available through the distribution channel via Open licensing for partners to resell to their customers.

Azure allows partners to serve more customers without increasing their footprint. Successful cloud partners have learned how to expand their services beyond basic deployments. Think business IT-enabled consulting. Partners have increased revenues, but they had to evolve their business models.

Personally, I know of one services business that automates to an incredible level, and cloud services fit their model perfectly. Before the recession they shifted tin like everyone else; they evolved, and now they are flourishing and taking business from legacy service providers.

"Moving to cloud is a process, not an event": true for partners and customers. Azure can become even more compelling. Note that Azure contains many hybrid cloud services, enabling "on ramps" to services that extend the functionality of on-premises IT, making it easier for businesses to explore and adopt Microsoft's public and hybrid cloud offerings.

Azure in Open will be flexible, provide compliance manageability, and provide value for customers. The consumption-based billing provides a low barrier to entry, making it easier for SMEs to deploy services without huge CapEx costs. "Consumption aligned billing" is one of the buzz phrases. Focus on services instead of tin.

There is a new licensing model with Azure in Open.

Moving over to the blog post:

The cloud is growing 5 times faster than traditional IT. Microsoft alone is thought to purchase 17% of all servers on the planet in a year. “Additionally, partners that are building strong cloud businesses have 1.6X of recurring revenue as a portion of total revenue versus other partners”.

How does this licensing model work?

When you resell Azure in Open Licensing, you purchase tokens from your preferred Distributor and apply the credit to the customer’s Azure Portal in increments of $100. The credits can be used for any consumption-based service available in Azure. To add more credit, you simply purchase new tokens and add them to the account. This gives you the opportunity to manage your customer’s portal, setup services, and monitor consumption, all while maintaining a direct relationship.

In other words, you will buy Azure credit in the form of $100 tokens (I guess there will be localized versions). You can then use that credit in any way on Azure. It will be up to you (the end customer) to have enough credit to do what you need to do or to keep your services online. The advantage here is that you’re controlling costs (unlike post-usage credit card) and you don’t need to pre-purchase a huge credit (like with EA) before you know what your services will cost. I suspect that if partners want to, they can operate a service to help customers manage their credit.

A token comes in the form of an Online Services Activation (OSA) key. If you want $1,000 in credit, you buy 10 SKUs of $100 and get 1 OSA key for the sum credit. The value has a 12-month life, starting from when the customer redeems the OSA key online – this credit will not roll over, so don't over-purchase for a year. A customer can top up at any time, and if they cannot reach a reseller (for example, at the weekend), the customer can top up using a credit card. The program will be available through:

  • Open commercial
  • Open Academic
  • Open Government

Partners can request co-administrator accounts on their customers’ accounts to help them manage their service. Alerts can be configured for when credit runs low and needs to be topped up.


IMO, this is great news for partners. They can now choose to resell Azure if they want, and keep the billing/customer relationship – something that caused fear in the past ("cloud vendor X is trying to steal my customers"). Some might not want the billing overhead and might go with another option.

Also, this announcement reinforces Microsoft's unique selling point in the cloud wars. They are the only company with a private/public hybrid cloud model that spans customer-owned on-premises deployments, hosting partners, and Azure. Microsoft is also the only cloud vendor with a partner-enabling model.

By the way, partners & customers in Ireland, if you want your techies to learn about Hybrid Cloud then you might want to send them to TechCamp 2014 in June.
