2014
11.24

Do you have a new laptop that refuses to boot from USB? Are you failing to get Windows to install from a removable device? Don’t have an RJ45 port for PXE installs?

If so, I think I have a hack for you. This is what I used on my Toshiba KIRAbook when wiping the Windows 10 Technical Preview to reinstall Windows 8.1 – it took a lot of Googling and experimentation to get the thing to boot from USB. My fix is not perfect because you sacrifice Secure Boot, but it works. And no, this page from Microsoft, which is copied endlessly on the Internet, is Bull$h1t.

The cause of the issue is UEFI, the successor to BIOS. You are going to have to configure 3 things:

1) Disable Secure Boot

Reboot your laptop into the UEFI setup (probably one of the function keys – this page is pretty good).

2) Enable CSM Boot/Disable UEFI Boot

On my Toshiba KIRAbook, I found this under Advanced > System Configuration. The setting name changes depending on whether it is enabled or not.

Note that this setting might be greyed out if you haven’t disabled Secure Boot yet.

3) Prepare a Boot Stick

I used a free tool called Rufus to prepare a USB stick from the Windows 8.1 with Update ISO file.
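
If you would rather prepare the stick by hand, the rough equivalent of what Rufus does can be scripted – a minimal sketch, assuming the USB stick is disk 1 (verify this first; Clear-Disk wipes whatever disk you point it at) and the Windows 8.1 ISO is mounted as D:

#WARNING: wipes disk 1 - confirm the disk number with Get-Disk first
Clear-Disk -Number 1 -RemoveData -Confirm:$false
Initialize-Disk -Number 1 -PartitionStyle MBR
#Create one active (bootable) partition and format it
New-Partition -DiskNumber 1 -UseMaximumSize -IsActive -DriveLetter U
Format-Volume -DriveLetter U -FileSystem NTFS
#Write a BIOS/CSM-compatible boot sector, then copy the ISO contents across
D:\boot\bootsect.exe /nt60 U:
Copy-Item D:\* U:\ -Recurse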

You can now install Windows on your laptop. You’ve lost Secure Boot and UEFI Boot (Windows 8.1 will not start when they are enabled), but you are able to install Windows. I’ll update this post if anyone comes up with something better.

Note: I hate this bolloxology. This stuff should be much easier.

2014
11.24

It’s been a slow few news days in the Microsoft world. Stuff I’m not linking to: the infinitely linked webcasts on mobility management and the Regin malware infecting computers in Ireland, Russia, and Saudi Arabia.

Windows Server

Windows Client

Azure

Office 365

Miscellaneous

2014
11.20

There are a lot of upset people because of (1) the Azure outage and (2) how Microsoft communicated during the outage. We had a couple of affected customers. The only advice I can give to Microsoft is:

  1. Don’t deploy your updates to everything at the same time.
  2. Now you know how customers feel when bad updates are issued. Bring back complete testing.
  3. Communicate clearly during an issue – that includes sending emails to affected customers. You’ve got monitoring systems & automation – use them. Heck, you even blogged about how (Azure) Automation could be used by customers to trigger actions.

Hyper-V

Azure

Miscellaneous

2014
11.19

Microsoft released the November 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 yesterday. This rollup includes lots of fixes, including improved performance of a SOFS cluster during parallelized restores. As usual, I recommend waiting 4 weeks to let others be Microsoft’s testing canaries.

Correction: There are no known problems with the above update.

However, an update rollup released at the same time for Windows Server 2012 DOES in fact have a problem. Microsoft Hyper-V PM, Taylor Brown, tweeted that applying KB2996928 fixes the issue.

2014
11.19

Pay attention to the security update for Windows that was released out of band last night. It’s an important one that prevents attackers from forging Kerberos tickets.
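
If you want to verify that a machine has it, Get-HotFix is a quick check – a minimal sketch, assuming KB3011780 (the out-of-band Kerberos update) is the right KB number for your OS:

#Returns the hotfix entry if the Kerberos update is installed
Get-HotFix -Id KB3011780 -ErrorAction SilentlyContinue |
    Format-Table HotFixID, InstalledOn, InstalledBy -AutoSize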

Hyper-V

Windows Server

Azure

Security

Office 365

2014
11.17

I’ve had a crazy few weeks with TechEd Europe 2014, followed by the MVP Summit, followed by a week of events and catchup at work. Today, I’ve finally gotten to go through my news feeds. There is a LOT of Azure stuff from TEE14.

Hyper-V

Windows Server

System Center

Windows Client

  • Windows 10 – Making Deployment Easier: Using an in-place upgrade instead of the traditional wipe-and-load approach that organizations have historically used to deploy new Windows versions. This upgrade process is designed to preserve the apps, data, and configuration from the existing Windows installation, taking care to put things back the way they need to be after Windows 10 has been installed on the system. Traditional deployment tools are still supported.
  • Windows 10 – Manageability Choices: Ensuring that Windows works better when using Active Directory and Azure Active Directory together. When connecting the two, users can automatically be signed in to cloud-based services like Office 365, Microsoft Intune, and the Windows Store, even when logging in to their machines using Active Directory accounts. For users, this will mean no longer needing to remember additional user IDs or passwords.

Azure

[Image: ASR SAN replication topology]

Office 365

Intune

Operational Insights

Licensing

2014
11.14

My sixth TechEd Europe 2014 demo was a fun one: Extended Port ACLs – the ability to apply network security rules at the virtual switch port, rules that cannot be overruled by the guest OS admin.

There is a demo VM that is running IIS with a default site. The Windows Firewall is turned off in the guest OS. The script will:

  1. Clean up the demo lab
  2. Open a window with a continuous ping to the VM, showing the open network status
  3. Start IE and browse to the VM’s site
  4. Kill IE and apply an extended port ACL to block everything
  5. Re-open IE (with flushed cache) and fail to load the site, while ping packets drop in the continuous ping
  6. Kill IE and create another extended port ACL to allow inbound TCP 80
  7. Reopen IE to show the site is accessible, while pings continue to fail

There’s plenty of process management and IE control in this script.

cls
#Clean up the demo to start up with
Get-VMNetworkAdapterExtendedAcl -VMName PortACLs | Remove-VMNetworkAdapterExtendedAcl

$DemoVM = "PortACLS"

Write-Host "Extended Port ACLs Demo"

#Clear IE Cache
RunDll32.exe InetCpl.cpl, ClearMyTracksByProcess 8

#Ping the VM
Start-Process Ping -ArgumentList "-t","PortACLS"

#Start IE
$ie = new-object -comobject InternetExplorer.Application
$ie.visible = $true
$ie.top = 200; $ie.width = 900; $ie.height = 600 ; $ie.Left = 100
$ie.navigate("http://portacls.demo.internal")

#Block all traffic script block
Read-Host "Block all traffic to the VM"
#Kill IE
Get-Process -Name IEXPLORE | Stop-Process
RunDll32.exe InetCpl.cpl, ClearMyTracksByProcess 8
Write-Host "`nAdd-VMNetworkAdapterExtendedAcl –VMName PortACLs –Action `“Deny`” –Direction `“Inbound`” –Weight 1"
Sleep 3
Write-Host "`nAll inbound traffic to the virtual machine is blocked" -foregroundcolor red -backgroundcolor yellow
Add-VMNetworkAdapterExtendedAcl -VMName PortACLs -Action "Deny" -Direction "Inbound" -Weight 1
#Start IE to show the site is offline
$ie = new-object -comobject InternetExplorer.Application
$ie.visible = $true
$ie.top = 200; $ie.width = 900; $ie.height = 600 ; $ie.Left = 100
$ie.navigate("http://portacls.demo.internal")

#Put in web traffic exception script block
Read-Host "`n`n`nAllow HTTP traffic to the VM"
#Kill IE
Get-Process -Name IEXPLORE | Stop-Process
RunDll32.exe InetCpl.cpl, ClearMyTracksByProcess 8
Write-Host "Add-VMNetworkAdapterExtendedAcl –VMName PortACLs –Action `“Allow`” –Direction `“Inbound`” –LocalPort 80 –Protocol `“TCP`” –Weight 10"
Sleep 3
Write-Host "`nAll inbound traffic to the virtual machine is blocked EXCEPT for HTTP" -foregroundcolor red -backgroundcolor yellow
Add-VMNetworkAdapterExtendedAcl -VMName PortACLs -Action "Allow" -Direction "Inbound" -LocalPort 80 -Protocol "TCP" -Weight 10
#Start IE to show that the website is now back online, despite all other traffic being blocked
$ie = new-object -comobject InternetExplorer.Application
$ie.visible = $true
$ie.top = 200; $ie.width = 900; $ie.height = 600 ; $ie.Left = 100
$ie.navigate("http://portacls.demo.internal")

Read-Host "`n`n`nEnd the demo"

#Clean up after the demo
Get-Process -Name Ping | Stop-Process
Get-Process -Name IEXPLORE | Stop-Process
Get-VMNetworkAdapterExtendedAcl -VMName PortACLs | Remove-VMNetworkAdapterExtendedAcl

2014
11.07

I’ve had a number of friends “go blue” over the years; that is, they joined Microsoft as full-time employees (FTEs). All were like me before they joined the company: there were things they liked about Microsoft and things they didn’t. Not long after joining, they flew to the mother ship in Redmond for “training” or “meetings” and returned very different people. Everything was awesome; even the dodgiest endeavours by Microsoft were the best things ever.

I and others would joke about our friends having their firmware updated. That was a joke … until now. I have the evidence that something mysterious is indeed happening. I was behind the curtains yesterday, and went to get a Coke from the fridge when I spotted this:

[Photo: sparkling water made by Microsoft]

Sparkling water made by … Microsoft! Of course, I will be taking this evidence to a lab to be analysed and searched for traces of psychotropic substances. I suspect this may indeed be the actual firmware upgrade that is supplied to unwitting new blue badges when they are transported to Redmond, WA. I shall follow up as soon as the results are in from the lab.

Note: This article is written with my tongue firmly in my cheek. If you are offended or think I am being serious in any way, then please visit a reality consultant.

2014
11.05

In my fifth demo at TechEd Europe 2014, the topic was OOB File Copy: the ability to place a file into a VM’s storage via the VMBus, without network connectivity to the VM (e.g. tenant isolation).

The script does the following:

  1. Cleans up the demo
  2. Opens Notepad; I manually copy and paste text from a website into the file and save it
  3. Enables the Guest Service Interface for the VM to enable OOB File Copy
  4. Copies the file to the VM
  5. Disables the Guest Service Interface
  6. Connects to the VM; I manually log in and open the file to verify that the file I created is now inside the VM
  7. Cleans up the demo

 

function KillProcess ($Target)
{
    $Processes = Get-Process
    Foreach ($Process in  $Processes)
    {
        if ($Process.ProcessName -eq $Target)
        {
            Stop-Process $Process
        }   
    }
}

cls

$DemoHost1 = "Demo-Host1"
$DemoVM1 = "OOBFileCopy"
$DemoFile = "CopyFile.txt"
$DemoFilePath = "C:\Scripts\TechEd\$DemoFile"
$VMConnect = "C:\Windows\system32\vmconnect.exe"
$VMConnectParams =  "$DemoHost1 $DemoVM1"

#Prep the demo
#Use a remote command to delete the file from the VM
Invoke-Command -ComputerName $DemoVM1 -ScriptBlock {Remove-Item -ErrorAction SilentlyContinue "C:\CopyFile.txt" -Confirm:$False | Out-Null}
Disable-VMIntegrationService $DemoVM1 -Name "Guest Service Interface"
Remove-Item -ErrorAction SilentlyContinue $DemoFilePath -Confirm:$False | Out-Null
New-Item $DemoFilePath -ItemType File | Out-Null

#Start the demo

#Note to self – script the network disconnect of the VM along with a continuous ping to confirm it.

Read-Host "`nStart the demo"
Write-Host "`nCreate a file to be copied into the virtual machine" -foregroundcolor red -backgroundcolor yellow
Start-Process "c:\windows\system32\notepad.exe" -ArgumentList $DemoFilePath

#Copy the file
Read-Host "`nEnable the Guest Service Interface integration service"
Write-Host "`nEnable-VMIntegrationService $DemoVM1 -Name `"Guest Service Interface`""
Enable-VMIntegrationService $DemoVM1 -Name "Guest Service Interface"

Read-Host "`nCopy the file to the VM"
Write-Host "`nCopy-VMFile $DemoVM1 -SourcePath $DemoFilePath -DestinationPath C: -FileSource Host"
Copy-VMFile $DemoVM1 -SourcePath $DemoFilePath -DestinationPath C: -FileSource Host

Read-Host "`nDisable the Guest Service Interface integration service"
Write-Host "`nDisable-VMIntegrationService $DemoVM1 -Name `"Guest Service Interface`""
Disable-VMIntegrationService $DemoVM1 -Name "Guest Service Interface"

#Check the file
Read-Host "`nLog into the virtual machine to check the file"

Set-VMHost -EnableEnhancedSessionMode $true | Out-Null
Start-Process $VMConnect -ArgumentList $VMConnectParams

#End the demo
Read-Host "`nEnd the demo"
KillProcess "vmconnect"
Disable-VMIntegrationService $DemoVM1 -Name "Guest Service Interface"
Remove-Item -ErrorAction SilentlyContinue $DemoFilePath -Confirm:$False | Out-Null
#Use a remote command to delete the file from the VM
Invoke-Command -ComputerName $DemoVM1 -ScriptBlock {Remove-Item -ErrorAction SilentlyContinue "C:\CopyFile.txt" -Confirm:$False | Out-Null}

 

2014
11.05

The fourth of my 10 demos at TechEd Europe 2014 was based on Enhanced Session Mode and all of the RemoteFX-over-VMBus goodness that it provides to admins interacting with VMs on WS2012 R2 Hyper-V.

It was a complicated demo to script – but certainly not the most complicated! The logic is:

  1. Clean up the environment – this involved disabling enhanced session mode (I normally use it)
  2. Connect to a VM and show the lack of copy/paste etc – note how I directly run VMConnect
  3. Enable enhanced session mode
  4. Log into the VM and show off the features of the RemoteFX-powered connect
  5. Copy/paste etc
  6. Clean up the demo

Some of the things I do in this script are used in some of the later, more complicated demo scripts. You’ll soon see lots more Invoke-Command, PSEXEC, and process manipulation.

 

function KillProcess ($Target)
{
    $Processes = Get-Process
    Foreach ($Process in  $Processes)
    {
        if ($Process.ProcessName -eq $Target)
        {
            Stop-Process $Process
        }   
    }
}

CLS
$DemoHost1 = "Demo-Host1"
$DemoVM1 = "OOBFileCopy"
$VMConnect = "C:\Windows\system32\vmconnect.exe"
$VMConnectParams =  "$DemoHost1 $DemoVM1"

#Prep the demo
KillProcess "vmconnect"
Set-VMHost -EnableEnhancedSessionMode $false | Out-Null

#Start the demo
Read-Host "Start the demo"
Write-Host "`nThe host is configured as default – same old VMConnect:" -foregroundcolor red -backgroundcolor yellow
Write-Host "`n(Get-VMHost).EnableEnhancedSessionMode"
(Get-VMHost).EnableEnhancedSessionMode | Out-Host

Read-Host "`nConnect to the demo virtual machine"
Start-Process $VMConnect -ArgumentList $VMConnectParams

Read-Host "`nStop VMConnect"
KillProcess "vmconnect"

#Enable enhanced session mode
Read-Host "`nEnabled Enhanced Session Mode"
Write-Host "`nLet’s get the new administrator experience:" -foregroundcolor red -backgroundcolor yellow
Write-Host "`nSet-VMHost -EnableEnhancedSessionMode `$true"
Set-VMHost -EnableEnhancedSessionMode $true | Out-Null
Write-Host "`n(Get-VMHost).EnableEnhancedSessionMode"
(Get-VMHost).EnableEnhancedSessionMode | Out-Host

Read-Host "`nConnect to the demo virtual machine"
Start-Process $VMConnect -ArgumentList $VMConnectParams
Write-Host "`nLog in and demonstrate Enhanced Session Mode" -foregroundcolor red -backgroundcolor yellow

Read-Host "`nEnd the demo"
KillProcess "vmconnect"
Set-VMHost -EnableEnhancedSessionMode $true | Out-Null

2014
11.05

My third demo at TechEd Europe 2014 focused on Resource Metering, which enables granular reporting of per-VM resource utilisation, primarily for the purposes of show-back reporting or cross-charging/billing. This feature can be used to satisfy one of the traits of a cloud, as defined by NIST: measured service.

In this demo, I:

  1. Clean up the demo
  2. Enable metering on a VM
  3. Modify the reporting interval from 1 hour to 10 seconds to suit the demo
  4. Use memory in the VM
  5. Copy a file to the VM (I might also run some network consuming process in the VM)
  6. Report on resource usage
  7. Dive deeper into network metering
  8. Clean up the demo

$DemoVM = "Metering"
$DemoFile = "C:\Scripts\TechEd\ResourceMeteringDemoFile.exe"

CLS
Get-VM $DemoVM | Disable-VMResourceMetering
Set-VMHost -ComputerName Demo-Host2 -ResourceMeteringSaveInterval 00:00:10

#Enable metering
Read-Host "`nEnable Resource Metering on the VM"
Write-Host "`nGet-VM $DemoVM | Enable-VMResourceMetering"
Get-VM $DemoVM | Enable-VMResourceMetering
Write-Host "`nResource Metering is enabled on $DemoVM" -foregroundcolor red -backgroundcolor yellow

#Use some resources
Sleep 1
Write-Host "`nUsing RAM in the VM $DemoVM" -foregroundcolor red -backgroundcolor yellow
#Loop to consume RAM in the VM
Invoke-Command -ComputerName $DemoVM -ScriptBlock {1..28|%{$x=1}{[array]$x+=$x}} -ErrorAction SilentlyContinue
#Copy a file to the VM
Write-Host "`nCopying a file to the VM $DemoVM" -foregroundcolor red -backgroundcolor yellow
Remove-Item "\\Metering\C$\ResourceMeteringDemoFile.exe" -ErrorAction SilentlyContinue
Copy-Item -Path $DemoFile -Destination "\\Metering\C$\ResourceMeteringDemoFile.exe"
Remove-Item "\\Metering\C$\ResourceMeteringDemoFile.exe" -ErrorAction SilentlyContinue
Copy-Item -Path $DemoFile -Destination "\\Metering\C$\ResourceMeteringDemoFile.exe"
Remove-Item "\\Metering\C$\ResourceMeteringDemoFile.exe" -ErrorAction SilentlyContinue

#Check usage data
Read-Host "`nCheck usage data"
Write-Host "`nMeasure-VM –VMName $DemoVM"
Measure-VM –VMName $DemoVM | Out-Host

#Check network usage data
Read-Host "`nCheck network usage data"
Write-Host "`n(Measure-VM –VMName $DemoVM).NetworkMeteredTrafficReport"
(Measure-VM –VMName $DemoVM).NetworkMeteredTrafficReport | Out-Host

 

Read-Host "`nEnd the demo"
Get-VM $DemoVM | Disable-VMResourceMetering
Set-VMHost -ComputerName Demo-Host2 -ResourceMeteringSaveInterval 01:00:00

2014
11.04

The second demo in my presentation focused on being able to export running virtual machines. We can also export a checkpoint to create a merged export. And then we can import a VM to clone it, maybe for troubleshooting, diagnostics, performance testing, upgrade testing, and rollback testing … all on a “production” VM with “production” data and services.

This script will:

  1. Clean up the lab
  2. Show the running VM
  3. Export the VM
  4. Show the export
  5. Remove the export
  6. Checkpoint the VM
  7. Export the checkpoint
  8. Import the checkpoint to create a new VM
  9. Highlight the new VM is running alongside the old VM

CLS
$DemoVM1 = "NUMA"
$ExportPath = "D:\Exports\"
$ImportedVMName = "Newly Imported VM"
$ImportVMPath = "D:\Virtual Machines\$ImportedVMName"

#Clean up the demo
Start-VM $DemoVM1 | Out-Null
CLS
If (Test-Path $ExportPath)
{
    Remove-Item $ExportPath -Recurse -Force | Out-Null
}
Remove-VMSnapshot $DemoVM1 -ErrorAction Ignore | Out-Null
Stop-VM $ImportedVMName -Force -ErrorAction Ignore | Out-Null
Remove-VM $ImportedVMName -Force -ErrorAction Ignore | Out-Null
Remove-Item $ImportVMPath -Recurse -Confirm:$false -ErrorAction Ignore | Out-Null

#Start the demo
Read-Host "Start the demo"
Write-Host "`nThis is the virtual machine $DemoVM1 that we will be working with" -foregroundcolor red -backgroundcolor yellow
Get-VM $DemoVM1 | Select Name, Status | Out-Host

#Export the VM
Read-Host "`nExport the running VM"
Write-Host "`nCreating an export of the virtual machine $DemoVM1 while it is running" -foregroundcolor red -backgroundcolor yellow
Write-Host "`nExport-VM $DemoVM1 -Path $ExportPath"
Export-VM $DemoVM1 -Path $ExportPath | Out-Host
Write-Host "`nHere is the export of the still running virtual machine" -foregroundcolor red -backgroundcolor yellow
Dir $ExportPath\NUMA

#Create a VM checkpoint
Read-Host "`nCreate a checkpoint of the VM $DemoVM1"
Write-Host "`nCreating a checkpoint (formerly known as a snapshot) of the virtual machine $DemoVM1" -foregroundcolor red -backgroundcolor yellow
Write-Host "`nCheckpoint-VM $DemoVM1 -SnapshotName `"Demo Checkpoint AKA Snapshot`""
Checkpoint-VM $DemoVM1 -SnapshotName "Demo Checkpoint AKA Snapshot"
Write-Host "`nThis is the new checkpoint" -foregroundcolor red -backgroundcolor yellow
Get-VMSnapshot $DemoVM1 | Out-Host

#Export the VM checkpoint
Read-Host "`nDo an export of the VM $DemoVM1 checkpoint"
If (Test-Path $ExportPath)
{
    Remove-Item $ExportPath -Recurse -Force | Out-Null
}
Write-Host "`nWe can export a checkpoint of a running virtual machine" -foregroundcolor red -backgroundcolor yellow

Write-Host "`nNew-Item -ItemType Directory $ExportPath\$DemoVM1"
New-Item -ItemType Directory $ExportPath\$DemoVM1

Write-Host "`nExport-VMSnapshot -Name `"Demo Checkpoint AKA Snapshot`" -VMName $DemoVM1 -Path $ExportPath"
Export-VMSnapshot -Name "Demo Checkpoint AKA Snapshot" -VMName $DemoVM1 -Path $ExportPath | Out-Host

Write-Host "`nHere is the export" -foregroundcolor red -backgroundcolor yellow
Dir $ExportPath\NUMA

#Import the VM checkpoint to create a new VM
Read-Host "`nImport the exported checkpoint to create a new VM"
Write-Host "`nNow we will create a whole new virtual machine from the exported checkpoint" -foregroundcolor red -backgroundcolor yellow

Write-Host "`n`$XML = gci `"$ExportPath$DemoVM1\Virtual Machines`" | Where-Object {`$_.Extension -eq `".XML`"}"
$XML = gci "$ExportPath$DemoVM1\Virtual Machines" | Where-Object {$_.Extension -eq ".XML"}

Write-Host "`n`$NewVM = Import-VM -Path `$XML.FullName -Copy -VhdDestinationPath `"$ImportVMPath\Virtual Hard Disks`" -VirtualMachinePath `"$ImportVMPath`" -SnapshotFilePath `"$ImportVMPath\Snapshots`" -SmartPagingFilePath `"$ImportVMPath`" -GenerateNewId"
$NewVM = Import-VM -Path $XML.FullName -Copy -VhdDestinationPath "$ImportVMPath\Virtual Hard Disks" -VirtualMachinePath $ImportVMPath -SnapshotFilePath "$ImportVMPath\Snapshots" -SmartPagingFilePath $ImportVMPath -GenerateNewId

Write-Host "`nRename-VM `$NewVM $ImportedVMName"
Rename-VM $NewVM $ImportedVMName

Write-Host "`nStart-VM $ImportedVMName"
Start-VM $ImportedVMName

Write-Host "`nHere is the original virtual machine $DemoVM1 and the new virtual machine $ImportedVMName" -foregroundcolor red -backgroundcolor yellow
Get-VM $ImportedVMName,$DemoVM1 | Select Name, Status | Out-Host

#Clean up the demo
Read-Host "`nEnd the demo"
Start-VM $DemoVM1 | Out-Null
If (Test-Path $ExportPath)
{
    Remove-Item $ExportPath -Recurse -Force | Out-Null
}
CLS
Remove-VMSnapshot $DemoVM1 -ErrorAction Ignore | Out-Null
Stop-VM $ImportedVMName -Force -ErrorAction Ignore | Out-Null
Remove-VM $ImportedVMName -Force -ErrorAction Ignore | Out-Null
Remove-Item $ImportVMPath -Recurse -Confirm:$false -ErrorAction Ignore | Out-Null

2014
11.03

As promised at TechEd Europe 2014, I am sharing each of the PowerShell scripts that I used to drive my feature demos in my session. The first of these scripts focuses on Non-Uniform Memory Access, or NUMA.


All of my demo scripts work in this kind of fashion:

  1. Clean up the lab
  2. Create demo environment variables for hosts, clusters, machines
  3. Write-Host "some cmdlet and parameters"
  4. Run the cmdlet with those parameters
  5. Optionally display the results
  6. Do more stuff
  7. Clean up the lab

Most of the code in these scripts is fluff, purely for display and lab prep/reset.

In this script I:

  1. Clean up the lab, ensuring the VM (which spans NUMA nodes because of its vCPU count) is just the way I want it.
  2. Get the NUMA config of the host.
  3. Get the NUMA config of the VM.
  4. Show that the VM is not NUMA aligned.
  5. Retrieve the VM’s advanced NUMA configuration.
  6. Shut down the VM, set it to use static memory, and restart it.
  7. Query the VM’s NUMA alignment, and see that it is aligned now, but we have to use static memory.
  8. Reset the lab back to the start.

CLS
$DemoVM1="NUMA"

#Reset the demo
Stop-VM $DemoVM1 -Force | Out-Null
Set-VMMemory $DemoVM1 -DynamicMemoryEnabled:$true -StartupBytes 512MB -MaximumBytes 8GB -MinimumBytes 256MB
Start-VM $DemoVM1 | Out-Null

#Start the demo
Read-Host "Start the demo"
Write-Host "`nGet-VMHostNumaNode"
Get-VMHostNumaNode
Write-Host "`nThe host has 2 NUMA nodes. Large VMs should also have 2 NUMA nodes for best performance" -foregroundcolor red -backgroundcolor yellow

Read-Host "`nCheck the NUMA and Dynamic Memory configuration of the VM $DemoVM1"
Write-Host "`nGet-VM $DemoVM1 | Select Name, ProcessorCount, MemoryMaximum, DynamicMemoryEnabled, NumaAligned, NumaNodesCount, NumaSocketCount"
Get-VM $DemoVM1 | Select Name, ProcessorCount, MemoryMaximum, DynamicMemoryEnabled, NumaAligned, NumaNodesCount, NumaSocketCount | Out-Host
Write-Host "`nGuest NUMA isn’t alligned and the 24 vCPU virtual machine has only 1 NUMA node" -foregroundcolor red -backgroundcolor yellow

Read-Host "`nGet the advanced NUMA configuration of the VM $DemoVM1"
Write-Host "`nGet-VMProcessor $DemoVM1 | Select Count, MaximumCountPerNumaNode, MaximumCountPerNumaSocket`n"
Get-VMProcessor $DemoVM1 | Select Count, MaximumCountPerNumaNode, MaximumCountPerNumaSocket
Write-Host "`nGet-VMMemory $DemoVM1 | Select MaximumPerNumaNode`n"
Get-VMMemory $DemoVM1 | Select MaximumPerNumaNode
Write-Host "`nThis is the NUMA node configuration that Hyper-V can present to the VM via Guest-Aware NUMA" -foregroundcolor red -backgroundcolor yellow

Read-Host "`nDisable Dynamic Memory for the VM $DemoVM1 & restart it"
Stop-VM $DemoVM1 -Force
Write-Host "`nSet-VMMemory $DemoVM1 -DynamicMemoryEnabled:$false -StartupBytes 8GB"
Set-VMMemory $DemoVM1 -DynamicMemoryEnabled:$false -StartupBytes 8GB
Start-VM $DemoVM1
Write-Host "`nGet-VM $DemoVM1 | Select Name, ProcessorCount, MemoryMaximum, DynamicMemoryEnabled, NumaAligned, NumaNodesCount, NumaSocketCount`n"
Get-VM $DemoVM1 | Select Name, ProcessorCount, MemoryMaximum, DynamicMemoryEnabled, NumaAligned, NumaNodesCount, NumaSocketCount | Out-Host
Write-Host "`nThe VM now is NUMA aligned and has a NUMA configuration that matches the host hardware" -foregroundcolor red -backgroundcolor yellow

#End the demo
Read-Host "`nEnd the demo"
Stop-VM $DemoVM1 -Force
Set-VMMemory $DemoVM1 -DynamicMemoryEnabled:$true -StartupBytes 512MB -MaximumBytes 8GB -MinimumBytes 256MB
Start-VM $DemoVM1 | Out-Null

2014
11.03

I’m going to do my best (no guarantees – I only have one body and pair of ears/eyes and NDA stuff is hard to track!) to update this page with a listing of each new Windows Server vNext Hyper-V and Hyper-V Server vNext (and related) features as they are revealed and discussed publicly by Microsoft.

Note that the features of WS2012 can be found here and the features of WS2012 R2 can be found here.

This list was last updated on 3/November/2014.

  • Backup Change Tracking: Microsoft will include change tracking so third-party vendors do not need to update/install dodgy kernel-level file system filters for change tracking of VM files.
  • Binary VM Configuration Files: Microsoft is moving away from text-based files to increase scalability and performance.
  • Cluster Cloud Witness: You can use Azure as a witness for quorum for a multi-site cluster.
  • Cluster Compute Resiliency: Prevents the cluster from failing a host too quickly after a transient error. A host will go into isolation, allowing services to continue to run without disruptive failover.
  • Cluster Functional Level: A rolling upgrade requires mixed-mode clusters, i.e. WS2012 R2 and Windows Server vNext hosts in the same cluster. The cluster will stay at the WS2012 R2 functional level until you finish the rolling upgrade and then manually increase the cluster functional level.
  • Cluster Quarantine: If a cluster node is flapping (going into & out of isolation too often) then the cluster will quarantine the node and drain it of resources (Live Migration – see MoveTypeThreshold and DefaultMoveType).
  • Cluster Rolling Upgrade: You do not need to create a new cluster or do a cluster migration to get from WS2012 R2 to Windows Server vNext. The new process allows hosts in a cluster to be rebuilt IN THE EXISTING cluster with Windows Server vNext.
  • Delivery of Integration Components: This will be done via Windows Update.
  • Distributed Storage QoS: Enables per-virtual hard disk QoS for VMs stored on a Scale-Out File Server.
  • File-Based Backup: Hyper-V is decoupling from volume backup for scalability and reliability reasons.
  • Hot-Add & Hot-Remove of vNICs: You can hot-add and hot-remove virtual NICs to/from a running virtual machine (see the sketch after this list).
  • Hyper-convergence: Microsoft does not believe in hyper-convergence. Compute and storage scale at different levels, therefore VMs run on one cluster, and storage is a different tier that can scale separately.
  • Hyper-V Manager Alternative Credentials: With CredSSP-enabled PCs and hosts, you can connect to a host with alternative credentials.
  • Hyper-V Manager Down-Level Support: You can manage Windows Server vNext, WS2012 R2, and WS2012 Hyper-V from a single console.
  • Hyper-V Manager WinRM: WinRM is used to connect to hosts.
  • Network Adapter Identification: Not vCDN! You can create/name a vNIC in the settings of a VM and see the name in the guest OS.
  • Network Controller: A new fabric management feature built into Windows Server, offering many new features that we see in Azure.
  • Power Management: Hyper-V has expanded support for power management, including Connected Standby.
  • Production Checkpoints: Uses VSS in the guest OS to create consistent snapshots that workload services should be able to support. Applying a checkpoint is like performing a VM restore from backup.
  • Replica Support for Hot-Add of VHDX: When you hot-add a VHDX to a running VM that is being replicated by Hyper-V Replica, the VHDX is available to be added to the replica set (MSFT doesn’t assume that you want to replicate the new disk).
  • Runtime Memory Resize: You can increase or decrease the memory assigned to Windows Server vNext guests.
  • Secure Boot for Linux: Enables protection of the boot loader in Generation 2 VMs.
  • Storage Replica: Built-in, hardware-agnostic, synchronous and asynchronous replication of Windows Storage, performed at the file system level. Enables campus or multi-site clusters.
  • Storage Spaces Shared Nothing: A “low cost” solution for archive (backup) or second-tier VMs. Runs only on prescribed OEM hardware (why I say “low cost”) with SATA disks. A cluster of nodes using internal disks to create consistent storage spaces pools that stretch across the servers. Compute is on a different cluster.
  • VM Storage Resiliency: A VM will pause when the physical storage of that VM goes offline. Allows the storage to come back (maybe Live Migration) without crashing the VM.
  • VM Upgrade Process: VM versions are upgraded manually, allowing VMs to be migrated to WS2012 R2 hosts with support from Microsoft.
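
To illustrate the hot-add/hot-remove of vNICs item above: the existing cmdlets simply start working against a running VM – a minimal sketch, assuming a Generation 2 vNext VM named VM01 and a virtual switch named External1 (both hypothetical names):

#Hot-add a new vNIC to the running VM and connect it to a switch
Add-VMNetworkAdapter -VMName VM01 -Name "Backup" -SwitchName External1

#... and hot-remove it again, all without stopping the VM
Remove-VMNetworkAdapter -VMName VM01 -Name "Backup"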

 

2014
10.31

I am live blogging this session. Refresh to see more.

Speaker: Claus Joergensen

I arrived 15 minutes late so the start of this is missing. Claus was finishing off a refresher on Storage Spaces.

The session so far seems to be aimed at beginners to SOFS – of which there are plenty. I will not take detailed notes on this piece unless I hear something I haven’t heard before.

FAQ

  • Can I use SOFS for IW (information worker) workloads? Not recommended. It is designed for the files of Hyper-V and SQL.
  • CSV cache size? As big as you can afford, e.g. 64 GB.
  • Can I use SOFS as a file share witness for Hyper-V clusters? Yes, but there are specific instructions.
  • How many nodes? 2-4 nodes in a SOFS.
  • How should I evaluate performance? Not with a file copy. Use DiskSpd (example below).
  • Disable NetBIOS? Yes. It can reduce failover times.
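
For reference, a DiskSpd run looks something like the below – a minimal sketch with made-up test parameters and a hypothetical test file path:

#8 KB random IO, 30% write, 4 threads with 8 outstanding IOs each,
#for 60 seconds against a 10 GB test file on the storage being evaluated
DiskSpd.exe -b8K -r -w30 -t4 -o8 -d60 -c10G C:\ClusterStorage\Volume1\testfile.dat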

CPS

2014
10.30

I am live blogging this so hit refresh to see more

Speaker: Mark Russinovich, CTO of Azure

Stuff Everyone Knows About Cloud Deployment

  • Automate: necessary to work at scale
  • Scale out instead of scaling up. Leverage cheap compute to get capacity and fault tolerance
  • Test in production – devops
  • Deploy early, deploy often

But there are many more rules and that’s what this session is about. Case studies from “real big” customers on-boarding to Azure. He omits the names of these companies, but most are recognisable.

Customer Lessons

30-40% have tried Azure already. A few are considering Azure. The rest are here just to see Russinovich!

Election Tracking – Vote Early, Vote Often

A customer (a US state) created an election tracking system for a live tally of US, state, and local elections. Voters can see a live tally online. A regional election worked out well, but they were concerned because the system was a little shaky even with this light-load election. They called in MSFT to analyze the architecture/scalability. The system was PaaS based.

Each TM load-balanced (A/P) view resulted in 10 SQL transactions. They expected 6,000,000 views in the peak hour, or nearly 17,000 queries per second. Azure DB scales to 5,000 connections, 180 concurrent requests, and 1,000 requests per second.


MSFT CAT put a cache between the front end and the DB, with a capability of 40,000 requests per instance. Now the web roles hit the cache (now called Redis) and the cache hits the Results Azure DB.

At peak load, the site hit 45,000 hits/sec, well over the planned 17,000. They did a post-mortem. The original architecture would have failed BADLY. With the cache, they barely made it through the peak demand. Buffering the databases saved their bacon.

To The Cloud

A customer that does CAD for buildings, plants, civil and geospatial engineering.

They went with PaaS: web roles on the front, app worker roles in the middle, and IaaS SQL (mirrored DB) on the back end. When they tested, the Azure system had 1/3 of the capacity of the on-premises system.

The web/app tier were on the same server on-premises. Adding a network hop and serialization of data transfer in the Azure implementation reduced performance. They merged them in Azure … web role and worker roles. They decided colocation in the same VMs was fine: they didn’t need independent scalability.

Then they found the IOPS of a VHD in Azure was too slow. They used multiple VHDs to create two Storage Spaces pools/vdisks for logs and databases. They then created a 16-VHD pool with 1 LUN for DBs and logs, and they got 4 times the IOPS.
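
Inside the VM, that aggregation is plain Storage Spaces – a minimal sketch, assuming the attached Azure data disks show up as poolable physical disks; all names are hypothetical:

#Gather all poolable disks (the attached Azure data VHDs) into one pool
$Disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SQLPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $Disks

#Create a simple (striped) virtual disk across all 16 disks for the DB/log LUN
New-VirtualDisk -StoragePoolFriendlyName "SQLPool" -FriendlyName "SQLData" -ResiliencySettingName Simple -NumberOfColumns 16 -UseMaximumSize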

What Does The Data Say?

A company that does targeted advertising, and digests a huge amount of data to report to advertisers.

Data sources were imported to Azure blobs. Azure worker roles sucked the data into an Azure DB. They used HDInsight to report on 7 days of data. They imported 100 CSV files of between 10 MB and 1.4 GB each, averaging 50 GB/day. Ingestion took 37 hours (more than a day, so they fell behind in analysis).

  1. They moved to Azure DB Premium.
  2. They parallelized import/ingestion by having more worker roles.
  3. They created a DB table for each day. This allowed easy 8th day data truncation and ingestion of daily data.

This total solution solved the problem … now an ingestion run took 3 hours instead of 37.

Catch Me If You Can

A movie company called Link Box or something. Pure PaaS streaming: a web role, talking using WCF binary remoting over TCP to a multi-instance cache worker role tier, plus a movie metadata database, with the movies in Azure blobs and cached by CDN.

If a cache role rebooted or updated, the web role would overwhelm the DB. They added a second layer of cache in the web roles – this removed pressure from the worker roles and the dependency on the worker role tier being “always on”.

Calling all Cars

A connected car services company went pure PaaS on Azure: a web role for admin and a web role for users. The cars are connected to Azure Service Bus to submit data to the cloud. The bus is connected to multiple instances of message processor worker roles. This included cache, notification, and message processor worker roles. The cache worked with a backend Azure SQL DB.

  • Problem 1: the message processing worker role (retrieving messages from the bus) was synchronous – 1 message processed at a time. They changed this to asynchronous – “give me lots of messages at once”.
  • Problem 2: processing was still one message at a time. They scaled out to process in parallel.

Let me Make You Comfortable

IoT… thermostats that centralize data and provide a nice HVAC customer UI. Data is sent to the cloud service. The initial release failed to support more than 35K connected devices, but they needed 100K connected devices. The goal was to get to 150K devices.

Synchronous processing of messages by a web role that wrote to an Azure DB. A queue sent emails to customers via an SMTP relay. Another web role, accessing the same DB, allowed mobile devices to access the system for user admin. Synchronous HTTP processing was the bottleneck.

They changed it so interactive queries were synchronous, while normal data imports (from thermostats) switched to asynchronous. They changed DB processing from single-row to batched multi-row. They moved hot DB tables from standard Azure SQL to Premium. XML client parameters were converted into DB info to save CPU.

A result of the redesign was an increase in capacity, and it reduced the number of VMs by 75%.

2014
10.30

Microsoft has published my session from TEE14 (From Demo to Reality: Best Practices Learned from Deploying Windows Server 2012 R2 Hyper-V) on the event site on Channel 9. In this session I cover the value of Windows Server 2012 R2 Hyper-V:

  • How Microsoft backs up big keynote claims about WS2012 R2 Hyper-V
  • How they enable big demos, like 2,000,000 IOPS from a VM
  • The lesser known features of Hyper-V that can solve real world issues

The deck was 84 slides and 10 demos … in 74 minutes. The final feature I talk about is what makes all that possible.

 

2014
10.30

Speaker: Spencer Shepler

He’s a team member in the CPS solution, so this is why I am attending. LinkedIn says he is an architect. Maybe he’ll have some interesting information about huge-scale design best practices.

A fairly large percentage of the room is already using Storage Spaces – about 30-40% I guess.

Overview

A new category of cloud storage, delivering reliability, efficiency, and scalability at dramatically lower price points.

Affordability is achieved via independence: compute AND storage clusters, separate management, separate scale for compute AND storage. I.e. Microsoft does not believe in hyper-convergence, e.g. Nutanix.

Resiliency: Storage Spaces enclosure awareness gives enclosure resiliency, SOFS provides controller fault tolerance, and SMB 3.0 provides path fault tolerance. vNext compute resiliency provides tolerance for brief storage path failure.

Case for Tiering

Data has a tiny current working set and a large retained data set. We combine SSDs ($/IOPS) and HDDs (big/cheap), placing data on the media that best suits the demands in scale vs performance vs price.

Tiering is done on a sub-file basis. A heat map tracks block usage. Admins can pin entire files. Automated transparent optimization moves blocks to the appropriate tier in a virtual disk. This is a configurable scheduled task.

The SSD tier also offers a persistent write-back cache that absorbs spikes in write activity by committing writes to flash. It levels out the perceived performance of workloads for users.
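
Both the pinning and the optimization run are scriptable – a minimal sketch, with hypothetical tier, volume, and file names:

#Pin a hot file (e.g. a gold VDI image) to the SSD tier of its tiered space
$SSDTier = Get-StorageTier -FriendlyName "SSDTier"
Set-FileStorageTier -FilePath "E:\VMs\GoldImage.vhdx" -DesiredStorageTier $SSDTier

#Move blocks to the correct tiers now, rather than waiting for the scheduled task
Optimize-Volume -DriveLetter E -TierOptimize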

$529/TB in a MSFT deployment. IOPS per $: 8.09. TB/rack U: 20.

Customer example: one got a 20x improvement in performance over their SAN. There was a 66% reduction in costs in a MSFT internal deployment for the Windows release team.

Hardware

Check the HCL for Storage Spaces compatibility. Note, if you are a reseller in Europe then http://www.mwh.ie in Ireland can sell you DataOn h/w.

Capacity Planning

Decide your enclosure awareness (fault tolerance) and data fault tolerance (mirroring/parity). You need at least 3 enclosures for enclosure fault tolerance. Mirroring is required for VM storage. A 2-way mirror gives you 50% of raw capacity as usable storage. 3-way mirroring offers 33% of raw capacity as usable storage. 3-way mirroring with enclosure awareness stores each interleave on each of 3 enclosures (2-way does it on 2 enclosures, but you still need 3 enclosures for enclosure fault tolerance).

Parity will not use SSDs in tiering. Parity should only be used for archive workloads.

Select drive capacities. You size capacity based on the amount of data in the set. Customers with large working sets will use large SSDs. Your quantity of SSDs is defined by IOPS requirements (see column count) and the type of disk fault tolerance required.

You must have enough SSDs to match the column count of the HDDs, e.g. 4 SSDs and 8 HDDs in a 12 disk CiB gives you a 2 column 2-way mirror deployment. You would need 6 SSDs and 15 HDDs to get a 2-column 3-way mirror. And this stuff is per JBOD because you can lose a JBOD.
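
Those tier, column, and redundancy decisions all end up as parameters on New-VirtualDisk – a minimal sketch with hypothetical pool/tier names, using the 2-column, 3-way mirror example above (tier sizes are invented):

#Define the two tiers in the pool (sizes are examples only)
$SSD = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$HDD = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

#2-column, 3-way mirrored tiered space with the default 1 GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VMStore1" `
    -StorageTiers $SSD,$HDD -StorageTierSizes 200GB,2TB `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 `
    -NumberOfColumns 2 -WriteCacheSize 1GB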

Leave the write-back cache at the default of 1 GB. Making it too large slows down rebuilds in the event of a failure.

Understanding Striping and Mirroring

Any drive in a pool can be used by a virtual disk in that pool. Like in a modern SAN that does disk virtualization, but very different to RAID on a server. Multiple virtual disks in a pool share physical disks. Avoid having too many competing workloads in a pool (for ultra large deployments).

Performance Scaling

Adding disks to Storage Spaces scales performance linearly. Evaluate storage latency for each workload.

Start with the default column counts and interleave settings and test performance. Modify configurations and test again.

Ensure you have the PCIe slots, SAS cards, and cable specs and quantities to achieve the necessary IOPS. 12 Gbps SAS cards offer more performance with large quantities of 6 Gbps disks (according to DataOn).

Use the LB (Least Blocks) load balance policy for MPIO. Use SMB Multichannel to aggregate NICs for network connections to a SOFS.
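
Setting that default is a one-liner per node – a minimal sketch, assuming the MPIO feature and its PowerShell module are installed:

#Set the default MPIO load balance policy to Least Blocks for new paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LB

#Confirm the setting
Get-MSDSMGlobalDefaultLoadBalancePolicy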

VDI Scenario

Pin the VDI template files to the SSD tier. Use separate user profile disks. Run optimization manually after creating a collection. Tiering gives you the best of both worlds for performance and scalability. Adding dedup for non-pooled VMs reduces space consumption.

Validation

You are using off-the-shelf h/w so test it. Note: DataOn supplied disks are pre-tested.

There are scripts for validating physical disks and cluster storage.

Use DiskSpd or SQLIO to test performance of the storage.

Health Monitoring

A single disk performing poorly can affect the whole storage system. A rebuild or a single application can degrade the overall capabilities too.

If you suspect a single disk is faulty, you can use PerfMon to see latency on a per physical disk level. You can also pull this data with PowerShell.

Enclosure Health Monitoring monitors the health of the enclosure hardware (fans, power, etc). All retrievable using PowerShell.
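
Both checks are easy from PowerShell – a minimal sketch; the counter path and cmdlets are standard, but the 50 ms threshold is only an example:

#Spot a slow physical disk: average seconds per transfer above ~50 ms is suspect
Get-Counter '\PhysicalDisk(*)\Avg. Disk sec/Transfer' |
    Select-Object -ExpandProperty CounterSamples |
    Where-Object { $_.CookedValue -gt 0.05 }

#Check enclosure hardware health (fans, power supplies, etc.)
Get-StorageEnclosure | Select-Object FriendlyName, HealthStatus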

CPS Implementation

LSI HBAs and Chelsio iWARP NICs in Dell R620s, with 4 enclosures.


Each JBOD has 60 disks with 48 x 4 TB HDDs and 12 x 800 GB SSDs. They have 3 pools to do workload separation. The 3rd pool is dual parity vDisks with dedupe enabled – used for backup.

Storage pools should be no more than 80-90 devices on the high end – a rule of thumb from MSFT.

They implement 3-way mirroring with 4 columns.

Disk Allocation

4 groups of 48 HDDs + 12 SSDs. A pool should have an equal set of disks in each enclosure.


A tiered space has 64 HDDs and 20 SSDs. Write cache = 1 GB, SSD tier = 555 GB, and HDD tier = 9 TB. Interleave = 64 KB. Enclosure aware = $true. RetireMissingPhysicalDisks = Always. Physical disk redundancy = 2 (3-way mirror). Number of columns = 2.


In CPS, they don’t have space for full direct connections between the SOFS servers and the JBODs. This reduces max performance. They have just 4 SAS cables instead of 8 for full MPIO. So there is some daisy chaining. They can sustain 1 or maybe 2 SAS cable failures (depending on location) before they rely on disk failover or 3-way mirroring.

2014
10.30

Speaker: Murali KK

Business Continuity Challenges

Too many roadblocks out there:

  • Too many complications, problems and mistakes.
  • Too much data with insufficient protection
  • Not enough data retention
  • Time-intensive media management
  • Untested DR & decreasing recovery confidence
  • Increasing costs

Businesses need simpler and standardized DR. Costs are too high in terms of OPEX, CAPEX, time, and risk.

Bypassing Obstacles

  • Automate, automate, automate
  • Tighter integration between systems availability and data protection
  • Increase breadth and depth of continuity protection
  • Eliminate the tape problem. Object? Are you still using punch cards?
  • Implement simple failover and testing
  • Get predictable and lower costs and operational availability

Moving into Microsoft Solutions …

There is not one solution. There are multiple solutions in the MSFT portfolio.

  • HA is built into clustering for on-premises availability of infrastructure
  • Guest OS HA can be achieved with NLB, clustering, SQL, and Exchange
  • Simple backup protection with Windows Server Backup (for small biz)
  • DPM for scalable backup
  • Integrate backup (WSB or DPM) into Azure to automate off-site backup to affordable tapeless and hugely scalable backup vaults
  • Orchestrated physical, Hyper-V, and VMware replication & DR using Azure Site Recovery. Options include on-premises to on-premises orchestration, or on-premises to Azure orchestration and failover.


 

Heterogeneous DR

Covering physical servers and VMware virtual machines. This is a future scenario based on InMage Scout.

A process server is a physical or virtual appliance deployed in the customer site. An InMage Scout data channel allows replication into the customer’s virtual network/storage account. A configuration server (central management of Scout) and a master target (repository and retention) run in Azure. A multi-tenant RX server runs in Azure to manage the InMage service.

How VMware to VMware Replication Works Now

This is on-premises to on-premises replication/orchestration.

Demo

There are two vSphere environments. He is going to replicate from one to another. CS and RX VMs are running as VMs in the secondary site.

There is application consistency leveraging VSS. A bookmarking process (application tags) in VMs enables failover consistency of a group of servers, e.g. a SharePoint farm.

In Scout vContinuum he enters the source vSphere details and credentials. A search brings up the available VMs. Selecting a VM shows the details and allows you to select virtual disks (exclude temp/paging file disks to save bandwidth). Then he enters the target vSphere farm details. A master target (a source Windows VM) that is responsible for receiving the data is selected. The replication policy is configured. You can pick a data store. You can opt to use Raw Device Mapping for larger performance requirements. You can configure retention – the ability to move back to an older copy of the VM in the DR site (playback). This can be defined by hours, days, or a quota of storage space. Application consistency can be enabled via VSS (which flushes buffers to get committed changes).

MA Offers

  • Support to migrate heterogeneous workloads to Azure: physical (Windows), virtual, and AWS workloads
  • Multi-tenant migration portal
  • And more – I can’t type fast enough!

You require a site-to-site VPN or a NAT IP for the cloud gateway. You need the two InMage VMs (CS and MT) running in your subscription.

There was a little bit more, but not much. Seems like a simple enough solution.

2014
10.29

Phew!

I have finally had the opportunity to speak at TechEd, TechEd Europe 2014 to be precise. My session had a looong title: From Demo to Reality: Best Practices for Deploying WS2012 R2 Hyper-V. The agenda was twofold:

  • Explain how Microsoft justifies big keynote claims about Hyper-V achievements and how they power big demos, e.g. 2 million IOPS from a VM.
  • Discuss the lesser known features of Hyper-V and related tech that can make a difference to real world consultants and engineers.


I had a LOT of material. When someone reviewed my deck and saw 84 slides and 10 demos, the comments always started with: you have a lot there; are you sure you can fit it into 75 minutes? Yes I am … now … I can fit it into just under 74 minutes!

All of my demos were scripted using PowerShell. I ran the script; it would prep the lab, Write-Host the cmdlets, run them, explain what was going on, get the results, and clean up the demo. I will be sharing the scripts over the coming weeks on this blog.

It was fun to do. I had some issues switching between the PPT machine and my demo laptop. And the clicker fought me at one point. But it was FUN.


Thank you to everyone who gave me feedback, who supported me, who advised me, and to those who helped. A special mention to Ben, Sarah, Rick, Joey, Mark, Didier, and especially Nicole.

2014
10.29

Speaker: Jeffrey Snover, uber genius, Distinguished Engineer, and father of PowerShell.

Tale of 3 Parents

  • UNIX: Small unit composition with pipes: A | B | C. Lacks consistency and predictability.
  • VMS/DCL: The consistent predictable nature impacted Jeffrey. Verb & noun model.
  • AS400/CL: Business oriented – enable people to do “real business”.

Keys to Learning PowerShell

  • Learn how to learn: requires a sense of exploration. I 100% agree. That’s what I do: explore the cmdlets and options and properties of objects.
  • Get-Help and Update-Help. The documentation is in the product. The help is updated regularly.
  • Get-Command and Show-Command
  • Get-Member and Show-Object –> the latter is coming.
  • Get-PSDrive: how hierarchical systems like drives are explored.

Demo

Into ISE to do some demo stuff.

He uses the OneGet and PowerShellGet modules to pull down modules from trusted libraries on the Internet (v5, from vNext).

Runs Show-Object to open a tree explorer of a couple of cmdlets.

Dir variable: … explore the virtual variable: drive to see the already defined variables available to you.

$c = get-command get-help

get-object $c

$c.parameters

$c.parameters.path

get-command –noun disk

Get-something | out-gridview

Get-Help something –ShowWindow

$ConfirmPreference = “Low”

2014
10.28

Speaker: Siddhartha Roy

Software-Defined Storage gives you choice. It’s a breadth offering and a unified platform for MSFT workloads and public-cloud scale. It is economical storage for private/public cloud customers.

About 15-20% of the room has used Storage Spaces/SOFS.

What is SDS? Cloud scale storage and cost economics on standard, volume hardware. Based on what Azure does.

Where are MSFT in the SDS Journey Today?

In WS2012 we got Storage Spaces as a cluster supported storage system. No tiering. We could build a SOFS using cluster supported storage, and present that to Hyper-V hosts via SMB 3.0.

  • Storage Spaces: Storage based on economical JBOD h/w
  • SOFS: Transparent failover, continuously available application storage platform.
  • SMB 3.0 fabric: high speed, and low latency can be added with RDMA NICs.

What’s New in Preview Release

  • Greater efficiency
  • More uptime
  • Lower costs
  • Reliability at scale
  • Faster time to value: get customers to adopt the tech

Storage QoS

Take control of the service and offer customers different bands of service.


Enabled by default on the SOFS. 2 metrics are used: latency and IOPS. You can define policies around IOPS using min and max values. Policies can be flexible: at the VHD level, VM level, or tenant/service level.

It is managed by System Center and PoSH. You have an aggregated end-to-end view from host to storage.

Patrick Lang comes on to do a demo. There is a file server cluster with 3 nodes. The SOFS role is running on this cluster. There is a regular SMB 3.0 file share. A host has 5 VMs running on it, stored on the share. One OLTP VM is consuming 8-10K IOPS using IOMETER. Now he uses PoSH to query the SOFS metrics. He creates a new policy with min 100 and max 200 for a bunch of the VMs. The OLTP workload gets a policy with min of 3000 and max of 5000. Now we see its IOPS drop down from 8-10K. He fires up VMs on another host – not clustered – the only commonality is the SOFS. These new VMs can take IOPS. A rogue one takes 2500 IOPS. All of the other VMs still get at least their min IOPS.
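
In the vNext preview this is driven by new Storage QoS cmdlets – a minimal sketch, assuming the preview bits and hypothetical VM/policy names; the policy is created on the SOFS cluster and then stamped onto each virtual disk from the Hyper-V side:

#Create a policy with an IOPS floor and ceiling on the SOFS cluster
$Policy = New-StorageQosPolicy -Name "Bronze" -MinimumIops 100 -MaximumIops 200

#Apply the policy to a VM's virtual hard disks (run against the Hyper-V host)
Get-VM -Name VM01 | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $Policy.PolicyId

#Watch per-flow performance, aggregated end-to-end
Get-StorageQosFlow | Sort-Object InitiatorIOPS -Descending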

Note: when you look at queried data, you are seeing an average for the last 5 minutes. See Patrick Lang’s session for more details.

Rolling Upgrades – Faster Time to Value

Cluster upgrades were a pain. They get much easier in vNext. Take a node offline and rebuild it in the existing cluster. Add it back in, and the cluster stays in mixed mode for a short time. Complete the upgrades within the cluster, and then disable mixed mode to get the new functionality. The “big red switch” is a PoSH cmdlet to increase the cluster functional level.
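
In the preview builds, that switch is a cmdlet – a minimal sketch:

#Check the current level, then throw the "big red switch" once every node is upgraded
Get-Cluster | Format-Table Name, ClusterFunctionalLevel

#One-way operation: the cluster can no longer accept WS2012 R2 nodes afterwards
Update-ClusterFunctionalLevel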


Cloud Witness

A third-site witness for a multi-site cluster, using a service in Azure.
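
Configuring it looks like a one-liner in the preview – a minimal sketch, with a hypothetical storage account name and key:

#Point the cluster quorum witness at an Azure storage account
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage account key>"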


Compute Resiliency

Stops the cluster from being over aggressive with transient glitches.


Related to this is the quarantine of flapping nodes. If a node goes in and out of isolation too often, it is “removed” from the cluster. The default quarantine is 2 hours – giving the admin a chance to diagnose the issue. VMs are drained from a quarantined node.

Storage Replica

A hardware-agnostic synchronous replication system. You can stretch a cluster with a low latency network. You get all the bits in the box to replicate storage. It uses SMB 3.0 as a transport. It can use metro-RDMA to offload and get low latency, and can add SMB encryption. Block-level synchronous replication requires <5 ms latency. There is also an asynchronous option for higher latency links.
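
For the server-server case, the preview exposes this through the SR cmdlets – a minimal sketch, using hypothetical server names, volumes, and replication group names:

#Replicate D: from Server1 to Server2, with a log volume on each side
New-SRPartnership -SourceComputerName "Server1" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "Server2" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"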


A slide showed the differences between synchronous and asynchronous replication.

Ned Pyle, a storage PM, comes on to demo Storage Replica. He’ll do cluster-cluster replication here, but you can also do server-server replication.

There is a single file server role on a cluster. There are 4 nodes in the cluster. There is asymmetric clustered storage, i.e. half the storage on 2 nodes and the other half on the other 2 nodes. He’s using iSCSI storage in this demo; it just needs to be cluster supported storage. He right-clicks on a volume and selects Replication > Enable Replication … a wizard pops up. He picked a source disk. Clustering doesn’t do volumes … it does disks. If you do server-server replication then you can replicate a volume. He picks a source replication log disk. You need to use a GPT disk with a file system. He picks a destination disk to replicate to, and a destination log disk. You can pre-seed the first copy of data (transport a disk, restore from backup, etc.). And that’s it.

Now he wants to show a failover. Right now, the UI is buggy and doesn’t show a completed copy. Check the event logs. He copies files to the volume in the source site. Then moves the volume to the DR site. Now the replicated D: drive appears (it was offline) and all the files are there in the DR site ready to be used.

After the Preview?

Storage Spaces Shared Nothing – Low Cost

This is a converged storage cluster with no separate storage tier. You create storage spaces using internal storage in each of your nodes. To add capacity, you add nodes.

You get rid of the SAS layer and you can use SATA drives. The cost of SSD plummets with this system.


You can grow pools to hundreds of disks. A scenario is for primary IaaS workloads and for storage for backup/replication targets.

There is a prescriptive hardware configuration. This is not for any server from any shop. Two reasons:

  • Lots of components are involved, so there’s a lot of room for performance issues and failure. This will be delivered by MSFT hardware partners.
  • They do not converge the Hyper-V and storage clusters in this design. They don’t recommend convergence because the rates of scale in compute and storage are very different. Only converge in very small workloads. I have already blogged about this on Petri with regards to converged storage – I don’t like the concept – it’s going to lead to a lot of costly waste.

VM Storage Resiliency

A more graceful way of handling a storage path outage for VMs. Don’t crash the VM because of a temporary issue.


CPS – But no … he’s using this as a design example that we can implement using h/w from other sources (soft focus on the image).


Not talked about but in Q&A: They are doing a lot of testing on dedupe. First use case will be on backup targets. And secondary: VDI.

Data consistency is done by a Storage Bus Layer in the shared nothing Storage Spaces system. It slips into Storage Spaces, and it’s used to replicate data across the SATA fabric and expand its functionality. MSFT is thinking about supporting 12 nodes but, architecturally, this feature has no limit on the number of nodes.

2014
10.28

I am live blogging. My battery is also low so I will blog as long as possible (hit refresh) but I will not last the session. I will photograph the slides and post later when this happens.

Speakers: Bala Rajagopalan & Rajeev Nagar.

The technology and concepts that you will see in Windows Server vNext come from Azure, where they are deployed, stressed, and improved at huge scales, and then we get that benefit of hyper-scale, enterprise-grade computing.

Traditional versus Software-Defined Data Centre

Traditional:

  • Tight coupling between infrastructure and services
  • Extensive proprietary and vertically integrated hardware
  • Siloed infrastructure and operations
  • Highly customized processes and configurations.

Software-Defined Datacenter:

  • Loosely coupled
  • Commodity industry-standard hardware
  • Standardized deployments
  • Lots of automation

Disruptive Technologies

A disaggregated s/w stack + disaggregation of h/w + capable merchant (commonly available) solutions.

Flexibility is limited by hardware-defined deployments. It blocks the adoption of non-proprietary solutions that can offer more speed. It is slower to deploy and change. The focus is on hardware, and not on services.

Battery dying …. I’ll update this article with photos later.
