2014
10.30

I am live blogging this so hit refresh to see more

Speaker: Mark Russinovich, CTO of Azure

Stuff Everyone Knows About Cloud Deployment

  • Automate: necessary to work at scale
  • Scale out instead of scale up. Leverage cheap compute to get capacity and fault tolerance
  • Test in production – devops
  • Deploy early, deploy often

But there are many more rules and that’s what this session is about. Case studies from “real big” customers on-boarding to Azure. He omits the names of these companies, but most are recognisable.

Customer Lessons

30-40% have tried Azure already. A few are considering Azure. The rest are here just to see Russinovich!

Election Tracking – Vote Early, Vote Often

Customer (a US state) created an election tracking system for a live tally of US, state, and local elections. Voters can see a live tally online. A regional election worked out well, but they were concerned because the system was a little shaky even under this light-load election. They called in MSFT to analyze the architecture/scalability. The system was PaaS based.

Each Traffic Manager (TM) load balanced (A/P) view resulted in 10 SQL transactions. They expected 6,000,000 views in the peak hour, or nearly 17,000 queries per sec. Azure DB scales to 5,000 connections, 180 concurrent requests, and 1,000 requests per sec.

image

MSFT CAT put a cache between the front-end and DB, with a capability of 40,000 requests per instance. Now the web roles hit the cache (now called Redis) and the cache hits the Results Azure DB.

At peak load, the site hit 45,000 hits/sec, well over the planned 17,000. They did a post-mortem. The original architecture would have failed BADLY. With the cache, they barely made it through the peak demand. Buffering the databases saved their bacon.
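My own back-of-napkin sketch of the cache-aside pattern CAT applied (all names made up; the real system used web roles in front of a Redis-style cache tier):

```python
import time

class ResultsCache:
    """Minimal cache-aside layer: page views read from here, not the DB."""
    def __init__(self, query_db, ttl_seconds=60):
        self.query_db = query_db        # fallback to the results database
        self.ttl = ttl_seconds
        self.store = {}                 # key -> (expires_at, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[0] > time.time():
            return entry[1]             # cache hit: no DB transaction
        value = self.query_db(key)      # cache miss: one DB round trip
        self.store[key] = (time.time() + self.ttl, value)
        return value

# Simulated database with a hit counter, to show the load reduction.
db_hits = 0
def query_db(key):
    global db_hits
    db_hits += 1
    return {"race": key, "votes": 12345}

cache = ResultsCache(query_db, ttl_seconds=60)
for _ in range(10000):                  # 10,000 page views...
    cache.get("governor")
print(db_hits)                          # ...one database query
```

The point being: with a short TTL on each tally, tens of thousands of hits per second collapse into a trickle of database queries.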

To The Cloud

A customer that does CAD for buildings, plants, civil and geospatial engineering.

They went with PaaS: web roles on the front, app worker roles in the middle, and IaaS SQL (mirrored DB) on the back end. When they tested, the Azure system had 1/3 of the capacity of the on-premises system.

The web/app tier was on the same server on-premises. Adding a network hop and serialization of data transfer in the Azure implementation reduced performance. So they merged the web role and worker roles in Azure. They decided colocation in the same VMs was fine: they didn’t need independent scalability.

Then they found the IOPS of a VHD in Azure was too slow. They used multiple VHDs to create two Storage Spaces pools/virtual disks for logs and databases. They then created a 16-VHD pool with 1 LUN for DBs and logs, and they got 4 times the IOPS.
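The arithmetic behind striping across VHDs, as I understand it (the 500 IOPS per-VHD figure is my assumption for Azure standard disks of the era, not from the session):

```python
def pooled_iops(vhd_count, iops_per_vhd=500):
    """Ideal linear scaling from striping a virtual disk across VHDs.
    Real gains are capped by other bottlenecks; this customer saw 4x
    from 16 VHDs, not the theoretical 16x."""
    return vhd_count * iops_per_vhd

print(pooled_iops(1))    # one VHD's ceiling
print(pooled_iops(16))   # the 16-VHD pool's theoretical ceiling
```

So the pool raises the IOPS ceiling roughly linearly, and something else (SQL, network, host) becomes the limit well before that.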

What Does The Data Say?

A company that does targeted advertising, and digests a huge amount of data to report to advertisers.

Data sources were imported to Azure blobs. Azure worker roles sucked the data into an Azure DB. They used HDInsight to report on 7 days of data. They imported 100 CSV files of between 10 MB and 1.4 GB each, an average of 50 GB/day. Ingestion took 37 hours (over a day, so they fell behind in analysis).

  1. They moved to Azure DB Premium.
  2. They parallelized import/ingestion by having more worker roles.
  3. They created a DB table for each day. This allowed easy 8th day data truncation and ingestion of daily data.

This total solution solved the problem … now an ingestion run took 3 hours instead of 37.
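Steps 2 and 3 can be sketched like this (my illustration with hypothetical names, not their code; the real workers were Azure worker roles doing bulk inserts):

```python
from concurrent.futures import ThreadPoolExecutor
from datetime import date, timedelta

def ingest_file(name):
    # Stand-in for "download CSV from blob storage, bulk insert into SQL".
    return len(name)  # pretend row count

def parallel_ingest(files, workers=8):
    """Step 2: fan the CSV backlog out across workers instead of one loop."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(ingest_file, files))

def daily_table(day):
    """Step 3: one table per day makes 8th-day cleanup a cheap truncate."""
    return f"Events_{day:%Y%m%d}"

def tables_to_keep(today, retention_days=7):
    return [daily_table(today - timedelta(days=n)) for n in range(retention_days)]
```

Dropping or truncating yesterday-minus-seven's whole table is vastly cheaper than a `DELETE ... WHERE date < X` over one giant table.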

Catch Me If You Can

A movie company called Link Box or something. Pure PaaS streaming. A web role, talking using WCF binary remoting over TCP to a multi-instance cache worker role tier. A movie metadata database, and the movies were in Azure blobs and cached by CDN.

If the cache role rebooted or updated, the web role would overwhelm the DB. They added a second layer of cache in the web roles – this removed pressure from the worker roles and the dependency on the worker role tier being “always on”.
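A rough sketch of that two-level cache (my own illustration, assuming an in-process L1 in each web role in front of the shared L2 cache tier):

```python
class TwoLevelCache:
    """L1 lives in the web role's own process; L2 is the shared cache
    worker-role tier. If L2 reboots, reads fall back to L1 copies (or,
    at worst, the database) instead of stampeding the metadata DB."""
    def __init__(self, l2_get, db_get):
        self.l1 = {}
        self.l2_get = l2_get
        self.db_get = db_get

    def get(self, key):
        if key in self.l1:
            return self.l1[key]          # L1 hit: no network at all
        value = None
        try:
            value = self.l2_get(key)     # shared cache tier
        except ConnectionError:
            pass                         # cache tier rebooting: degrade
        if value is None:
            value = self.db_get(key)     # last resort: the database
        self.l1[key] = value
        return value
```

The design choice is that the web tier no longer depends on the worker-role tier being up, it just gets slower and staler without it.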

Calling all Cars

A connected car services company did pure PaaS on Azure. A web role for admin and a web role for users. The cars are connected to Azure Service Bus to submit data to the cloud. The bus is connected to multiple instances of message processor worker roles. This included cache, notifications, and message processor worker roles. The cache worked with a backend Azure SQL DB.

  • Problem 1: the message processing worker (retrieving messages from bus) role was synchronous – 1 message processed at a time. Changed this to asynchronous – “give me lots of messages at once”.
  • Problem 2: Still processing was one at a time. They scaled out to process asynchronously.
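Both fixes together look something like this (a sketch in Python with an in-memory queue standing in for Service Bus; the shapes of the fixes are from the session, the code is mine):

```python
import asyncio

async def fetch_batch(queue, max_batch=32):
    """Fix 1: ask for lots of messages at once instead of one per call."""
    batch = [await queue.get()]
    while len(batch) < max_batch and not queue.empty():
        batch.append(queue.get_nowait())
    return batch

async def handle(msg):
    await asyncio.sleep(0)      # stand-in for cache/DB/notification I/O
    return msg * 2

async def drain(n_messages):
    queue = asyncio.Queue()
    for i in range(n_messages):
        queue.put_nowait(i)
    processed = []
    while not queue.empty():
        batch = await fetch_batch(queue)
        # Fix 2: process the whole batch concurrently, not one at a time.
        processed.extend(await asyncio.gather(*(handle(m) for m in batch)))
    return processed

print(asyncio.run(drain(10)))   # prints [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Batched retrieval amortises the round trip to the bus; concurrent handling overlaps the per-message I/O.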

Let me Make You Comfortable

IoT… thermostats that would centralize data and provide a nice HVAC customer UI. Data is sent to the cloud service. Initial release failed to support more than 35K connected devices. But they needed 100K connected devices. Goal was to get to 150K devices.

Synchronous processing of messages by a web role that wrote to an Azure DB. A queue sent emails to customers via an SMTP relay. Another web role, accessing the same DB, allowed mobile devices to access the system for user admin. Synchronous HTTP processing was the bottleneck.

They changed it so interactive queries remained synchronous, while normal data imports (from thermostats) were switched to asynchronous. They changed DB processing from single-row to batch multi-row, moved hot DB tables from standard Azure SQL to Premium, and converted XML client parameters into DB info to save CPU.

The result of the redesign was an increase in capacity and a 75% reduction in the number of VMs.
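The single-row vs batch multi-row change is easy to picture (an illustrative sketch with a stubbed-out database call, not their actual data layer):

```python
def insert_rows_single(conn_execute, rows):
    # Before: one round trip to the database per thermostat reading.
    for r in rows:
        conn_execute("INSERT INTO Readings VALUES (?, ?)", r)

def insert_rows_batched(conn_execute, rows, batch_size=100):
    # After: one multi-row statement per batch_size readings.
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        placeholders = ", ".join(["(?, ?)"] * len(batch))
        params = [v for row in batch for v in row]
        conn_execute(f"INSERT INTO Readings VALUES {placeholders}", params)
```

For 250 readings that is 3 round trips instead of 250, which is where most of the capacity gain comes from.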

2014
10.30

Microsoft has published my session from TEE14 (From Demo to Reality: Best Practices Learned from Deploying Windows Server 2012 R2 Hyper-V) onto the event site on Channel 9. In this session I cover the value of Windows Server 2012 R2 Hyper-V:

  • How Microsoft backs up big keynote claims about WS2012 R2 Hyper-V
  • How they enable big demos, like 2,000,000 IOPS from a VM
  • The lesser known features of Hyper-V that can solve real world issues

The deck was 84 slides and 10 demos … in 74 minutes. The final feature I talk about is what makes all that possible.

 

2014
10.30

Speaker: Spencer Shepler

He’s a team member in the CPS solution, so this is why I am attending. LinkedIn says he is an architect. Maybe he’ll have some interesting information about huge scale design best practices.

A fairly large percentage of the room is already using Storage Spaces – about 30-40% I guess.

Overview

A new category of cloud storage, delivering reliability, efficiency, and scalability at dramatically lower price points.

Affordability achieved via independence: compute AND storage clusters, separate management, separate scale for compute AND storage. I.e., Microsoft does not believe in hyper-convergence, e.g. Nutanix.

Resiliency: Storage Spaces enclosure awareness gives enclosure resiliency, SOFS provides controller fault tolerance, and SMB 3.0 provides path fault tolerance. vNext compute resiliency provides tolerance for brief storage path failures.

Case for Tiering

Data has a tiny current working set and a large retained data set. Combine SSDs ($/IOPS) and HDDs (big/cheap) to place data on the media that best suits its demands in scale vs performance vs price.

Tiering is done on a sub-file basis. A heat map tracks block usage. Admins can pin entire files. Automated transparent optimization moves blocks to the appropriate tier in a virtual disk. This is a configurable scheduled task.

The SSD tier also offers a persistent write-back cache to absorb spikes in write activity. It levels out the perceived performance of workloads for users.

$529/TB in a MSFT deployment. IOPS per $: 8.09. TB/rack U: 20.

Customer example: got 20x improvement in performance over SAN. 66% reduction in costs in a MSFT internal deployment for the Windows release team.

Hardware

Check the HCL for Storage Spaces compatibility. Note, if you are a reseller in Europe then http://www.mwh.ie in Ireland can sell you DataOn h/w.

Capacity Planning

Decide your enclosure awareness (fault tolerance) and data fault tolerance (mirroring/parity). You need at least 3 enclosures for enclosure fault tolerance. Mirroring is required for VM storage. A 2-way mirror gives you 50% of raw capacity as usable storage; 3-way mirroring offers 33%. 3-way mirroring with enclosure awareness stores each interleave on each of 3 enclosures (2-way does it on 2 enclosures, but you still need 3 enclosures for enclosure fault tolerance).

Parity will not use SSDs in tiering. Parity should only be used for archive workloads.

Select drive capacities. You size capacity based on the amount of data in the set. Customers with large working sets will use large SSDs. Your quantity of SSDs is defined by IOPS requirements (see column count)  and the type of disk fault tolerance required.

You must have enough SSDs to match the column count of the HDDs, e.g. 4 SSDs and 8 HDDs in a 12 disk CiB gives you a 2 column 2-way mirror deployment. You would need 6 SSDs and 15 HDDs to get a 2-column 3-way mirror. And this stuff is per JBOD because you can lose a JBOD.
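The rule-of-thumb arithmetic, as I understood it from the session (my encoding of it, not an official formula):

```python
def usable_fraction(mirror_copies):
    """2-way mirror stores 2 copies -> 50% usable; 3-way -> ~33%."""
    return 1 / mirror_copies

def min_ssds(columns, mirror_copies):
    """Tiering rule of thumb: the SSD count must cover every column
    of every data copy, so SSDs >= columns * copies."""
    return columns * mirror_copies

print(min_ssds(2, 2))   # the 4-SSD / 2-column / 2-way example -> 4
print(min_ssds(2, 3))   # the 6-SSD / 2-column / 3-way example -> 6
```

Both figures match the examples above, which is a handy sanity check when sizing a JBOD.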

Leave the write-back cache at the default of 1 GB. Making it too large slows down rebuilds in the event of a failure.

Understanding Striping and Mirroring

Any drive in a pool can be used by a virtual disk in that pool. Like in a modern SAN that does disk virtualization, but very different to RAID on a server. Multiple virtual disks in a pool share physical disks. Avoid having too many competing workloads in a pool (for ultra large deployments).

Performance Scaling

Adding disks to Storage Spaces scales performance linearly. Evaluate storage latency for each workload.

Start with the default column counts and interleave settings and test performance. Modify configurations and test again.

Ensure you have the PCIe slots, SAS cards, and cable specs and quantities to achieve the necessary IOPS. 12 Gbps SAS cards offer more performance with large quantities of 6 Gbps disks (according to DataOn).

Use LB policy for MPIO. Use SMB Multichannel to aggregate NICs for network connections to a SOFS.

VDI Scenario

Pin the VDI template files to the SSD tier. Use separate user profile disks. Run optimization manually after creating a collection. Tiering gives you best of both worlds for performance and scalability. Adding dedup for non-pooled VMs reduces space consumption.

Validation

You are using off-the-shelf h/w so test it. Note: DataOn supplied disks are pre-tested.

There are scripts for validating physical disks and cluster storage.

Use DiskSpd or SQLIO to test performance of the storage.

Health Monitoring

A single disk performing poorly can affect storage. A rebuild or a single application can degrade the overall capabilities too.

If you suspect a single disk is faulty, you can use PerfMon to see latency on a per physical disk level. You can also pull this data with PowerShell.

Enclosure Health Monitoring monitors the health of the enclosure hardware (fans, power, etc). All retrievable using PowerShell.

CPS Implementation

LSI HBAs and Chelsio iWARP NICs in Dell R620s with 4 enclosures:

image

Each JBOD has 60 disks with 48 x 4 TB HDDs and 12 x 800 GB SSDs. They have 3 pools to do workload separation. The 3rd pool is dual parity vDisks with dedupe enabled – used for backup.

Storage pools should be no more than 80-90 devices at the high end – a rule of thumb from MSFT.

They implement 3-way mirroring with 4 columns.

Disk Allocation

4 groups of 48 HDDs + 12 SSDs. A pool should have an equal set of disks in each enclosure.

image

A tiered space has 64 HDDs and 20 SSDs. Write cache = 1 GB. Tiers: SSD = 555 GB and HDD = 9 TB. Interleave = 64 KB. Enclosure aware = $true. RetireMissingPhysicalDisks = Always. Physical disk redundancy = 2 (3-way mirror). Number of columns = 2.

image

In CPS, they don’t have space for full direct connections between the SOFS servers and the JBODs. This reduces max performance. They have just 4 SAS cables instead of 8 for full MPIO. So there is some daisy chaining. They can sustain 1 or maybe 2 SAS cable failures (depending on location) before they rely on disk failover or 3-way mirroring.

2014
10.30

Speaker Murali KK

Business Continuity Challenges

Too many roadblocks out there:

  • Too many complications, problems and mistakes.
  • Too much data with insufficient protection
  • Not enough data retention
  • Time-intensive media management
  • Untested DR & decreasing recovery confidence
  • Increasing costs

Businesses need simpler and standardized DR. Costs are too high in terms of OPEX, CAPEX, time, and risk.

Bypassing Obstacles

  • Automate, automate, automate
  • Tighter integration between systems availability and data protection
  • Increase breadth and depth of continuity protection
  • Eliminate the tape problem. Object? Are you still using punch cards?
  • Implement simple failover and testing
  • Get predictable and lower costs and operations availability

Moving into Microsoft Solutions …

There is not one solution. There are multiple solutions in the MSFT portfolio.

  • HA is built into clustering for on-premise availability on infrastructure
  • Guest OS HA can be achieved with NLB, clustering, SQL, and Exchange
  • Simple backup protection with Windows Server Backup (for small biz)
  • DPM for scalable backup
  • Integrate backup (WSB or DPM) into Azure to automate off-site backup to affordable tapeless and hugely scalable backup vaults
  • Orchestrated physical, Hyper-V, and VMware replication & DR using Azure Site Recovery. Options include on-premises to on-premises orchestration, or on-premises to Azure orchestration and failover.

image

 

Heterogeneous DR

Covering physical servers and VMware virtual machines. This is a future scenario based on InMage Scout.

A process server is a physical or virtual appliance deployed in the customer site. An InMage Scout data channel allows replication into the customer’s virtual network/storage account. A configuration server (central management of Scout) and master target (repository and retention) run in Azure. A multi-tenant RX server runs in Azure to manage the InMage service.

How VMware to VMware Replication Works Now

This is on-premises to on-premises replication/orchestration:

image

Demo

There are two vSphere environments. He is going to replicate from one to another. CS and RX VMs are running as VMs in the secondary site.

There is application consistency leveraging VSS. A bookmarking process (application tags) in VMs enables failover consistency of a group of servers, e.g. a SharePoint farm.

In Scout vContinuum he enters the source vSphere details and credentials. A search brings up the available VMs. Selecting a VM shows the details and allows you to select virtual disks (exclude temp/paging file disks to save bandwidth). Then he enters the target vSphere farm details. A master target (a Windows VM) that is responsible for receiving the data is selected. The replication policy is configured. You can pick a data store. You can opt to use Raw Device Mapping for larger performance requirements. You can configure retention – the ability to move back to an older copy of the VM in the DR site (playback). This can be defined by hours, days, or a quota of storage space. Application consistency can be enabled via VSS (flushes buffers to get committed changes).

MA Offers

  • Support to migrate heterogeneous workloads to Azure: physical (Windows), virtual, and AWS workloads to Azure
  • Multi-tenant migration portal.
  • And more Smile I can’t type fast enough!

You require a site-to-site VPN or a NAT IP for the cloud gateway. You need the two InMage VMs (CS and MT) running in your subscription.

There was a little bit more, but not much. Seems like a simple enough solution.

2014
10.29

Phew!

I have finally had the opportunity to speak at TechEd, TechEd Europe 2014 to be precise. My session had a looong title: From Demo to Reality: Best Practices for Deploying WS2012 R2 Hyper-V. The agenda was twofold:

  • Explain how Microsoft justifies big keynote claims about Hyper-V achievements and how they power big demos, e.g. 2 million IOPS from a VM.
  • Discuss the lesser known features of Hyper-V and related tech that can make a difference to real world consultants and engineers.

image

I had a LOT of material. When someone reviewed my deck they saw 84 slides and 10 demos, and the comments always started with: you have a lot there; are you sure you can fit it into 75 minutes? Yes I am … now … I can fit it into just under 74 minutes Smile

All of my demos were scripted using PowerShell. I ran the script; it would prep the lab, Write-Host the cmdlets, run them, explain what was going on, get the results, and clean up the demo. I will be sharing the scripts over the coming weeks on this blog.

It was fun to do. I had some issues switching between the PPT machine and my demo laptop. And the clicker fought me at one point. But it was FUN.

image

Thank you to everyone who gave me feedback, who supported me, who advised me, and to those who helped. A special mention to Ben, Sarah, Rick, Joey, Mark, Didier, and especially Nicole.

2014
10.29

Speaker: Jeffrey Snover, uber genius, Distinguished Engineer, and father of PowerShell.

Tale of 3 Parents

  • UNIX: Small unit composition with pipes: A | B | C. Lacks consistency and predictability.
  • VMS/DCL: The consistent predictable nature impacted Jeffrey. Verb & noun model.
  • AS400/CL: Business oriented – enable people to do “real business”.

Keys to Learning PowerShell

  • Learn how to learn: requires a sense of exploration. I 100% agree. That’s what I do: explore the cmdlets and options and properties of objects.
  • Get-Help and Update-Help. The documentation is in the product. The help is updated regularly.
  • Get-Command and Show-Command
  • Get-Member and Show-Object –> the latter is coming.
  • Get-PSDrive: how hierarchical systems like drives are explored.

Demo

Into ISE to do some demo stuff.

He uses the OneGet and PowerShellGet modules to pull down modules from trusted libraries on the Internet (v5 from vNext).

Runs Show-Object to open a tree explorer of a couple of cmdlets.

dir variable: … explore the virtual variable: drive to see the already defined variables available to you.

$c = get-command get-help

show-object $c

$c.parameters

$c.parameters.path

get-command –noun disk

Get-something | out-gridview

Get-Help something –ShowWindow

$ConfirmPreference = “Low”

2014
10.28

Speaker: Siddhartha Roy

Software-Defined Storage gives you choice. It’s a breadth offering and unified platform for MSFT workloads and public cloud scale. Economical storage for private/public cloud customers.

About 15-20% of the room has used Storage Spaces/SOFS.

What is SDS? Cloud scale storage and cost economics on standard, volume hardware. Based on what Azure does.

Where are MSFT in the SDS Journey Today?

In WS2012 we got Storage Spaces as a cluster supported storage system. No tiering. We could build a SOFS using cluster supported storage, and present that to Hyper-V hosts via SMB 3.0.

  • Storage Spaces: Storage based on economical JBOD h/w
  • SOFS: Transparent failover, continuously available application storage platform.
  • SMB 3.0 fabric: high speed, and low latency can be added with RDMA NICs.

What’s New in Preview Release

  • Greater efficiency
  • More uptime
  • Lower costs
  • Reliability at scale
  • Faster time to value: get customers to adopt the tech

Storage QoS

Take control of the service and offer customers different bands of service.

image

Enabled by default on the SOFS. 2 metrics used: latency and IOPS. You can define policies around IOPS by using min and max. Can be flexible: on VHD level, VM level, or tenant/service level.

It is managed by System Center and PoSH. You have an aggregated end-to-end view from host to storage.

Patrick Lang comes on to do a demo. There is a file server cluster with 3 nodes. The SOFS role is running on this cluster. There is a regular SMB 3.0 file share. A host has 5 VMs running on it, stored on the share. One OLTP VM is consuming 8-10K IOPS using IOMETER. Now he uses PoSH to query the SOFS metrics. He creates a new policy with min 100 and max 200 for a bunch of the VMs. The OLTP workload gets a policy with min of 3000 and max of 5000. Now we see its IOPS drop down from 8-10K. He fires up VMs on another host – not clustered – the only commonality is the SOFS. These new VMs can take IOPS. A rogue one takes 2500 IOPS. All of the other VMs still get at least their min IOPS.

Note: when you look at queried data, you are seeing an average for the last 5 minutes. See Patrick Lang’s session for more details.
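A toy model of what the demo's min/max policies do (my sketch of the idea, definitely not the real SOFS policy engine, which rate-limits per normalized IO over time):

```python
def allocate_iops(policies, demands, capacity):
    """Each flow is guaranteed its min (if it wants it), capped at its
    max, and leftover capacity is shared out."""
    # Pass 1: everyone gets their guaranteed minimum (or less, if idle).
    alloc = {vm: min(lo, demands[vm]) for vm, (lo, hi) in policies.items()}
    spare = capacity - sum(alloc.values())
    # Pass 2: hand out spare capacity, never exceeding any flow's max.
    for vm, (lo, hi) in policies.items():
        extra = min(spare, min(demands[vm], hi) - alloc[vm])
        alloc[vm] += extra
        spare -= extra
    return alloc

# The demo's shape: an OLTP VM demanding ~9K IOPS under a 3000-5000
# policy, beside a small VM under a 100-200 policy, on shared storage.
print(allocate_iops({"oltp": (3000, 5000), "small": (100, 200)},
                    {"oltp": 9000, "small": 2500}, capacity=5000))
```

This is why the OLTP workload's observed IOPS dropped into its policy band while every other VM still received at least its minimum.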

Rolling Upgrades – Faster Time to Value

Cluster upgrades were a pain. They get much easier in vNext. Take a node offline. Rebuild it in the existing cluster. Add it back in, and the cluster stays in mixed mode for a short time. Complete the upgrades within the cluster, and then disable mixed mode to get new functionality. The “big red switch” is a PoSH cmdlet to increase the cluster functional level.

image

Cloud Witness

A third site witness for multi-site cluster, using a service in Azure.

image

Compute Resiliency

Stops the cluster from being over aggressive with transient glitches.

image

Related to this is quarantine of flapping nodes. If a node is in and out of isolation too much, it is “removed” from the cluster. The default quarantine is 2 hours – give the admin a chance to diagnose the issue. VMs are drained from a quarantined node.

Storage Replica

A hardware-agnostic synchronous replication system. You can stretch a cluster with a low latency network. You get all the bits in the box to replicate storage. It uses SMB 3.0 as a transport. It can use metro-RDMA to offload and get low latency, and can add SMB encryption. Block-level synchronous replication requires <5 ms latency. There is also an asynchronous connection for higher latency links.
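The essential difference between the two modes, sketched in a few lines (my illustration of the write paths, not Storage Replica's actual implementation):

```python
def synchronous_write(local_log, remote_log, data):
    """Sync: the application's write is acknowledged only once the data
    sits in the log on BOTH sites, which is why block-level synchronous
    replication needs the <5 ms round trip."""
    local_log.append(data)
    remote_log.append(data)     # in reality this blocks on the network
    return "ack"

def asynchronous_write(local_log, ship_queue, data):
    """Async: acknowledge after the local log write; ship the data to
    the remote site later, tolerating higher-latency links."""
    local_log.append(data)
    ship_queue.append(data)
    return "ack"
```

Sync gives zero data loss at the cost of added write latency; async decouples the ack from the wire at the cost of a small potential loss window.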

image

The differences between synch and asynch:

image

Ned Pyle, a storage PM, comes on to demo Storage Replica. He’ll do cluster-cluster replication here, but you can also do server-server replication.

There is a single file server role on a cluster. There are 4 nodes in the cluster. There is asymmetric clustered storage, i.e. half the storage on 2 nodes and the other half on the other 2 nodes. He’s using iSCSI storage in this demo; it just needs to be cluster supported storage. He right-clicks on a volume and selects Replication > Enable Replication … a wizard pops up. He picks a source disk. Clustering doesn’t do volumes … it does disks. If you do server-server replication then you can replicate a volume. He picks a source replication log disk. You need to use a GPT disk with a file system. He picks a destination disk to replicate to, and a destination log disk. You can pre-seed the first copy of data (transport a disk, restore from backup, etc). And that’s it.

Now he wants to show a failover. Right now, the UI is buggy and doesn’t show a completed copy. Check the event logs. He copies files to the volume in the source site. Then moves the volume to the DR site. Now the replicated D: drive appears (it was offline) and all the files are there in the DR site ready to be used.

After the Preview?

Storage Spaces Shared Nothing – Low Cost

This is a no-storage-tier converged storage cluster. You create storage spaces using internal storage in each of your nodes. To add capacity you add nodes.

You get rid of the SAS layer and you can use SATA drives. The cost of SSD plummets with this system.

image

You can grow pools to hundreds of disks. A scenario is for primary IaaS workloads and for storage for backup/replication targets.

There is a prescriptive hardware configuration. This is not for any server from any shop. Two reasons:

  • Lots of components involved. There’s a lot of room for performance issues and failure. This will be delivered by MSFT hardware partners.
  • They do not converge the Hyper-V and storage clusters in the diagram (above). They don’t recommend convergence because the rates of scale in compute and storage are very different. Only converge in very small workloads. I have already blogged this on Petri with regards to converged storage – I don’t like the concept – going to lead to a lot of costly waste.

VM Storage Resiliency

A more graceful way of handling a storage path outage for VMs. Don’t crash the VM because of a temporary issue.

image

CPS – But no … he’s using this as a design example that we can implement using h/w from other sources (soft focus on the image).

image

Not talked about but in Q&A: They are doing a lot of testing on dedupe. First use case will be on backup targets. And secondary: VDI.

Data consistency is done by a Storage Bus Layer in the shared nothing Storage Spaces system. It slips into Storage Spaces, is used to replicate data across the SATA fabric, and expands its functionality. MSFT is thinking about supporting 12 nodes, but architecturally, this feature has no limit on the number of nodes.

2014
10.28

I am live blogging. My battery is also low so I will blog as long as possible (hit refresh) but I will not last the session. I will photograph the slides and post later when this happens.

Speakers: Bala Rajagopalan & Rajeev Nagar.

The technology and concepts that you will see in Windows Server vNext come from Azure, where they are deployed, stressed and improved at huge scales, and then we get that benefit of hyper-scale enterprise grade computing.

Traditional versus Software-Defined Data Centre

Traditional:

  • Tight coupling between infrastructure and services
  • Extensive proprietary and vertically integrated hardware
  • Siloed infrastructure and operations
  • Highly customized processes and configurations.

Software-Defined Datacenter:

  • Loosely coupled
  • Commodity industry standard hardware
  • Standardized deployments
  • Lots of automation

Disruptive Technologies

Disaggregated s/w stack + disaggregation of h/w + capable merchant (commonly available) solutions.

Flexibility is limited by hardware defined deployments. That blocks adoption of non-proprietary solutions that can offer more speed. Slower to deploy and change. Focus is on hardware, and not on services.

Battery dying …. I’ll update this article with photos later.

2014
10.28

Speaker: Ben Armstrong

Almost everyone in the room using Hyper-V. Large number also using VMware. About 1/3 using public cloud. Maybe 20% doing hybrid cloud.

Hybrid Cloud

Microsoft believes that hybrid cloud is the endpoint – seamless movement between on-premises and the public cloud.

Hyper-V scales. Azure runs on stock Hyper-V. It required a lot of work for WS2012, but it’s stock Hyper-V, and that’s over 1 million servers running Hyper-V. If 1 in 10,000 installs shows a bug, and you run a hypervisor on that many hosts deploying 500m VMs per day, then you test the product heavily. We benefit from this with our on-premises deployments.

image

What have Microsoft learned from Azure: Standardize your build – Keep the hosts simple and standardized. Don’t vary. Change does not scale.

Private Cloud Improvements

  • Large scale VMs and clusters
  • Accelerated live migration
  • Dynamic memory with hot add
  • Comprehensive host and guest clustering support
  • Rolling upgrades
  • Mixed mode cluster support
  • VM compute resiliency
  • Cluster-aware updating
  • Broad Linux distro support
  • In-guest vRSS support
  • Hot add and online resize of virtual disk storage
  • Live backup
  • Comprehensive management

Hybrid Cloud

Hybrid Cloud is about extending your data centre, not replacing it. In the MSFT Cloud OS, that’s Hyper-V, with SysCtr/WAP for private cloud, and Azure/partner-run hosting clouds for public cloud. MSFT makes it seamless.

Right now, only Microsoft is listed as a leader in 4 categories of hybrid cloud computing by Gartner.

Linux and Windows parity on Hyper-V

Run Linux without compromises on a single host: Hyper-V. You don’t have to partition hosts. A single UI for managing Linux: backup, monitoring, capacity planning, etc. All too often, the Linux people want to run their own virtualization, and it makes no sense. It’s a waste of time, effort, and importantly, money.

Open Source

Yes, Hyper-V is supported in OpenStack. And it’s supported in something called Vagrant. Microsoft has been working closely with them.

USP

Only company offering on-premises IaaS, public IaaS, public PaaS, and Public SaaS.

Change

People are running more VMs on:

  • More hardware
  • Less hardware

Hmm! How we scale is different now. Half a rack can run thousands of VMs. And in hyper scale clouds, you see a lower density for cost effectiveness and performance SLA. In private cloud, we focus on smaller clusters.

Virtualization is now assumed. Physical is no longer the default.

Workload mobility is assumed: People expect Live Migration or vMotion.

Secure isolation is assumed. Customers in different VMs expect that they are secure from other tenants’ VMs.

Hardware failure fault tolerance is assumed.

“I am the fabric administrator”. This is a new job title for the person who runs virtualization, network, and storage. What happens inside the VMs is not their worry. MSFT is hearing from businesses that they want fabric admins to have no access to data in the VMs. There is no solution to that today. In contradiction to this, that person used to be the domain admin that fixed everything. But now, it’s not uncommon that they don’t have sign-in credentials for the tenants’ VMs and cannot provide support.

Cluster Rolling Upgrades

Hyper-V upgrades are frequent. Downtime is hated by admins and tenants alike. Admins want to hide the fact that an upgrade is happening. This new process allows mixed mode clusters and Live Migration so you can rebuild nodes in a cluster with a new OS and LM VMs around without anyone noticing. Yes: you keep the cluster – it’s a host rebuild within the cluster and not a cluster migration of the past.

Compute Resiliency

Hyper-V failures are nearly always caused by hardware, drivers, or firmware from OEMs. A big area of investment for Microsoft, including transient failures.

Backup

I know that this has been a focus point for Ben. Hyper-V is decoupling VM backup from the underlying storage. File based backup is the way forward, with efficient change tracking for backup. Provides reliability, scale, and performance. This session is on right now (Taylor Brown) so watch the recording in 24 hours.

Many more changes

  • Delayed VM upgrade
  • New IC servicing model
  • Secure boot for Linux Generation 2 VMs
  • Distributed Storage QoS
  • Resilient VM configuration
  • And more.

Demo: Compute Resiliency

Clustering saves people over and over. But clustering is complex and it can break. Often caused by a transitory error, such as a cable being unplugged, etc. When there is a heartbeat failure, then you get a 30 second outage while VMs are failed over, and then there’s a wait time for the VMs to boot.

Ben demos with 3 nodes. A script will kill the cluster service on one of the nodes. In 2012 R2, the cluster would panic and do a failover. In vNext, the server is marked as isolated – there’s a problem. VMs are still “running” but marked as unmanaged. A failover won’t happen immediately in case the node comes back online. The wait time is 4 minutes by default, but it is configurable. This behaviour is only applied to running VMs.

Another new feature is quarantine. When a host is frequently going in and out of the isolated state, it will be quarantined. It’s a disruptive server that causes a lot of churn. VMs are migrated off (green quarantine) and then it is moved into red quarantine. Now it’s persona non grata (no new workloads placed there) until you resolve the intermittent issue. The default quarantine is 2 hours – giving the admin a chance to diagnose the issue – and there is a timer so a host can come out of quarantine automatically.

Microsoft Were The First to Do Lots in Virtualization

  • Hardware-assisted live migration for blazing performance.
  • SR-IOV with Live Migration
  • Fibre Channel in VMs with Live Migration.
  • TRIM and UNMAP

Is VMware really the market leader and innovator?

Ben goes into Q&A.

Question: Is Hyper-V Manager going away? No. Emphatically. It’s used even by the happiest SysCtr and fabric controller admins, especially when things go wrong.

That’s a wrap!

2014
10.28

Speakers: Jeff Woolsey and Matt McSpirit.

I am bursting – and I don’t just mean to use the toilet. Here comes the grand reveal for Windows Server & System Center vNext.

image

Here we go with a video: your data centre is an orchestra and you are the conductor. Left: compute. Right: networking. In front: storage. Keeping everything together is the rhythm of management. Software-define all of it, make it possible in your data centre with Windows Server & System Center. Extend it with Azure.

Jeff Woolsey starts things off. We get the 3 clouds in one obligatory slide. Hundreds of new features that couldn’t be shown in the keynote. This foundation session will dive a little deeper. Jeff talks about “software-defined everything”.

MSFT Cloud OS hybrid cloud:

  • Empower enterprise mobility
  • Create internet of things
  • Enable application innovation
  • Unlock insights on any data
  • Transform the data center

Ugh: CPS. Yawn IMO.

More on WAP. You can run an Azure-consistent cloud on premises. Use this internally or as a service provider. Expect big pushes on WAP: it’s the front-end for enterprise deployments of Hyper-V/System Center for vNext onwards.

MSFT is not bothering to change the scalability figures for Hyper-V because they haven’t had a customer hit the WS2012 numbers yet. The numbers were Top Gear numbers – a big wow, but so high that they aren’t a blocker.

There is a major emphasis on guest clustering in Hyper-V. No artificial scale limitations. You can do in-place or rolling upgrades of clusters in vNext from WS2012 R2. This includes mixed mode and live migration within the cluster.

Linux is getting vRSS support for network scalability.

Networking

Software-defined networking still puzzles people. Decouples the application/service from the underlying network. Doing lots to increase reliability and manageability.

Now RDMA to be added in network virtualization. Supporting VXLAN and NVGRE for SDN.

A new Network Controller from Azure is being added to Windows Server.

A software load-balancer based on Azure is being added in the box in vNext.

Distributed firewall and cloud-scale network traffic management.

Storage

There is no such thing as a happy storage customer – Jeff Woolsey.

2012 gave us storage spaces. 2012 R2 added auto scaling. In vNext you get more. Microsoft does not use proprietary storage from the usual names. They use software-defined storage.

Storage Replica is synchronous replication in the box that works with any storage – you can even do it with a couple of laptops (allegedly).

Storage QoS is a killer feature for service providers.

Patrick Lang comes on stage to do Storage QoS demo. Perfmon is running, showing storage throughput from a bunch of VMs. VM1 is dominating.

He creates SLAs and applies them to VMs. Note: all PowerShell. He starts a bunch more VMs. Some rogue ones try to take the storage bandwidth, but the heavy user (a file server) gets the throughput that it needs for its SLA.
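From memory, the policy side of that demo looked roughly like this. A hedged sketch – cmdlet and parameter names are assumptions based on the preview bits, and the policy and VM names are hypothetical:

```powershell
# Sketch: a Storage QoS SLA with a guaranteed minimum and a cap.
# Names and parameters are assumptions, not the final shipped syntax.

# On the Scale-Out File Server cluster: define the policy.
$gold = New-StorageQosPolicy -Name "Gold" -PolicyType Dedicated `
    -MinimumIops 1000 -MaximumIops 5000

# On the Hyper-V host: pin the file server VM's virtual disks to it.
Get-VM -Name "FS01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $gold.PolicyId
```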

In 2012 they demoed 1 million IOPS from a single Hyper-V VM. Last year, they did it with 1.6 million IOPS. In Server vNext, right now, they can do 2 million IOPS from a single Hyper-V VM.

Someone Winter comes on to talk System Center. About 1/3 are using SysCtr 2012. One or two hands for older versions. 2/3 of the room are NOT USING SYSTEM CENTER.

MSFT will “ship another version of System Center in the summer along with Windows Server”.

Making CPS work was an eye opener for System Center. They took over 500 improvements into SysCtr 2012 R2 and vNext. The suite was too complex to install and integrate.

There is a cultural shift happening. Cloud is now. Users want services now, not in 4 hours or tomorrow. Do on-premises cloud or they’ll do it directly in public cloud. The solution is WAP offering service, SysCtr offering management, and Windows Server/Hyper-V offering compute, networking and storage.

You can do Azure Operational Insights with or without System Center:

image

 

Matt McSpirit comes on. He’s between us and lunch.

Azure Site Recovery now manages DR replication for:

  • Between Hyper-V and Azure
  • Between two Hyper-V sites
  • Between two VMware sites using InMage
  • Between two Hyper-V sites using array replication (just gone into preview)

Coming soon: From VMware to Azure DR replication using Azure Site Recovery Services.

Matt demos the setup of ASR and configuring a one-click failover plan.

Lunch time!

Summary: Azure is more than just cloud. It’s trickling down to on-premises infrastructure.

2014
10.28

Welcome to TechEd Europe 2014, blogged live to you by me from Barcelona, Spain. It’s early, I got in near the front of the hall, and the crowd is streaming in as DJ Joey Snow mixes.

image

The stage is lit blue and purple, with the press sitting front and centre.

image

The crowd is waiting for the show to start.

image

Cameras are rolling.

image

And here we go ….

Jason Zander, VP of Azure, comes out. He starts on the pitch about the number of devices. The number of connected devices now outnumbers the number of humans on the planet. This brings up IoT. Here comes mobile-first, cloud-first.

image

 

What are Microsoft enabling in enterprise devices to expand your digital work and personal lives?

Here comes Joe Belfiore to talk about Windows 10 in the Enterprise.

image

1) Windows 10 delivers a single platform across a wide range of devices, so your investment covers them all

2) Provide users with a platform that they will love to use.

3) Provide protection against modern security threats.

4) A way to manage all devices in a way that makes sense for businesses

Breadth of Devices

This covers everything from sensors in a jet plane to PCs, to tablets, to phones, to giant computing systems.

Love to Use

Interesting topic: Windows 8 got a “mixed response”. Customer satisfaction for keyboard/mouse users of Windows 8 was lower than for touch users. Now they are making non-touch an emphasis point.

They have focused on that large group of Windows 7 users on classic PCs. The Start Menu is shown. Search is now a part of the Start Menu and is shown – this includes web searches so they are adding value to “Windows 7 features”. Windows 8 Live Tiles are added to the familiar start menu – adding value to familiar features. So this isn’t a big disruptive change for users – it’s more evolutionary.

Live Tiles add personalisation to a work environment – to make Windows more enjoyable for users.

Now he starts on the apps and the store. Today, they are not being used as much as MSFT would like because “the apps behave so differently”. Apps of all kinds are in the Start Menu and launch in windows that run on the desktop.

And then he gets a big round of applause for CTRL+V at the command prompt:

image

Two more power user features coming in the next flight of Windows Insiders releases.

He has a multiple monitors display set up. Right now you cannot snap a window to the joining edge of a multi-monitor display. But Snap in Windows 10 allows you to snap a window to the “join”.

Now he moves over to the Surface Pro. Touchpads are all implemented differently by the OEMs. MSFT is adding its own multi-touch gestures to the touchpad in Windows 10. A 3-finger swipe up/down hides/reveals all windows; a 3-finger swipe left/right does Alt+Tab.

End user/consumer stuff will come in the new year. Then he shows the Continuum UI for hybrid devices (see previous posts).

Protecting Corporate Data

IT can control which of the PC’s apps are used on the corporate network – allegedly.

Demo: a Windows 10 PC that the user logs into. The company authorizes some apps to use company data and appear in the Start Menu. The user can also run non-authorized apps (including 3rd party). When she hits Save As in Word she can see Personal and Company stores. The user cannot save company data into a non-corporate store. For example, she cannot paste from Word (a company app) into Twitter (a non-company app). Policy can allow a user override … assuming that the user enters a reason, which goes into an audit log that IT manages.

You’ll see this in Windows Phone too – one OS, remember?

Protecting User ID

There’s lots MSFT thinks it can do to protect against modern security threats. Today you can do two-factor authentication, but it’s cumbersome to deploy. They are going to enable cheaper two-factor auth and fingerprint biometrics.

They use the Windows Phone as a second factor. When you log into the PC, the phone prompts you via Bluetooth to enter a PIN on the phone. Do that, and your log in on the PC completes. No additional devices – just the company phone that you might have been buying anyway. The demo was done with a Windows Phone.

Windows 10 Management for Continuous Innovation

Improving the app store so you can use it to deploy your own or your licensed s/w. Hmm, SCCM? You’ll have a choice of GPO or MDM to manage all kinds of devices – “it’s your choice” – MSFT will facilitate 3rd party MDM.

Volume License support is coming via license claim and reuse in the Store. No MSA is required to use the Store infrastructure in the future. You can set up your own company store to manage your licensing.

Managed in-place upgrades are coming. They are ending the era of wipe and reload. OOBE is becoming more business friendly: a user gets a PC, goes through OOBE, and corporate policy is applied. There’s a “my organization owns it” option in OOBE. There’s a sign-in dialog (looks like Workplace Join) and policy is then applied accordingly. There will be two-factor auth via admin-managed SMS. Then policy and pre-assigned apps are deployed: custom data protection, authentication, security policy, etc.

This is like a merger of SCCM and AD GPO into a cloud-based solution. I like the message. Let’s see what the final product looks like.

Cloud

Back to Jason Zander again to talk cloud. Let’s watch the crowd to see how they react. It didn’t go well in Houston in May.

Asked to store more data and increase agility, security, and data privacy. Costs must be reduced while increasing flexibility for everyone. The pace of innovation is advancing at a dizzying rate. Businesses that adapt to this will thrive. Right now, SMEs are doing this.

image

MSFT cloud is more than Azure and O365. It’s also on-premises and with partner hosting companies. Three USPs to the cloud OS:

  • Hybrid
  • Enterprise grade
  • Hyper-scale

Key investments in Windows Server vNext in software-defined everything, such as the new Network Controller. This can run your software-defined networking.

Many are coming off of W2003 and are looking for new features, etc. MSFT wants to make that seamless: www.microsoft.com/ws2003eos.

A way to get started with the cloud is to just connect and extend functionalities using hybrid solutions, such as Azure Site Recovery Services for DR in the cloud.

Announcing: Azure Operational Insights. Install an agent on existing on-premises machines and start to log information into the cloud to do deep insights on how things are running and visualize that data. There are security, capacity planning and change management insight packs. You can do a fast search and fix incidents. See System Center Advisor *cough*

Bring Azure to your on-premises data centre. This is Windows Azure Pack (WAP). You get the same skin as Azure, powered by the same hypervisor (Hyper-V) and System Center.

Jeff Woolsey comes out to talk new stuff.

Storage Replica: storage replication, storage agnostic, built into the box. Do replication between clusters or stretch clusters between sites. Demo: 2 nodes in NY and 2 nodes in NJ. Seamless failover with no data loss thanks to synchronous replication. A cloud witness gives you quorum with a virtual witness site. It doesn’t require SANs and it works with standalone servers. SIMPLE to set up.
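A sketch of what pairing the NY and NJ nodes might look like in PowerShell. The cmdlet and parameter names are assumptions based on later preview builds, and the server, replication group, and volume names are all made up:

```powershell
# Sketch: creating a synchronous Storage Replica partnership between
# two sites. All names are hypothetical; verify cmdlets on your build.
New-SRPartnership `
    -SourceComputerName "NY-SR1"      -SourceRGName "RG-NY" `
    -SourceVolumeName "D:"            -SourceLogVolumeName "L:" `
    -DestinationComputerName "NJ-SR1" -DestinationRGName "RG-NJ" `
    -DestinationVolumeName "D:"       -DestinationLogVolumeName "L:"
```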

image

System Center Advisor has come a long way:

image

Capacity planning allows you to project future usage based on empirical data and usage. Lots of information presented in a nice layout with lots of graphs. All powered by search. You can create personalized dashboards.

Manage your infrastructure using WAP to create Azure consistent clouds on premises using Windows Server and System Center.

Back to Jason Zander. He’s now going to pitch CPS. This is Microsoft-sold hardware running a pre-packaged on-premises cloud, based on Dell h/w with lots of custom work done on drivers and firmware. Only Fortune 100s need apply.

Half of the Microsoft hosting partners running the Cloud OS are in Europe.

On to hyper-scale. Over the last few decades, the industry has been defined by the scarcity of resources: we are always struggling to find more, squeeze in more, etc. What if that was flipped on its head and we could use a hyper-scale cloud with effectively infinite resources?

Australia went live yesterday – now there are 19 Azure regions. The immense scale of Azure makes them cheaper and we can deploy cheaper “infrastructure” and services. Over 30 trillion storage objects in Azure. Over 1.2 million SQL DBs. Over 140m WAAD users.

image

A reminder of the G-Series of large-memory VMs – the largest available on the public market, intended for data processing. Also announced: durable SSD storage in Azure Premium Storage with 50K IOPS and <1 ms read latency, intended for workloads that might otherwise have been on bare metal.

Azure Batch preview is a job scheduling service in the cloud at massive scale. Rich API and simple portal. Do batch jobs more quickly with massive elastic compute scale. You might use it to scale batch work out and in on a schedule to reduce VM costs.

1/5 of VMs in Azure are running Linux. CoreOS is supported now – a tiny, container-focused Linux OS.

Mark Russinovich, CTO of Azure, comes out to demo Azure Batch. He demos an open-source 3D rendering app called Blender. He has a basic model that he will ray trace to completion. He shows it before Batch and it’s like watching paint dry. Now he adds a plug-in to submit work to Azure Batch. The number of VM instances you want is entered in a dialog. He uses 8 x A8 compute-intensive VMs with 40 Gbps InfiniBand networking, submits the job, and tracks the job status via the plug-in. The rendering accelerates. We get a nice picture. He compares with the non-optimized job and it has barely started.

He now starts to talk about Docker containers on CoreOS. Docker is normally managed from Linux. We see Docker management from Windows for the first time:

image

He manages containers running in an Ubuntu VM. He creates a WordPress site from Windows, via the CoreOS management host, running in a container on the Ubuntu VM. It takes about 1 second to fire up.
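For context, this is roughly what driving a remote Docker daemon from a Windows docker client looks like. A sketch only – the host name and port are hypothetical, and it assumes the daemon in the VM has been configured to listen on TCP:

```powershell
# Hypothetical host/port; -H points the Windows docker.exe client at
# the Docker daemon running in the remote Linux VM.
docker -H tcp://ubuntu-docker:2375 pull wordpress
docker -H tcp://ubuntu-docker:2375 run -d --name blog -p 80:80 wordpress
docker -H tcp://ubuntu-docker:2375 ps   # confirm the container is up
```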

Now he moves on to Premium Storage. There are 3 VMs, one on standard storage. IOMeter is running in the VM to stress test its IOPS. It hits 500-600 IOPS (the minimum guarantee is 500). The second is a D-Series VM with Premium Storage. The same test gives 4,082 IOPS (a single premium disk). The 3rd VM has 16 striped disks on Premium Storage, appearing as a 16 TB volume. IOMeter gives 61,623 IOPS.
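One plausible way to build that striped 16 TB volume inside the guest is Storage Spaces across the 16 data disks. This is a sketch under that assumption – the demo didn’t show how the stripe was actually created, and the pool/disk names are made up:

```powershell
# Sketch: striping 16 premium data disks inside the VM with Storage
# Spaces. Assumption only - the demo may have used another method.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "PremiumPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks |
  New-VirtualDisk -FriendlyName "Striped" -ResiliencySettingName Simple `
    -NumberOfColumns 16 -UseMaximumSize |
  Get-Disk |
  Initialize-Disk -PassThru |
  New-Partition -UseMaximumSize -AssignDriveLetter |
  Format-Volume -FileSystem NTFS
```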

image

Microsoft is the only one of the big 3 cloud vendors with enterprise-grade, hyper-scale, and hybrid cloud. Gartner has Microsoft as the leading cloud vendor in 4 key areas:

image

Amazon only has 12 MPLS WAN networked locations for hybrid cloud. Google has none.

Azure Marketplace offers a huge collection of partner-provided and curated VM services. See names like Kemp, Oracle, SAP, IBM, Riverbed, Dell, Symantec, Kaspersky, Barracuda, and many more.

Enterprise Productivity

Users expect to be connected from anywhere with access to resources with no IT-created complications. Workers coming into the workforce work very differently than my generation. Touch, connectivity, collaboration, discoverability of information are their norm. BYOD .. that’s a cultural thing that affects the USA more, according to IDC.

We go back to device management, applications, and identity.

Some old info here on MDM. Sleepy time.

New Windows Intune updates arriving in the coming months. Manage Office mobile apps, MDM for Office 365 so you can manage docs and email and do selective wipe of O365 data on lost devices.

Office 365

Julia White is out to demo. She shows the new Azure AD Connect preview tool for linking on-premises AD to WAAD. The goal is to simplify a previously complicated process.

Azure AD app proxy allows you to bring all apps into a single control plane. She has a SharePoint on-prem app that she adds to Azure AD. Users now go to one place for authentication and authorization. Is AD MOVING (not just extending) to the cloud? The user logs into the app from an iDevice.

Feedback on Office for iPad is that IT wants to manage those apps and corporate data. Intune will enable this in the near future. White sets up a configuration policy. You can set it up so managed apps can only copy/paste to other managed apps, and you can manage deployment of managed apps. She makes the app available from the admin portal. Back to the iPad. She runs Outlook. There’s an email with an Excel attachment, and she opens it. The only app offered in the selection is Excel – that’s the only managed spreadsheet tool, so the unmanaged ones are not available. She tries to copy/paste into the Apple email tool – she cannot. But she can paste into Word because it is managed.

There’s a new O365 SDK for iPad apps. Devs can reach into O365 data from the Apple tablet.

MSFT is the only global provider to be approved for Article 29 pan European data privacy. O365 data is encrypted at rest. DLP is a feature of the E3 plan that allows you to protect against data leakage. Users can see it in action and understand the purpose of it – therefore no excuse for trying to work around it.

She brings up a report to see the number of overrides on an opt-in DLP policy. Too high, so she decides to change the policy. There’s a credit card DLP policy that’s being overridden. She modifies it and adds an action for overrides: an RMS policy that disables forwarding when the policy is overridden. If it’s overridden, a notification can be sent to auditors.

She creates a new email with an attachment. Straight away Office detects the DLP rule and notifies the user. The user overrides. The recipient gets the doc in an email – RMS prevents snipping, forwarding, printing, etc., so the credit card details are secure.

That’s a wrap, folks!

 

2014
10.24

This is the last of my news posts before TechEd Europe. Expect crazy flurries of news on Tuesday morning during the keynote. I’ll be live blogging so my updates will be there.

System Center

Azure

Miscellaneous

2014
10.23

Life has been crazy for me lately. I’ve spent nearly 100% of the last 3 months at work (and a lot at home) working on material for lots of different Azure educational events. And in addition to that, I’ve been preparing my session for TechEd Europe, which starts with the keynote at 08:30 in Barcelona (Spain, CET) on Tuesday morning.

I am there as a speaker this year, so I will not be in the media pit, and I might not even get into the keynote hall at all! I will be live blogging in any case, and I will also be attending lots and lots of sessions, mostly on Windows Server vNext Hyper-V, storage, and networking, and plenty of Azure too. I’ll be doing my best to live blog those sessions that I attend. I will also be covering virtualization and related content for Petri.com.

My tools of choice this time around are:

  • Toshiba KIRAbook (Windows 10 Technical Preview): The battery life is incredible in this machine.
  • OneNote: This is my tool of choice for note taking.
  • Windows Live Writer: I’m blogging and this is how I publish to my site.
  • Canon 1D mk IV and 24-70 L IS mk II: A good camera is useful.

Of course, I am presenting, so that’s a big focus point for me. My session is all about squeezing the most out of Hyper-V. I’ve got information about features of Hyper-V that you might not know about or have been afraid to try. I’ll dive into some of the mechanisms used to enable some of those amazing demos of the recent past. And I have LOTS of demos (10 as it stands at the moment).

So come along to CDP-B329 From Demo to Reality: Best Practices Learned from Deploying Windows Server 2012 R2 Hyper-V – it’s a fine big room with 1100 seats to fill!!!! That’s pressure, but it’s nothing compared to competing in Speaker Idol (5 minutes is hell!).

2014
10.23

It’s the calm before the storm of announcements from TechEd Europe 2014.

Windows Server

Desktop

Azure

Events

2014
10.22

Hyper-V

Windows Server

Azure

System Center

Microsoft Partners

Miscellaneous

2014
10.20

I tuned in a minute or two late to see Satya Nadella rehashing his cloud first, mobile first thing that has started to bore people. Substance, not mantras, please.

image

It’s the same small room in San Francisco as the non-streamed Windows 10 announcement. He starts off talking about Microsoft’s cloud being the most complete cloud:

  • Productivity with CRM Online and Office 365
  • Hyper scale cloud with hybrid and public and private cloud offerings

image

 

He starts to talk about the San Francisco and San Jose governments that adopted Office 365 to support mobile workers. Not just big enterprise, but also the government sector and small businesses. NBC does encoding and live streaming of events via Azure. German company ThyssenKrupp manages over 1 million elevators using a service they built on Azure.

Azure compute power and research tools are being made available to Ebola researchers.

Paul Smith stores are using Hyper-V and are using ASR for DR. Datacenters are in a constant purchase cycle for storage – here’s the push on a non-selling StorSimple (it’s virtually an EA benefit that customers pay the shipping/import costs of – and pay for the Azure storage).

image

At this point, there is nothing new here. This is like a marketing operation for the media.

Scott Guthrie comes out wearing red (read that as: announcements coming). The G-Series of huge VMs is announced. A new Premium Storage account offering is announced with much greater scalability and performance:

image

This is unparalleled scalability in the cloud. This is stuff that on-premises VMs cannot do.

He goes on to talk about on-premises and hybrid solutions, supporting any infrastructure including bare metal, Linux, and vSphere:

image

Microsoft provides the only consistent experience across public and private cloud, thanks to Windows Azure Pack.

Here comes a new hardware plus software solution called Cloud Platform System to bring Azure to your datacenter (San Diego codename). You get WAP, management APIs (REST) and hypervisor, similar to Azure. This is a partnership with Dell, available starting in November. This will be a flop. Dell are clueless about their current massive portfolio, and they usually prefer to sell Dell-owned management products over System Center, not to mention their general lack of knowledge of Hyper-V.

Now he talks about Docker to enable greater densities and to allow app mobility to the cloud.

CoreOS Linux is coming to Azure, with an optimized memory footprint. It’s the fifth Linux distro on Azure.

A dude from Cloudera comes on stage. Cloudera is announced on Azure. Here’s a demo of the new Azure preview portal running on Windows 10. There’s a Cloudera Enterprise offering in Data Services, etc.

And that was that. Event over. I bet the media were glad that they travelled across a continent for all that.

2014
10.17

VMware posted this article where academic research has found a vulnerability with Transparent Page Sharing (TPS). Apparently they can use this to determine the “private” AES encryption key of another virtual machine. Woops … another “breakout attack” for VMware. I’m still waiting on the first one for Hyper-V.

TPS is one of those features that vFanboys cling to when attacking Hyper-V Dynamic Memory. Now VMware is turning it off by default (starting Q4 2014 for ESXi 5.1, and later for other versions). Hmm, this case raises questions about the security design of vSphere.

I agree with VMware that the vulnerability is impractical in terms of usefulness to an attacker. But what if you could use TPS to get the private SSL key of an application server in a multi-tenant cloud, and then use that to launch man-in-the-middle attacks? That would be a serious threat.

Choose your hypervisor carefully – breakout attacks are BAD.

I wonder what fresh hate will be vomited in my direction by the vFanboys :D Thanks to Flemming Riis (@FlemmingRiis) for the heads up.

2014
10.17

This is the first of these since the 8th – my life consists of constant event/tradeshow/conference preparation at the moment so there’s little time for anything else.

Hyper-V

Windows Server

clip_image001

Azure

System Center Data Protection Manager

Microsoft Intune

Office 365

Security

  • Signed Malware = Expensive “Oops” for HP: HP is revoking a digital certificate because the cert was used to sign malware in 2010. Nice one, HP!
  • And every retail chain in the US has been hacked. At least that’s what it seems like. Maybe the US banks will join the rest of us in the 21st century?

Miscellaneous

2014
10.16

I am in the midst of finishing off my presentation for TechEd Europe 2014, CDP-B329 From Demo to Reality: Best Practices Learned from Deploying Windows Server 2012 R2 Hyper-V.


The session drills into all the things that make previous big announcements & demos possible, and talks about those lesser known features that solve real problems. I’m covering a lot of stuff in this session. I submitted the draft deck a while ago, thinking that I’d have to cull a lot of it to fit within the limit of 75 minutes. Well, I did my first timed rehearsal tonight and I have a bit of wiggle room, maybe to even add in some more demos.

Speaking of which … my demos Open-mouthed smile Fast networking, good host hardware, and LOTS of PowerShell. All my demos are driven by PowerShell. Don’t think “ugh, boring!”. Nope. It’s all very visual, I assure you! There are ways, means, and tricks to show you the goodies even with a scripting language! Heck! PowerShell is even a part of the product that I want to demo! Right now I have 9 demos to show, and that might expand.

If you are coming to TechEd then I hope to see you at CDP-B329. There was talk of moving me to the timeslot of doom on Friday at 08:30 Sad smile but my session is now confirmed for Wednesday at 10:15 in Hall 8.0 Room A2 (it seats 1174 people!!!) – hit the Schedule Builder and check my session (CDP-B329) if it sounds interesting to you.

And by the way – a huge THANK YOU to Didier Van Hoye (aka @workinghardinit at http://workinghardinit.wordpress.com/) for his help. He helped me sort out some problems in 2 of my demos. Didier is a classic example of an MVP working in the community.

2014
10.16

It’s a reuse of the Office partner training label, but it’s simple and I like it: Microsoft Ignite. Hopefully my Speaker Idol win carries over, I don’t screw up in Barcelona, and I get to speak there!

image

This is bigger than MEC, TechEd, and the other tech conferences being merged:

  • Azure
  • Exchange
  • Intune
  • Lync
  • Office 365
  • Project
  • SharePoint
  • SQL Server
  • Surface
  • System Center
  • Visual Studio
  • Windows
  • Windows Server
  • And more

At the same prices as TechEd, this is a much higher-value ticket because of the greater breadth of content that you can absorb.

2014
10.08

A new KB article by Microsoft solves an issue where a Windows 8.1 Client Hyper-V or Windows Server 2012 R2 Hyper-V virtual machine backup leaves the VM in a locked state.

Symptoms

Consider the following scenario:

  • You’re running Microsoft System Center Data Protection Manager (DPM).
  • You start a backup job in DPM to back up Hyper-V virtual machines (VMs).

In this scenario, DPM sometimes leaves the VM stuck in the backup state (locked).

A supported hotfix is available from Microsoft Support. To apply this update, you must first install update 2919355 in Windows 8.1 or Windows Server 2012 R2.

2014
10.08

Welcome to today’s cloud-heavy Microsoft news compilation.

Windows Server

clip_image001

Windows Client

Azure

  • Introducing the Azure Automation Runbook Gallery: The time it takes to create functional, polished runbooks is a little faster thanks to the new Azure Automation Runbook Gallery.
  • More Changes to Azure by Scott Guthrie: Including support for static private IP support in the Azure Preview Portal, Active Directory authentication, PowerShell script converter, runbook gallery, hourly scheduling support.
  • Microsoft Certification Test Tool Preview for Azure Certified: The Microsoft Certification Test Tool for Azure Certified is designed to provide an assessment of compliance to technical requirements as part of the Azure Certified program. The test tool includes a wizard style automated section and questionnaire section to assess characteristics of a Virtual Machine image running in Microsoft Azure and generate results logs. More information on the Azure Certified program is available.
  • Announcing Support for Backup of Windows Server 2008 with Azure Backup: Due to feedback. Please note that this is x64 only and that there are system requirements.
  • Hybrid Connection Manager ClickOnce Application: ClickOnce installer for the Hybrid Connection Manager.
  • D-Series Performance Expectations: The new D-Series VMs provide great performance for applications needing fast, local (ephemeral) storage or a faster CPU; however, it’s important to understand a little about how the system is configured to ensure you’re getting an optimal experience.
  • Cloud App Discovery – Now with Excel and PowerBI Support: One of the top customer requests was to be able to perform analytics on the data collected in tools like Excel and PowerBI. Now you can take cloud app discovery data offline and explore and analyze the data with tools you already know–Excel and PowerBI.
  • A new region will open in India by the end of 2015: It makes sense; there are 1 billion people and some big corporations there.
  • Microsoft Azure Speed Test: Which Azure region is closest to you? Remember that Internet geography is different from the planet’s geography. For example, where I work is a few miles from Europe North (Dublin), but the test shows me that Europe West provides me with lower latency (beaten, obviously, by CDN). My own testing using Azure Traffic Manager with geo-dispersed websites has verified this.

clip_image002

Office 365

Miscellaneous

2014
10.06

I love my Lenovo Yoga 8, an 8” Android tablet. It’s what keeps me sane while travelling, it’s my bedside reading machine, and it’s my “couch” machine for those evenings when I’m “meerkatting” in front of the TV.

image

That’s why I was excited to see a story on WPCentral suggesting that Lenovo might launch a Windows 8.1 version of one of the Yoga tablets (there is also a 10” version).

The Android tablet is ARM-based – a low-power ARM CPU. If Lenovo is releasing a Windows tablet in this form factor then I hope it is Intel-based and not ARM; ARM would require the soon-to-be-extinct Windows RT.

The original story on HDBlog.it (in Italian) thinks that this might be based on the 10.1” HD+ tablet, a larger version of my 8” entertainment and consumption machine, also with crazy-long battery life and a built-in mini-kickstand.

WPCentral says that Lenovo has an announcement on Windows and Android tablets on October 9th. We won’t have long to see if this rumour is a fact.

2014
10.06

The big news today is that HP is “planning” to split. No, not leave, but divide into two.

Hyper-V

Windows Server

Office 365

Miscellaneous
