2011
05.28

Now that we have support for CentOS guests on Hyper-V, how about monitoring them?  I noticed some retweets by Carsten Rachfahl of some posts by @OpsMgr on Twitter on exactly that subject.

With a bit of searching I also found an old post on the subject of installing the RedHat agent and importing a modified version of the RedHat management pack.

The above two posts use a management pack that is shared on the community cross platform extensions site.

None of this is supported in any way, but you’re probably not too worried about support if you’re using CentOS anyway (I guess).  This will extend the power of OpsMgr to your free Linux distro.

2011
05.25

When building my new demo laptop environment, I wanted a way to:

  1. Grant Internet access to VMs
  2. Give the VMs access to communicate with the host OS
  3. Keep VMs off of the office network – because I will do some messy stuff

Usually we use NIC bridging or Windows routing to get Hyper-V VMs talking on the wireless NIC because Hyper-V virtual networks cannot be bound to a wifi NIC.  But this would put my VMs on the office network which is a flat single-VLAN network.  That breaks requirement #3.

[Diagram: VMs on an internal Hyper-V virtual network, reaching the Internet via a proxy on the parent partition]

My solution was as shown above.  I created an internal virtual network.  That allows the VMs to talk to the parent partition (host OS) without physical network access.  To give the VMs Internet access (for Windows updates and activation), I installed a lightweight proxy on the parent partition.  Users on the VMs are configured to use the proxy, thus giving the VMs the required Internet access.  I can configure the proxy to use either the wifi or wired NIC on the laptop for outbound communications.  This solution meets all 3 of my requirements.
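As a sketch of the guest-side half of this (assuming a hypothetical lightweight proxy listening on the host's internal-network address, say 192.168.10.1:3128), you can point the WinHTTP stack (used by Windows Update and activation) at the proxy from an elevated prompt inside each VM:

netsh winhttp set proxy proxy-server="192.168.10.1:3128" bypass-list="<local>"

Browsers for interactive users get the same proxy via Internet Options or Group Policy, and "netsh winhttp show proxy" confirms the setting took.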

2011
05.25

IBM Up In The Clouds

IBM rolled out their Smart Cloud Enterprise public cloud offering earlier this Spring.  It is based on RedHat KVM virtualisation.  Right now, it offers support for RHEL, SLES, and Windows Server 2003/2008 guest operating systems.

  • Databases: You can have anything as long as it is DB2 or Informix, both from IBM.
  • Monitoring: Tivoli .. yay?
  • Application Servers (such as IIS or SharePoint): You can have anything as long as it is IBM WebSphere
  • Business Intelligence: You can have anything as long as it is IBM Cognos.

Hmm.

OK, I’m sure the IBM support will be amazing.  Oh? … Yeah, I nearly forgot.  Of course, big government departments and corporations will lap this up and IBM will make a lot of money.

Right now it is available to USA customers only, according to the website.

2011
05.24

My role at work demands that I be able to demo lots of different sorts of scenarios, e.g. SBS 2011 + SCE 2010, or Hyper-V + System Center, or … well, lots of stuff.  All these scenarios are pretty much mutually exclusive in a single portable lab (my laptop, aka The Beast).

So my solution is as follows:

  • Configure the laptop to boot from a Windows Server 2008 R2 VHD
  • Enable AD and Hyper-V (not recommended in production)
  • Configure a standard AD, policies, and Hyper-V setup
  • Install drivers, patches, and so on
  • Shut it down
  • Copy the (boot from) VHD to a safe location

Now I can quickly create a new lab on this Windows 7 laptop in a matter of minutes instead of hours:

  • Boot up into Windows 7 (from C:)
  • Copy the above VHD – this will either wipe/replace the configured VHD, or I’ll save the original VHD first to keep that lab

This gives me a new host and AD in the time it takes to copy a 10 GB VHD.  That means I can deploy SCE 2010 in one lab (running on VHD A) or the full dynamic datacenter in another lab (VHD B), without the need to totally rebuild the entire host OS and go through the time-consuming process of installing drivers on a W2008 R2 laptop.
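The swap itself is a single file copy.  A minimal sketch, assuming hypothetical paths (D:\Master holding the pristine lab VHD, C:\VHDs\W2008R2Lab.vhd being the boot-from VHD), run from Windows 7 while the server OS isn't booted:

copy /Y D:\Master\W2008R2Lab.vhd C:\VHDs\W2008R2Lab.vhd

On the next reboot into the VHD, the lab is back to its factory-fresh state.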

2011
05.18

I’ll be presenting at this MicroWarehouse/Microsoft Ireland event on Windows Server 2008 R2 Service Pack 1.  The focus will be on virtualisation/Hyper-V.  So that means Dynamic Memory.  I’ll talk about how it works, but more importantly, how you should implement it with the various workloads on your compute cluster.  DM is huge for VDI on Hyper-V (a core component of the Microsoft/Citrix VDI v-alliance).  So is RemoteFX.  And I’ll be talking about that too, as well as showing it being set up and configured on my laptop, “the beast”.  RemoteFX is a hot topic internationally because it opens up some interesting opportunities in server-based or centralised computing.  You’ll see that if you attend – there won’t be a recording/webcast.

Intended audience: IT infrastructure architects, implementation consultants, engineers, virtualisation administrators.

The agenda is:

  • Hello W2008 R2 Service Pack 1 (SP1)
  • Deploying SP1
  • RemoteFX
    – Demo
  • Dynamic Memory
    – Demo
    – Guidance
  • SCVMM 2008 R2 SP1
  • If there’s time: ARP Spoofing Prevention

2011
05.17

The conspiracy theories started a few weeks ago when Veeam started to advertise on my (mainly MS infrastructure, featuring MS Hyper-V) blog.  Then we saw a countdown clock for a big announcement on the first day of TechEd USA 2011.  1 hour into the keynote, Veeam made their announcement:

“Veeam Software, innovative provider of VMware data protection, disaster recovery and VMware management solutions for virtual datacenter environments, today announced at Tech·Ed North America that it is adding support for Windows Server Hyper-V and Microsoft Hyper-V Server to Veeam Backup & Replication, the leading data protection solution for virtual environments used with more than 1.5 million virtual machines (VMs) worldwide”.

Veeam Backup & Replication for Hyper-V offers:

  • 2-in-1 backup and replication for Hyper-V: Veeam’s solution includes replication, which provides near-continuous data protection (near-CDP) and enables the best possible recovery time and recovery point objectives (RTOs and RPOs).
  • Changed block tracking for Hyper-V: Veeam’s new hypervisor support includes technology for changed block tracking to enable fast, frequent and efficient backup and replication of all VMs, including those running on Cluster Shared Volumes (CSV).
  • Built-in deduplication and compression: Included at no extra charge, these capabilities minimize consumption of network bandwidth and backup storage.

Veeam is a name that is almost synonymous with VMware.  Many would consider that if you buy VMware then you buy Veeam.  With this new offering for Hyper-V, and with cluster support, you have to think that more than a few Hyper-V architects are considering the wider set of options that are now available to them.

2011
05.17

According to Microsoft, you can expect Service Pack 1 for Office 2010 and SharePoint 2010 to RTM at the end of June.  “Initially, Service Pack 1 will be offered as a manual download from the Download Center and from Microsoft Update, and no sooner than 90 days after release, will be made available as an Automatic Update”.

Changes include:

  • Outlook fixes an issue where “Snooze Time” would not reset between appointments.
  • The default behavior for PowerPoint "Use Presenter View" option changed to display the slide show on the secondary monitor.
  • Integrated community content in the Access Application Part Gallery.
  • Better alignment between Project Server and SharePoint Server browser support.
  • Improved backup / restore functionality for SharePoint Server
  • The Word Web Application extends printing support to “Edit Mode.”
  • Project Professional now synchronizes scheduled tasks with SharePoint task lists.
  • Internet Explorer 9 “Native” support for Office Web Applications and SharePoint
  • Office Web Applications Support for Chrome
  • Inserting Charts into Excel Workbooks using Excel Web Application
  • Support for searching PPSX files in Search Server
  • Visio Fixes scaling issues and arrowhead rendering errors with SVG export
  • Proofing Tools improve spelling suggestions in Canadian English, French, Swedish and European Portuguese.
  • Outlook Web Application Attachment Preview (with Exchange Online only)
  • Office client suites using “Add Remove Programs” Control Panel, building on our work from Office 2007 SP2

2011
05.17

I used to work for a quite “big” hosting company in Dublin that claimed 1/3 of the Irish internet footprint was in their infrastructure.  Over half of the servers we had in that infrastructure ran Linux, in particular the CentOS distribution.  It was liked because it’s a relation of RedHat and … well … it’s free … and most hosting customers are pretty tight with their wallets.  I’d really never heard of CentOS before that.  As a hosting company, we weren’t unusual in choosing CentOS for our Linux platform.  In fact, it’s the norm because it is free.

We’ve had growing support for Linux on Hyper-V for a while but that was restricted initially to SUSE SLES (Novell, a partner of MSFT, and very unpopular in the market because of the NetWare abandonment) and RedHat RHEL (popular in the enterprise because you have to pay for it).

Over the last couple of years CentOS has come up more and more in conversations.  I remember one very large “RFI” (a first step in the tender process) for a very large cloud (virtualisation environment) for a particular closed industry.  In my last job we started reading that document with great anticipation – thinking about the huge numbers.  But then our hearts sank: CentOS support was required.  That ruled us out at the time.  I know that other IT services companies were feeling the same way because I received a number of calls on the subject of Hyper-V/Linux support.  I also know what official opinions in certain places were: this was no longer a Hyper-V opportunity and VMware would win it.  CentOS may have run perfectly with the Linux integration components, but the lack of an official support statement was impacting on potential sales & installations.  And this is a huge factor in the decision making process for hosting (VPS/cloud/whatever-marketing-label-is-popular-at-the-time) companies, who do favour CentOS over the paid-for Linux distros that were previously the only supported open source OSs on Hyper-V.

But now we do have support for CentOS, according to an announcement on the Openness @ Microsoft blog.  Now more enterprises and hosting companies can consider Hyper-V for their virtualisation and/or private/public cloud needs.  There are no specifics such as version support, or how Microsoft will support an open source OS with no company being responsible for it.  Hopefully that will emerge in the coming days.

One remaining gap is the System Center story.  OpsMgr has made great strides in adding support for SLES and RHEL.  Unfortunately, those releases haven’t been in sync with Hyper-V’s, so the common denominator of supported versions is quite small.  Hopefully OpsMgr will add equal CentOS support quite soon.  Let’s face it: the business really doesn’t care about the servers; they care about the services running on them, and quite a lot of those run on CentOS.

EDIT#1

I’ve been informed that CentOS 5.2 through 5.6 are supported now.

2011
05.16

Yusuf Öztürk has released a handy-looking tool on his blog for setting up Linux virtual machines.  Its feature list:

1) Unattended IP, Hostname and DNS configuration for Linux VMs.
2) Automatic Linux integration components installation.
3) Multi Distro Support: Debian, Ubuntu, Centos, Fedora, Redhat and Suse!
4) Automatic CPanel installation for Redhat and Centos
5) Linux VM Template support (Use Skip for EnableLIC switch)
6) Hyper-V support! You don’t need SCVMM to use this script.
7) Multiple Hyper-V and SCVMM host support.
8) Automatic Emulated NIC to Synthetic NIC support.
9) No need for an internet connection (SSH access etc.) or additional changes on the VM.
10) Custom Answer File support! You can execute your own scripts.

You can download the tool from his blog.  Well done, Yusuf!

Technorati Tags: ,,,
2011
05.16

I’ve talked about, presented about, and written about this topic quite a bit.  There’s already a TechNet article on the topic, but Microsoft has also issued a document called Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V.

“The purpose of this paper is to provide guidance and best practices for deploying Microsoft Exchange Server 2010 in a virtualized environment with Windows Server 2008 R2 Hyper-V technology. This paper has been carefully composed to be relevant to organizations of any size”.

2011
05.16

As of today:

“Combining Exchange 2010 high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions that will move or automatically failover mailbox servers that are members of a DAG between clustered root servers, is now supported”.

In other words, the Exchange team heard us, and they’ve added support for installing DAG members (on Exchange 2010 SP1) on a highly available virtualisation cluster.  That will simplify many virtualised Exchange installations.

Also:

“The Unified Messaging server role is supported in a virtualized environment”.

2011
05.14

I was asked today about using (W2008 R2 SP1 Hyper-V) Dynamic Memory and Forefront Threat Management Gateway (TMG).  To be honest, I hadn’t looked at TMG on virtualisation before – Microsoft has a huge product catalogue.

I searched, and found a long and detailed article on the subject.  The guidance starts with understanding the network role of the TMG installation in question.  That means understanding workloads (network and server) that the VM will be handling.  This leads to some general TMG configurations, which will obviously affect resource requirements and performance.  We are reminded that the TMG VM will be sharing a host with other VM workloads, and therefore a spiking TMG VM could affect resource utilisation of other VMs.  Consider this when sizing hosts or placing virtual machines.  The TMG group recommends doing a 2 week proof-of-concept or assessment to gather empirical data for this sizing process.  TMG will eat CPU and memory.

Speaking of memory, a SQL back end is used for logging.  This is normally an Express install.  This edition (at the moment) doesn’t have the ability to deal with expanding memory such as Hyper-V Dynamic Memory.  The minimum RAM for TMG is 2 GB, and SQL Express has a “one GB memory limit for the buffer pool”.  If you decide you must enable DM on your TMG VM(s), then maybe you should set the startup memory setting for a TMG VM to 2048 MB.  That will leave SQL Express in a healthy state in terms of memory (knowing how much to take at startup) and will ensure that TMG always has the minimum required.  You can set your maximum memory setting to what you find is required after your assessment.

Physical networking is discussed.  Any VLANing or DMZ/edge network designs for a physical installation should still apply.  Don’t redesign or compromise the network design to suit virtualisation; do redesign the virtualisation hosts to suit the network and security requirements.

Ideally, a host used for providing capacity to network security VMs should not run other VM roles, e.g. you ideally won’t mix Exchange VMs and TMG VMs on the same host.  That sounds great in mid-size/enterprise environments, but it’s a bit pricey for SMEs.

There’s lots of advice on lockdown policies, patching, and enabling BitLocker on the parent partition.  And of course, only provide access to the parent partition as and when it is (business critically) required.

An interesting one which might answer many forum questions: the TMG group recommends that internal and external virtual NICs should not share virtual switches.  That means you should ideally use different physical NICs for those networks, and maybe use different teamed/virtual NICs created by your NIC vendor’s software (e.g. Broadcom, HP NCU, etc.).

There is a reminder to disable everything except the virtual switch protocol in the parent partition NICs that are used for external virtual switches.

You should have a way to log into or manage/monitor the parent partition separately from the virtual machine workloads.  In other words, have a dedicated parent partition physical network card that is not used by virtual networks.  This will allow you to manage the parent partition and its other workloads if something like a DoS attack happens and the internet-facing NIC for the TMG VM is being hammered.

For your virtual machine disks, it is recommended that you place the OS and SQL logs on different drives.  If you are using host server internal disks then you’ll need to create different LUNs.  Things aren’t that simple on a SAN where virtual disk systems are used, because different LUNs are actually striped across the same disks in the disk group.  I’d consider a CSV with all VHDs on there.  And then you get into the normal CSV/backup design decision making process.  Remember to keep IOPS requirements (from the assessment) in mind.

The article ends with a discussion of various virtual networking designs and how they will impact on the performance of your TMG VM.

2011
05.13

Let’s pull a Doctor Who and travel back in time to 2003.  Odds are when you bought a server, and you were taking the usual precautions on uptime/reliability, you specified that the server should have dual power supplies.  The benefit of this is that a PSU could fail (it used to be #3 in my failure charts) but the redundant PSU would keep things running along. 

Furthermore, an electrician would provide two independent power circuits to the server racks.  PSU A in each server would go into power circuit A, and PSU B in each server would go into power circuit B.  The benefit of this is that a single power circuit could be brought down/fail but every server would stay running, because the redundant PSU would be powered by the alternative power circuit.

[Diagram: two hosts, each with dual PSUs, cabled to independent power circuits A and B]

Applying this design now is still the norm, and is probably what you plan when designing a private cloud compute cluster.  If power circuit A goes down, there is no downtime for VMs on either host.  They keep on chugging away.

Nothing is free in the computer room/data centre.  In fact, everything behind those secured doors costs much more than out on the office floors.  Electrician skills, power distribution networks, PSUs for servers, the electricity itself (thanks to things like the UPS), not to mention the air conditioning that’s required to keep the place cool.  My experience in the server hosting industry taught me that the biggest cost concern was electricity.  Every decision we made had to consider electricity consumption.

It’s not a secret that data centres are doing everything that they can to eliminate costs.  Companies in the public cloud (hosting) industry are trimming costs because they are in a cutthroat business where the sticker price is often the biggest decision making factor for the customers when they choose a service provider.  We’ve heard of data centres running at 30°C instead of the traditional 18-21°C … I won’t miss having to wear a coat when physically touching servers in the middle of summer.  Some are locating their data centres in cool-moderate countries (Ireland & Iceland come to mind) because they can use native air without having to cool it (and avoiding the associated electrical costs).  There are now data centres that take the hot air from the “hot aisle” and use it to heat offices or water for the staff in the building.  Some are building their own power sources, e.g. solar panel farms in California or wind turbines in Sweden.  It doesn’t have to stop there; you can do things at the micro level.

You can choose equipment that consumes less power.  Browsing around on the HP website quickly finds you various options for memory boards.  Some consume less electricity.  You can be selective about networking appliances.  I remember buying a slightly higher-spec model of switch than we needed because it consumed 40% less electricity than a lesser model.  And get this: some companies are deliberately (after much planning) choosing lower capacity processors based on a couple of factors.

  • They know that they can get away with providing less CPU muscle.
  • They are deliberately choosing to put fewer VMs on a host than is possible because their “sweet spot” cost calculations took CPU power consumption and heat generation costs into account.
  • Having more medium capacity hosts works out cheaper for them than having fewer larger hosts over X years, because of the lower power costs (taking everything else into account).

Let’s bring it back to our computer room/data centre where we’re building a private cloud.  What do we do?  Do we do “the usual” and build our virtualisation hosts just like we always have built servers: each host getting dual PSUs on independent power circuits just as above?  Or do we think about the real costs of servers?  I’ve previously mentioned that the true cost of a server is not the purchase cost.  It’s much more than that, including purchase cost, software licensing, and electricity.  A safe rule of thumb is that if a server costs €2,000 then it’s going to cost at least another €2,000 to power it over its 3-year lifetime.

So this is when some companies compare the cost of running fully specced and internally redundant (PSUs etc.) servers versus the risk of having brief windows of downtime.  Taking this into account, they’ll approach building clusters in alternative ways.

In the first diagram (above) we had a 2 node Hyper-V cluster, with the usual server build including 2 PSUs.  Now we’re simplifying the hosts.  They’ve each got one PSU.  To provide power circuit fault tolerance, we’ve doubled the number of hosts.  In theory, this should reduce our power requirements and costs.  It does double the rack space, license, and server purchase costs, but for some companies this is negated by reduced power costs; the magic is in the assessment.

But we need more hosts.  We can’t do an N+1 cluster.  This is because half of the hosts are on power circuit A.  If that circuit goes down then we lose half of the cluster.  Maybe we need an N+N cluster?  In other words if we have 2 active hosts, then we have 2 passive hosts.  Or maybe we extend this out again, to N+N+N with power circuits A, B, and C.  That way we would lose 1/3 of a cluster if the power goes.

Increasing the number of hosts to give us power fault tolerance gives us the opportunity to spread the virtual machine loads.  That in turn means you need less CPU and memory in each host, in turn reducing the total power requirements of those hosts and reducing the cost impact of buying more server chassis.

[Diagram: a four-node cluster where each host has a single PSU, with hosts split between power circuits A and B]

The downside of this approach is that if you lose power to PSU A in Server 1, its VMs will stop executing and fail over to Servers 3 or 4.

I’m not saying this is the right way for everyone.  It’s just an option to consider, and run through Excel with all the costs to hand.  You will have to consider that there will be a brief amount of downtime for VMs (they will failover and boot up on another host) if you lose a power circuit.  That wouldn’t happen if each host has 2 PSUs, each on different power circuits.  But maybe the reduced cost (if really there) would be worth the risk of a few minutes downtime?

2011
05.12

I’ve either completely forgotten this application compatibility solution or it escaped my attention.  RemoteApp for Hyper-V is a VDI solution that allows you to publish apps from the following VDI VM guest operating systems to end users via RDP:

  • Windows XP SP3: Professional
  • Windows Vista SP1 and above: Enterprise and Ultimate
  • Windows 7: Enterprise and Ultimate

You can set it up in a “standalone” format where you manually create VMs, RDP files, and configure end user machines.  Alternatively you can create a full RDS VDI farm, using the RD Connection Broker. 

This product isn’t as manageable as a normal RDSH (session host) RemoteApp solution but it sure seems like a better (manageable) way to do appcompat than XP Mode (which is cheaper), thanks to the centralisation of VMs that can be easily deployed via SCVMM/SCE/or Hyper-V import/copy.

2011
05.12

PubForum Dublin 2011 started today with a “pre-con” master class on Windows Server 2008 R2 Remote Desktop Services, focusing on VDI.  The speakers are Christa Anderson and Kristin Griffin (who contributed but couldn’t be here), both of whom wrote the Windows Server® 2008 R2 Remote Desktop Services Resource Kit, plus Alex Yushchenko (RDS MVP, and the organiser), and me.

Christa is an RDS program manager in Microsoft and therefore is a fountain of knowledge.  She’s speaking right now.  I’m sitting here, listening, and making the most of the learning opportunity.

I’ll be doing a 2 hour brain dump on Hyper-V/SCVMM/backup in a VDI context.  My slide deck is monstrous.  I’ve had to drop a checkpoint into it so I can see how I’m doing for time.  I’m not hitting all possible subjects, but I am focusing on what I think is critical, and some of the usual “pits” that I find people fall into.

Tomorrow I have a Microsoft Private Cloud session.  That’ll be funny; VMware will be in the next room talking about their solution.

And on Friday I have a 15 minute session.  I’d thought about doing an update session on Dynamic Memory but that is being covered in another 1 hour session.  And I thought about CSV/backup but I’m doing that today and it requires more than 15 minutes.  I think I’ll do a BYOQ session combined with chalk’n’talk.

2011
05.10

I’ve started in my new job and I’m in “personal” hardware heaven.  I’ve a snazzy HP EliteBook 8740w with an i7 processor on the way, with a 256 GB SSD, a 512 GB hybrid (SSD cache) drive (in a DVD slot caddy), an additional 8 GB RAM (to bring the laptop to half its 32 GB RAM potential), and a 12 cell battery.  It’s going to be a mutha for demos.

I also have a desktop machine.  That’ll allow me to double up my virtual load at peak usage, but it is intended mainly as the office machine while I work in the laptop lab.  The good news is that it’s an i5 CPU PC, with 12 GB RAM.

So that means I need to start Hyper-V building.  The plan is to dual boot with Windows 7 on both machines.  I could go with external disks but that means carrying stuff.  I’ll have enough internal storage so the plan is to boot from VHD.  This means the server OS will be installed in a VHD. 

Now I could go installing an OS in a VHD.  Yawn!  Time consuming.  Alternatively I could use WIM2VHD.  Note that you must install the WAIK for Windows 7 to provide the prerequisite tools for this utility to work.  I’ve taken the install.wim file from the Windows Server 2008 R2 media, and run WIM2VHD against it:

CSCRIPT WIM2VHD.WSF /WIM:C:\install.wim /SKU:SERVERENTERPRISE /VHD:C:\W2008R2Ent.vhd

That will create a VHD file with an “installed” operating system.  This works because the Windows installer consumes files from a WIM file on the ISO/DVD that is a file-based image, making it easy to read, consume, and manipulate.  I could have customised the install by adding an unattend file switch:

/UNATTEND:C:\unattend.xml
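Putting the two together, the full command with an answer file would look like this (same file locations as above):

CSCRIPT WIM2VHD.WSF /WIM:C:\install.wim /SKU:SERVERENTERPRISE /VHD:C:\W2008R2Ent.vhd /UNATTEND:C:\unattend.xml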

Now I can configure my PC to boot from this VHD.  First step: attach the VHD.  You can do this from an elevated command prompt.

diskpart
select vdisk file=c:\W2008R2Ent.vhd
attach vdisk
list volume
select volume <volume_number_of_attached_VHD>
assign letter=v
exit

This attaches the VHD file that you have created from the install.wim file using WIM2VHD.  It then assigns the drive letter V (or whatever is free for you) to that VHD.  You can see this in Disk Manager.
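And if you ever want to unmount the VHD again without rebooting, the reverse is just as quick:

diskpart
select vdisk file=c:\W2008R2Ent.vhd
detach vdisk
exit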

The following commands will now configure your PC to add an additional boot option to allow your machine to dual boot with Windows 7 on the C: drive (default) and Windows Server from the VHD (just added):

cd v:\windows\system32

bcdboot v:\windows

Now your PC can dual boot.  All that remains is to configure the server with Hyper-V, etc.
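One optional tweak: the new entry gets a generic description in the boot menu.  Assuming the new VHD entry is the current default (as it is here), you can give it a friendlier name from an elevated prompt:

bcdedit /set {default} description "Windows Server 2008 R2 (VHD boot)"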

[Screenshot: choosing the default operating system under Advanced System Settings (Startup and Recovery)]

When you reboot, a boot menu appears.  The new Windows VHD entry will be the default, but you can change that, as shown above, in Advanced System Settings.

The VHD will boot up, and commence the mini-setup wizard.  The OS is customised, boots up, and you can log into it, install drivers, enable Hyper-V, and so on.  I’ve got this working on my PC.  Next up will be the laptop.

I think this is a great way to get a Hyper-V host up and running.

Oh and it doesn’t end there …

You may have heard that SCVMM 2012 can deploy Hyper-V hosts.  It does this by deploying a VHD and configuring the host hardware to boot from that VHD.  Where does that VHD come from? Maybe (I haven’t tried it yet because I don’t have the required hardware) it could come from WIM2VHD and an install.wim? 

Comments on a postcard …

2011
05.09

On May 20th, I will be presenting the 4th in the series of these events.  This event, focusing on what Windows Server 2008 R2 Service Pack 1 brings to Hyper-V, will be co-sponsored by Microsoft Ireland and MicroWarehouse Ltd.  You can register now.

Content will focus on RemoteFX and Dynamic Memory.  As you may have gathered from the last couple of months, I probably have a lot to say about the latter in this 3 hour long event.  I’ll also try to squeeze in time for some other topics.

2011
05.09

Later this morning, I start a new job as a technical sales lead at MicroWarehouse Ltd.  This company is one of the biggest software distributors (and lots of other stuff) in Ireland and deals with a lot of Microsoft partners on a daily basis, selling them software and helping them with client engagements.

My role will be very much like that of a Partner Technical Advisor (PTA) in Microsoft.  In fact, the job spec is identical.  I will be working with our customers, who are Microsoft partners, to identify partners with potential sales/solutions growth, and to work with them to increase their sales.  This may require some sales/marketing (I’m moving to the dark side!), some education/training, and assistance with early implementations.  And I will be on the road a lot.

Why am I leaving System Dynamics Ireland after just 11 months?  They weren’t a good fit.  When I landed there, I found that there was a lack of experience in IT infrastructure.  That ruled us out of so much business.  I spent a frustrating amount of time on the bench and I decided that it was best for all concerned that I look elsewhere.

I start late this morning, and I have the feeling that there’s already a queue of work waiting for me.  My work laptop has already been pre-ordered: a nice HP mobile workstation capable of 16 GB RAM and dual hard drives: nice for Hyper-V and demos!

2011
05.05

The battle to see who will dominate the public cloud arena is shaping up like one of those original UFC tournament events where fighters of all sizes and backgrounds fought each other to see who was the ultimate fighter … and to see who had the ultimate martial art. 

We know Amazon are the dominators right now, based on their customised Xen virtualisation (I believe, but could be wrong).  Microsoft has put together, and continues to put together, a formidable threat to them, based on SaaS and Azure.  Google has a SaaS offering too.  And there are lots of other offerings from various point solution SaaS providers and hosting companies too, based on VMware, XenServer, and Hyper-V.

Stepping up now are Dell and HP.  Dell have recently started recruiting developers and software architects in Dublin for their cloud offering.  OpenStack seems to be their preferred cloud solution, along with the Azure Appliance … we do know that they have considerable custom hardware engineering expertise for large scale cloud deployments.  That knowledge base will give them an advantage.  I read yesterday of a LinkedIn “leak” that leads us to believe that HP are focusing on VMware for their cloud.  They are announcing their cloud at VMworld according to the latest rumours, and it will be engineered to be similar to Amazon EC2.  I know that their Galway (western Ireland) R&D operation has been recruiting Java and open source skills, and that seems to be backed up by the same “leak”.

IBM are also in the game.  They have LotusLive (which seems very limited compared to Office365 or even BPOS, based on the brief look I had), but they are also doing something in the cloud arena.  Funnily enough, an ex-colleague who started his first post-college job on the same day and in the same team as me is involved in their Dublin operation.

Using the UFC comparison, who will be the big powerful wrestler, throwing around their competition?  I think Dell stands a very good shot at that.  As a consumer, I’d be worried about HP’s commitment.  They were in the online backup game for a while and backed out, leaving a lot of customers in the lurch.

Who will be the boxer/kick-boxer who can always deliver that knockout punch, even when losing in the 5th round?  IBM are a mystery to me.  Other than some software that I hate (yes, I know many, but not the majority, of you love Domino/Notes), IBM makes an absolute fortune every year doing stuff we never hear about.  That makes them a heavyweight to me.  I’m also thinking Microsoft.  Their advantage is that they own a huge percentage of the on-premises market which is not going to disappear.  Integration via the hybrid or cross-premises cloud will be a nice clip to the chin.

And who will be the weedy looking Brazilian Jiu-Jitsu guy that takes everyone by surprise by dislocating the opposition’s ankle/knee/elbow or choking them out?  In my opinion, these are the folks to watch.  They’ll be the ones that are small enough to adjust to this emerging business.  Customer requirements, regulatory compliance, and other complications, are all still evolving.  The likes of IBM, Dell, Microsoft, HP and Amazon are all so big that change will be slow.  The “smaller” guys can adapt to the environment more easily.  When I say “small”, this could be a Rackspace or similar which are still very big presences but not on the same scale as the big boys, or they could be the smaller hosters who have even more freedom to engineer quickly.

Those early days of mixed martial arts saw this relatively unknown Brazilian choke out bigger, and allegedly badder, guys than him.  Over the following two decades, the sport evolved.  Now you normally cannot be a UFC champion without learning wrestling, (kick)boxing, and Brazilian Jiu-Jitsu (BJJ) all at the same time.  Any weaknesses are exposed and taken advantage of as a fighter moves up the ladder.  But there are many fighters who have decent careers being one-trick ponies, as long as they always put in a good effort.  They may lose a bit, and might not be champions, but fans want to see them because they are entertaining in the octagon (ring).  And this is where things get interesting in the cloud world.

Microsoft do appear to be that complete fighter who does a little bit of everything.  They could be the complete Georges St-Pierre to defeat the traditional wrestler/boxer of Matt Hughes/Amazon EC2.  They have PaaS in Azure.  They have a form of IaaS (stateless, so it’s limited) in the Azure VM Role, which could develop into a more complete IaaS down the road.  And they have a growing SaaS offering in Office365, Intune, System Center Advisor, CRM, etc.  They have the huge on-site presence that can be integrated.  They’re all over the world and are marketing like crazy.  Sales people are being instructed to sell cloud first, then infrastructure.  IBM talk at very high levels but I’ve never heard specifics.  HP are building something that is open source based.  There is room for that in the public cloud arena.  Dell are doing the Azure Appliance which will give them a PaaS, and HP were thought to be doing the same.  Amazon are an infrastructure company right now.  They may not be the right company to build a PaaS to pair with their IaaS.  Google are just a SaaS offering right now.

Those “BJJ” hosters are in for interesting times.  I’ve bleated on before about the Patriot Act.  It’s the smaller local companies that are in a position to take advantage of that opening to cater for sensitive customers who want to go into the public cloud.  And those folks that do innovate new services, develop customer bases, and grow, will be the folks who become acquisition targets in the future.  They will be the new “fighting skill” that must be acquired to become a complete champion of the “sport”.

All that remains to be seen now is who will be the ultimate fighter … of the public cloud!

2011
05.04

This rules.  I drafted this post while I was sitting in a portable photography hide last night, watching some bait, hoping for a Common Buzzard to come visiting.  This wasn’t a realistic option 6 months ago, when my choices were to lug along a heavy laptop with 3G access or use my useless Windows Mobile 6 phone.

I lost it with that phone last year when trying to set the alarm to wake me up for an early flight.  The LG shell would cause the interface to “bounce” whenever you pressed a button with the stylus; no button press was recognised.  As a result, the screen broke with a stylus-point-shaped impact point.

I blogged before that I chose the iPhone after trying it and a Samsung Android phone.  I quickly found myself hooked on a few apps and web access.  I was commuting on the train every day, for 3 hours a day, so I installed the then-available VLC viewer and started watching TechEd presentations, movies, and TV shows.  Stuff I was missing because I was busy now filled in that dead time on the train.

Then last January I found myself in a hide in Norway, sitting there 7 hours a day, waiting for those few minutes when a Golden Eagle would land at the bait before my lens. I had no Internet access up in the mountains and the data roaming fees would have killed me. But I had installed the Kindle app on the phone and was reading books all day long while spying out the tiny window of the hide. Now I had a new use for the phone. I could quickly get books and read when I found myself with some spare time.

I was staying in Bellevue WA for the MVP summit in March. I decided to wander over to the Microsoft and Apple stores to have a look. I’d decided that I’d repeat this process with tablets. Here’s why.

In December I was at an MVP get-together in Reading, UK, taking notes on my netbook.  The battery died halfway through the event and I had to fall back to pen and paper.  Let’s be honest … those notes would go nowhere.  My neighbour, fellow VM MVP Mark Wilson, continued tapping away on his iPad with battery to spare by the end of the day.  Damn!

So I had a quick look at the demo iPads and bought one. The process was a dream. My credit card was associated with my iTunes account. They swiped it and activated the iPad in the store. They disposed of the trash for me. I walked out with my iPad in my laptop bag and was using it in a bar 10 minutes later. The following day I had Evernote installed for note taking and worked away all day without concern for battery life. Sweet!

The second day was funny.  The iPad 2 was being launched.  And Ben Armstrong, Hyper-V PM and fellow geek, had a good time laughing at me for buying an iPad 1 that week.  But I did save quite a bit by buying in the USA instead of Ireland, and the 2 wouldn’t be out while I was over there.

I returned home.  The iPad became my primary way of reading books.  Games were installed.  It replaced my iPhone for viewing movies and TV shows using VLC.  It also became my “couch computer” … a way to browse and read online without lugging a laptop around.

Now I find myself in the car quite a bit.  I’m also facing increased time on the road.  Radio is tiresome, always moaning about the same things or playing the same 10 songs over and over.  I discovered podcasts.  I have an FM transmitter for my iPhone and have the car radio tuned into it.  I have a smart playlist set up for everything and another set up for NFL news.  Now those hour-long drives fly by while listening to ESPN’s Mike and Mike or Paul Thurrott.  Yes, I have a MSFT podcast subscription but it’s a little light for my tastes.

I also now use the iPad as my portable portfolio. I have a “best of” photo folder that syncs with both it and the iPhone so I can quickly show people my best photography.

Everything is not perfect of course. I’d like more openness for data transfer and clean up. I’m used to that on the PC. And spell checking can be a nightmare too. As can accurately placing a cursor while correcting typos.

The recent update to iOS gives me the ability to tether the iPad to the iPhone to access the Internet.  That’s useful because I maintain my single data account and have access on the bigger device when required, using the personal hotspot.  So I write this in a wasteland area, waiting for a bird of prey to land, making use of my downtime.

I’ve just installed an app for a photography magazine. My first issue is downloading now. I can read that magazine as soon as it is published, without travelling to a newsagents, and without having to consume paper. Not to mention that it will reduce the cost of purchasing it. This is handy; I’ll be spending a lot more time going to meetings soon and I like to be early. I tend to read if I am early and I can pick up a magazine or read something on Kindle at the drop of a hat while sitting in the car or a nearby cafe.

Technology rocks!

2011
05.03

Today, Veeam has become a sponsor of this site.  Veeam are best known as the guys who make backup work on VMware’s virtualisation platform.  I became aware of them on the Minasi forum.  Anytime there was a question about backup on VMware, most of the answers were Veeam.  But they are much more than that …

Veeam are one of those companies that get that management is critical on a virtualisation platform.  Check out their product portfolio and you’ll see that Veeam understand that there is more to management than the virtualisation layer.

If you are on their site, you will see that Veeam have something big to announce in 12 days.  Curiouser and curiouser :)

And … shhh … but did you know that you can use System Center with VMware?  Yup!  Veeam have a slice of that pie too.  So check out the award-winning Veeam to see what they can offer you.

Technorati Tags: ,
Get Adobe Flash player