My Top 5 Features in System Center Data Protection Manager 2016


Microsoft’s System Center Data Protection Manager (DPM) has undergone a huge period of transition over the past two years. Significant investments have been made in hybrid cloud backup solutions, and DPM 2016 brings many improvements to this on-premises backup solution that all kinds of enterprise customers need to consider. Here are my top 5 features in DPM 2016.

5: Upgrading a DPM production server to 2016 doesn’t require a reboot

Times have changed: Windows Server and System Center will no longer be released every three to five years. Microsoft recognizes that customers want to upgrade but fear the complexity and downtime that upgrades often introduce. Upgrading DPM servers and agents to 2016 will not force production hosts to reboot.

4: Continued protection during cluster aware updates

The theme of protection without downtime continues. I’ve worked in the hosting business, where every second of downtime was calculated in dollars and euros. Cluster-aware updating allows Hyper-V clusters to receive security updates and hotfixes without downtime for the applications running in the virtual machines. DPM 2016 supports this orchestrated patching process, ensuring that your host clusters can remain stable and secure, and that your valuable data stays protected by backup.

3: Modern Backup Storage

Few people like tapes, which were first used with computers in 1951! And one of the big concerns about backup is the cost of storage. Few companies understand software-defined storage like Microsoft, which leads the way with Azure and Windows Server. DPM 2016 joins the ranks by modernizing how disk storage is deployed for storing backups. ReFS 3.0 block cloning is used to store incremental backups, improving both space utilization and performance. Other enhancements include growing and shrinking storage usage based on demand, instead of the expensive over-allocation of the past.
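The space savings from block cloning come from recording an incremental backup as references to blocks that already exist on disk, plus only the blocks that changed. The following is a minimal Python sketch of that idea using a toy content-addressed block store; it is an illustration of the general technique, not DPM's or ReFS's actual implementation.

```python
import hashlib

class BlockStore:
    """Toy content-addressed store illustrating block-clone-style savings."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}     # digest -> bytes, each unique block stored once
        self.refcount = {}   # digest -> number of backups referencing it

    def backup(self, data):
        """Store data; blocks already present are referenced, not re-copied."""
        manifest = []
        for i in range(0, len(data), self.block_size):
            chunk = data[i:i + self.block_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.blocks:
                self.blocks[digest] = chunk  # new block: store it once
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            manifest.append(digest)          # backup = list of block refs
        return manifest

    def bytes_stored(self):
        return sum(len(b) for b in self.blocks.values())

store = BlockStore()
store.backup(b"AAAABBBBCCCCDDDD")  # first full backup: four 4-byte blocks
store.backup(b"AAAABBBBXXXXDDDD")  # "incremental": only one block changed
print(store.bytes_stored())        # 20 bytes stored, not 32
```

Both backups remain fully restorable from their manifests, yet the second one costs only the single changed block. ReFS block cloning achieves a similar effect at the file-system level, without the hashing step, by letting files share extents directly.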

2: Support for Storage Spaces Direct

While we’re discussing modern storage, let’s talk about how DPM 2016 has support for Microsoft’s software-defined hyper-converged infrastructure solution, Storage Spaces Direct. In recent years, these two concepts, inspired by the cloud, have shaken up enterprise storage:

  • Software-defined storage: Customers have started to realize that a SAN isn’t the best way to deploy fast, scalable, resilient, and cost-effective storage. Using commodity components, software can overcome the limitations of RAID and the expense of proprietary lock-in hardware.
  • Hyper-converged infrastructure: Imagine a virtualization deployment where there is one tier of hardware; storage and compute are merged together using the power of software and hardware offloads (such as SMB Direct/RDMA), turning cluster deployments into a simpler and faster process.

Windows Server 2016 took lessons from the previous two versions of Storage Spaces, from Azure, and from the storage industry, and made hyper-converged infrastructure a feature of Windows Server. This means that you can deploy extremely fast storage (NVMe, SSD, and HDD disks with 10 Gbps or faster networking) that is cost effective, using 1U or 2U servers, with no need for a SAN, external SAS hardware, or any of those other complications. DPM 2016 supports this revolutionary architecture, ensuring the protection of your data on the Microsoft on-premises cloud.

1: Built for the Cloud

I’ve already discussed the cost of storage, but that cost is doubled or more once we start to talk about off-site storage of backups or online-backup solutions. While many virtualization-era backup products are caught up on local backup bells and whistles, Microsoft has transformed backup for the cloud.

Combined with Azure Backup, DPM 2016 gives customers a unique option. You get enterprise-class backup that protects workloads on cost-effective on-premises storage (Modern Backup Storage) for short-term retention. Adding the very affordable Azure Backup provides several benefits, including:

  • A secondary site, safeguarding your backups from localized issues.
  • Cost effective long-term retention for up to 99 years.
  • Encrypted “trust no-one” storage with security mechanisms to protect you against ransomware and deliberate attacks against your backups.

If you are not using DPM, or have not looked at it in the past two years, then I think it’s time to re-evaluate this product.

 


7 Comments on My Top 5 Features in System Center Data Protection Manager 2016

  1. Aidan, do you know of a solution for backing up user profile disks? Microsoft surprisingly offers no solution for this with DPM (UPD backup is not supported) or otherwise.

    Also, have you seen any recent data on ReFS vs. NTFS performance? Most of the benchmarks I saw when ReFS was introduced showed NTFS to be faster in typical disk IO. Block cloning and the other VM management-oriented performance benefits look fantastic, but not so much if it’s at the expense of day-to-day IO.

    Thank you!
    Ryan

  2. Christian Wimmer // February 25, 2017 at 1:17 PM // Reply

    DPM 2016 was quite buggy on release. If you deploy it, make sure to use the latest rollup; anything below UR2 will just be painful to deal with.

  3. Aidan, I don’t see any benefits of MBS at the moment. I don’t see any space savings from Modern Backup Storage without dedup. At the moment, DPM eats all the space on the dedicated volume and sends a lot of Warning/Resolved alerts about free space on this volume.
    I added a new volume for DPM, but it doesn’t resolve the issue.
    It seems that without dedup, MBS/ReFS will not reduce space usage.
    DPM 2016 also needs more disk space for the same backups than DPM 2012 R2.
    I see big storage consumption for BMR backups; I didn’t see these sizes for replica/recovery point volumes in DPM 2012 R2.
    I’m going to switch DPM 2016 back to the old disk allocation technology.
    It would be nice if you compared MBS with dedup, MBS without dedup, and the old disk allocation technology.

    • This would be quite disappointing news. One of the primary features I was in desperate need of was an improvement to the existing large-block-IO-heavy, NTFS-based storage design. Apart from the capacity increases, have you been able to do any benchmarking of the load placed on the underlying storage when using ReFS 3.0 versus NTFS? I would be VERY interested in seeing some results from the field.

  4. Christian Wimmer // March 9, 2017 at 4:44 PM // Reply

    MBS gave us something like 30-40% more storage space. A lot of the savings came from Hyper-V backups. It’s true that BMR backups eat a lot more space now; luckily, we only have a couple of those.

2 Trackbacks & Pingbacks

  1. Announcing SC 2016 DPM Guest Blog Series – Christopher Golden Blog
  2. Announcing SC 2016 DPM Guest Blog Series – IT-News von PC-Meister
