(SOLUTION) Azure File Sync – Tiering & Synchronisation Won’t Work

I recently had a problem where I could not get Azure File Sync (AFS) to work correctly for me. The two issues I had were:

  • I could not synchronise a share to a new file server (a new office or disaster recovery site) when I set the new server endpoint to be tiered.
  • When I enabled tiering on an existing server endpoint, cloud tiering never occurred.

I ran FileSyncErrorsReport.ps1 from the sync agent installation folder. The error summary was:

0x80c80203 – There was a problem transferring a file but sync will try again later

Each file in the share had an additional message of:

0x80c80203 There was a problem transferring a file but sync will try again later.
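
For reference, this is roughly how I ran the report – the agent installation path below is the default, so adjust it if you installed the agent somewhere else:

  # Run from an elevated PowerShell prompt on the file server.
  cd 'C:\Program Files\Azure\StorageSyncAgent'
  .\FileSyncErrorsReport.ps1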

Both problems seemed to indicate an issue with tiering. I suspected that an old bug from the preview v2.3 sync agent had returned, but I was wrong; it was something different. I decided to disable tiering on a new server endpoint that wasn’t synchronising – and the folder started to synchronise.

When this sort of thing happens in AFS, you suspect that there’s a problem with the storagesync filter, which you can investigate using fltmc.exe. I reached out to the AFS product group and they investigated over two nights (time zone differences). Eventually the logs identified the problem.
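
As an aside, checking the filter with fltmc.exe looks like this (standard fltmc.exe usage from an elevated prompt, not a command the AFS team gave me):

  # List all loaded file system minifilter drivers; the storagesync
  # filter should appear in the list on a healthy AFS server.
  fltmc.exe filters

  # Show which volumes each loaded filter is attached to.
  fltmc.exe instances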

In my lab, I deployed 3 file servers as Hyper-V virtual machines. Each machine had Dynamic Memory enabled:

  • Startup Memory: 1024 MB
  • Minimum Memory: 512 MB
  • Maximum Memory: 4096 MB

This means that each machine has access to up to 4 GB RAM. The host was far from contended, so there should not have been an issue. But it turns out there was. The AfsDiag traces that I created showed that one of the machines had only 592 MB of RAM free out of 1907 MB currently assigned… remember, that’s RAM free from the currently assigned RAM, not from the possible maximum RAM.
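
If you want a quick look at this yourself without pulling an AfsDiag trace, a sketch like the following (run inside the guest) shows the currently assigned and free RAM – with Dynamic Memory, TotalVisibleMemorySize reflects what is assigned right now, not the configured maximum:

  # Show how much RAM the guest currently sees and how much is free.
  # Both counters are reported in KB, so divide by 1KB (1024) for MB.
  Get-CimInstance Win32_OperatingSystem |
      Select-Object @{N='TotalMB';E={[math]::Round($_.TotalVisibleMemorySize/1KB)}},
                    @{N='FreeMB';E={[math]::Round($_.FreePhysicalMemory/1KB)}}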

The storagesync filter requires more than that – the release notes for the sync agent state that the agent requires 2 GB of RAM. The team asked me to modify the Dynamic Memory settings of one of the file servers to test. I shut down the VM and changed the memory settings to:

  • Startup Memory: 2048 MB
  • Minimum Memory: 2048 MB
  • Maximum Memory: 4096 MB
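
If you prefer to script the change from the Hyper-V host, something like this should do the same job – FS01 is a placeholder for your file server’s VM name:

  # Run on the Hyper-V host; the VM must be off to change startup memory.
  Stop-VM -Name FS01
  Set-VMMemory -VMName FS01 -DynamicMemoryEnabled $true `
      -StartupBytes 2GB -MinimumBytes 2GB -MaximumBytes 4GB
  Start-VM -Name FS01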

I started the VM and things immediately began to work as expected. The new server endpoints populated with files, and the tiered endpoints began replacing cold files with reparse points to the cloud replicas.

The above settings might not work for you. Remember that the storage sync agent requires 2 GB of RAM; your setup might require more. You’ll have to tune things specifically to your file server, particularly if you are using Dynamic Memory. It might be worth exploring the memory buffer setting to ensure that there’s always enough free RAM for the sync agent, e.g. if the VM is set up as above, set the buffer to 50% to add an extra 1 GB to the startup amount.
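
Sticking with the hypothetical FS01 VM from the earlier sketch, the buffer percentage can be set like this:

  # A 50% buffer on a 2048 MB startup allocation asks Hyper-V to try to
  # keep roughly an extra 1 GB of memory available to the guest.
  Set-VMMemory -VMName FS01 -Buffer 50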

Thanks to Will, Manish, and Jeff in the AFS team for their help in getting to the bottom of this.
