Storage Spaces Direct – Cache Disk Status

Storage Spaces Direct can make use of cache disks if you have provided SSDs or NVMe SSDs in your nodes. Normally the capacity disks are bound to the cache disks round-robin; see the official Microsoft documentation here.

If you suspect or see that one node is not getting the right performance numbers, you might wonder whether your cache devices are being used properly. The right way to check is to run Get-ClusterLog on your node, which generates a log of all cluster components, including cache devices. In the file, scroll down to the ‘[=== SBL Disks ===]’ section, which looks like this:
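Generating and searching the log can be done in two lines. This is a minimal sketch; the node name, destination path, and time span are example values, and the log file name follows the default `<node>_cluster.log` convention:

```powershell
# Generate the cluster log for one node (last 5 minutes) into C:\Temp
Get-ClusterLog -Node "Node01" -Destination "C:\Temp" -TimeSpan 5

# Find where the SBL Disks section starts in the generated log
Select-String -Path "C:\Temp\Node01_cluster.log" -Pattern '\[=== SBL Disks ===\]'
```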

[Screenshot: the ‘[=== SBL Disks ===]’ section of the cluster log]

You will find a CSV-formatted list of the disks in your node. Pay attention to the highlighted columns; these are the ones you’re looking for. With this information you can see which capacity disk is bound to which cache disk. If a disk doesn’t have a ‘CacheDeviceId’, it is either a cache disk itself or no cache device is bound to it (booyah!).
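If you save that CSV section to a file, you can let PowerShell do the matching for you. A hypothetical sketch; the `CacheDeviceId` column name comes from the log, but the file path and the all-zero GUID used for “no cache bound” are assumptions you should verify against your own log:

```powershell
# Import the CSV rows copied out of the '[=== SBL Disks ===]' section
$disks = Import-Csv "C:\Temp\sbldisks.csv"

# Group capacity disks by the cache device they are bound to
$disks |
    Where-Object { $_.CacheDeviceId -and
                   $_.CacheDeviceId -ne '{00000000-0000-0000-0000-000000000000}' } |
    Group-Object CacheDeviceId |
    Select-Object @{n='CacheDeviceId';e={$_.Name}},
                  @{n='BoundCapacityDisks';e={$_.Count}}
```

An uneven `BoundCapacityDisks` count across cache devices is exactly the kind of imbalance you are looking for.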


“Very cool, but you want me to do this for every node… MANUALLY?!”

Nah, I wrote you a PowerShell script 😉
The script will set up a remote session with each cluster node and generate the cluster log. It then reads through the log to find the disks section and reports back:
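The core of that approach can be sketched as follows. This is not the full script, just the loop structure it is built around; the destination path and the 40-line window after the section header are illustrative assumptions:

```powershell
# For each cluster node: open a remote session, generate the cluster
# log there, and pull back the lines after the SBL Disks header.
foreach ($node in (Get-ClusterNode)) {
    $session = New-PSSession -ComputerName $node.Name
    Invoke-Command -Session $session -ScriptBlock {
        Get-ClusterLog -Node $env:COMPUTERNAME -Destination "C:\Temp" | Out-Null
        $log   = Get-Content "C:\Temp\${env:COMPUTERNAME}_cluster.log"
        $start = ($log | Select-String '\[=== SBL Disks ===\]').LineNumber
        # Return a window of lines following the section header
        $log[$start..($start + 40)]
    }
    Remove-PSSession $session
}
```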


[Screenshot: script output showing the cache device bindings per node]

Much of the credit for this script goes to this project.
I extracted the part for cache devices and modified the script so it runs remotely.

Thank you for reading my blog.
If you have any questions or feedback, leave a comment or drop me an email.

Darryl van der Peijl


5 thoughts to “Storage Spaces Direct – Cache Disk Status”

  1. What happens if you have an incorrect ratio? Example: 4 servers, each with 3 NVMe, 6 SSD, and 10 HDD, using 3-way mirror on the performance tier and parity on the capacity tier.
    Would this have any impact on resiliency?

    1. Hi Joe,

      It has no impact on resiliency. You would have 4 SSDs with 2 HDDs bound to each. In other words, those 4 SSDs would get more IO and wear out faster.

  2. Hi
    So what do I do when I get an uneven ratio? Is there any way to create the bindings manually? Or let S2D re-consider the imbalance?
    When adding 6 more hosts to an existing cluster of 6, two hosts failed to end up with the right cache binding.
    We have 5 SSDs and 15 HDDs in each node, so a 1:3 ratio. This works for all nodes but two.

    I get these warnings on 2 hosts.
    WARNING: Not all cache devices in use
    WARNING: Binding ratios are uneven

    On these 2 nodes, only 1 or 2 cache disks are bound to the capacity disks, even though all the SSDs are in the pool and set to Journal usage.

    Will Repair-ClusterStorageSpacesDirect do the job?

