
Got a sizeable environment of about 80 hosts that we recently updated from some build of Update 1 (not sure which) to Update 2 (Build 13981272 to be exact). No problems early on, but as we upgraded more hosts, we started getting storage alarms about high I/O latency to VMs.

After lots of digging, it looks like my path policy was reset from Round Robin to Most Recently Used. I use the following command in my deployments to set the default path policy for my ALUA disks:

esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR

It's just like that config disappeared in my upgrade. This is the only change I'm aware of that has been made to the hosts.
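If you want to confirm what your devices actually ended up with after the upgrade, something like this should show the Path Selection Policy per device (the grep is just a convenience to trim the output):

esxcli storage nmp device list | grep -E '^naa\.|Path Selection Policy:'

A single device can also be flipped back by hand while you sort out the defaults (the device name here is a placeholder):

esxcli storage nmp device set -d naa.xxxxxxxx --psp VMW_PSP_RR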
Really just wondering if other folks are seeing this as well, or if I have something else in my environment I should track down. But maybe it will help someone else out there if they're seeing it too.

Message was edited by: Schroeber - Changed title of thread to indicate problem is with 3PAR Disks
I had the same issues on earlier ESXi versions. Do your volumes have the tuning advised by HPE, i.e. QueueFull Threshold 4 and QueueFull Sample Size 32? (Sometimes "queue-depth 128" is advised in addition, but it's not advised to change it if not needed.) You can check it with the following command:

esxcli storage core device list

It will give you the list of all your volumes with the associated parameters; then you can take a look at the Queue Full Sample Size, Queue Full Threshold and Device Max Queue Depth if needed.
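To check a single volume instead of scrolling through the whole list, the -d switch narrows it down (device name is a placeholder):

esxcli storage core device list -d naa.xxxxxxxx | grep -i queue

That should print just the queue-related lines (Queue Full Sample Size, Queue Full Threshold, Device Max Queue Depth) for that LUN.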

If the value is different from those advised by HPE, here's the command to set the value for one volume (test with only one or two volumes first, then if it works, apply it to all):

esxcli storage core device set --device device_name --queue-full-threshold 4 --queue-full-sample-size 32
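Once the tuning checks out on the test volumes, a small loop from the ESXi shell can roll it out to the rest. The naa.60002ac prefix below is an assumption (it's what 3PAR volumes look like in my environment), so verify it against your own device IDs before running:

for dev in $(esxcli storage core device list | grep '^naa.60002ac'); do
  esxcli storage core device set --device "$dev" --queue-full-threshold 4 --queue-full-sample-size 32
done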

Finally, have you tried to modify the PSP Options of your current rule to "1" (iops=1)?
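For reference, the claim rule HPE documents for 3PAR sets that option at claim time; it should look something along these lines (double-check the vendor/model strings against your implementation guide before adding it):

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HPE 3PAR Custom Rule"

You can also set it per device if you'd rather test on one LUN first (device name is a placeholder):

esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxx --type=iops --iops=1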
