Download ESXi 6.7 Update 1

A few months after the announcement, VMware vSphere 6.7 Update 1 is now generally available (GA) and you can download all the bits. This also means a new version of VMware vSAN 6.7 Update 1 and, more importantly, a fully featured vSphere Client: the HTML5 client is feature complete in vSphere 6.7 Update 1.

Note: when patching the vCenter Server Appliance, ensure that you have the login credentials for the ESXi host that runs the vCenter Server VM. For example, vCenter Server Appliance 6.7 can be patched to 6.7 Update 3o, which includes the fix for VMSA-2021-0020; the build number of the patched vCenter Server is 18485166.

Build a Dynamic Datacenter with VMware vSphere

VMware vSphere Hypervisor enables single-server partitioning and forms the foundation for a virtualized datacenter. By upgrading to more advanced editions of VMware vSphere, you can build upon this base virtualization layer to obtain centralized management, continuous application availability, and maximum operational efficiency. VMware vSphere is the most widely deployed enterprise virtualization suite that offers customers:

  • Centralized management of virtual machines and their physical hosts
  • Integrated backup and restore of virtual machines
  • Protection against physical server failures for high availability
  • Live migration of virtual machines between physical servers with no downtime
  • Dynamic load balancing of virtual machines to guarantee service levels
Customers can obtain VMware vSphere Hypervisor free of charge and later seamlessly upgrade to more advanced kits of vSphere designed for Small Businesses or Mid-Size & Enterprise Businesses.

ESXi 6.7 Update 1 | 16 OCT 2018 | ISO Build 10302608

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

  • ESXi 6.7 Update 1 adds a Microsemi Smart PQI (smartpqi) LSU plug-in to support attached disk management operations on the HPE ProLiant Gen10 Smart Array Controller.
  • ESXi 6.7 Update 1 adds Quick Boot support for Intel i40en and ixgben Enhanced Network Stack (ENS) drivers, and extends support for HPE ProLiant and Synergy servers. For more information, see VMware knowledge base article 52477.
  • ESXi 6.7 Update 1 enables a precheck when upgrading ESXi hosts by using the software profile commands of the ESXCLI command set to ensure upgrade compatibility (see the example after this list).
  • ESXi 6.7 Update 1 adds nfnic driver support for Cisco UCS Fibre Channel over Ethernet (FCoE).
  • ESXi 6.7 Update 1 adds support for Namespace Globally Unique Identifier (NGUID) in the NVMe driver.
  • With ESXi 6.7 Update 1, you can run the following storage ESXCLI commands to enable status LED on Intel VMD based NVMe SSDs without downloading Intel CLI:
    • esxcli storage core device set -l locator -d
    • esxcli storage core device set -l error -d
    • esxcli storage core device set -l off -d
  • ESXi 6.7 Update 1 adds APIs to avoid ESXi host reboot while configuring ProductLocker and to enable the management of VMware Tools configuration by the CloudAdmin role in cloud SDDC without the need of access to Host Profiles.
  • ESXi 6.7 Update 1 adds the advanced configuration option EnablePSPLatencyPolicy to claim devices with latency based Round Robin path selection policy. The EnablePSPLatencyPolicy configuration option does not affect existing device or vendor-specific claim rules. You can also enable logging to display paths configuration when using the EnablePSPLatencyPolicy option.
  • With ESXi 6.7 Update 1, you can use vSphere vMotion to migrate virtual machines configured with NVIDIA virtual GPU types to other hosts with compatible NVIDIA Tesla GPUs. For more information on the supported NVIDIA versions, see the VMware Compatibility Guide.
  • With ESXi 6.7 Update 1, Update Manager Download Service (UMDS) does not require a Database and the installation procedure is simplified. For more information, see the vSphere Update Manager Installation and Administration Guide.
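
As an example of an upgrade that exercises this precheck, the offline bundle can be inspected and applied with the ESXCLI software profile commands. This is a minimal sketch: the datastore path is an assumption for illustration, and the profile names are listed in the Image Profiles section of these notes.

    # List the image profiles contained in the downloaded offline bundle
    esxcli software sources profile list -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update01.zip

    # Upgrade the host; ESXi 6.7 Update 1 runs an upgrade compatibility precheck before applying the profile
    esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update01.zip -p ESXi-6.7.0-20181002001-standard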

For more information on VMware vSAN issues, see VMware vSAN 6.7 Update 1 Release Notes.

Earlier Releases of ESXi 6.7

Features and known issues of ESXi are described in the release notes for each release. Release notes for earlier releases of ESXi 6.7 are:

For internationalization, compatibility, installation and upgrades, open source components, and product support notices, see the VMware vSphere 6.7 Release Notes.

Installation Notes for This Release

VMware Tools Bundling Changes in ESXi 6.7 Update 1

In ESXi 6.7 Update 1, a subset of VMware Tools 10.3.2 ISO images are bundled with the ESXi 6.7 Update 1 host.

The following VMware Tools 10.3.2 ISO images are bundled with ESXi:

  • windows.iso : VMware Tools image for Windows Vista or higher
  • linux.iso : VMware Tools image for Linux OS with glibc 2.5 or higher

The following VMware Tools 10.3.2 ISO images are available for download:

  • solaris.iso : VMware Tools image for Solaris
  • freebsd.iso : VMware Tools image for FreeBSD
  • darwin.iso : VMware Tools image for OSX

Follow the procedures listed in the following documents to download VMware Tools for platforms not bundled with ESXi:

Upgrade Notes for This Release

IMPORTANT: Upgrade from ESXi650-201811002 to ESXi 6.7 Update 1 is not supported, because that patch was released after 6.7 Update 1 and upgrading from it is considered a back-in-time upgrade.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product.

Build Details

Download Filename: update-from-esxi6.7-6.7_update01.zip
Build: 10302608
Download Size: 450.5 MB
md5sum: 0ae2ab210d1ece9b3e22b5db2fa3a08e
sha1checksum: b11a856121d2498453cef7447294b461f62745a3
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
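
To verify the integrity of the downloaded bundle before installing it, you can compare its checksums against the values above, for example with the standard md5sum and sha1sum utilities (available on most Linux systems and in the ESXi shell); the outputs should match the md5sum and sha1checksum listed in the build details.

    md5sum update-from-esxi6.7-6.7_update01.zip
    sha1sum update-from-esxi6.7-6.7_update01.zip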

Bulletins

This release contains general and security-only bulletins. Security-only bulletins are applicable to new security fixes only. No new bug fixes are included, but bug fixes from earlier patch and update releases are included.
If the installation of all new security and bug fixes is required, you must apply all bulletins in this release. In some cases, the general release bulletin will supersede the security-only bulletin. This is not an issue as the general release bulletin contains both the new security and bug fixes.
The security-only bulletins are identified by bulletin IDs that end in 'SG'. For information on patch and update classification, see KB 2014447.
For more information about the individual bulletins, see the My VMware page and the Resolved Issues section.
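
If you apply this release with ESXCLI instead of vSphere Update Manager, a minimal sketch looks like the following; the datastore path is an assumption for illustration. A host reboot is required afterward, as noted in the build details above.

    # Place the host in maintenance mode before patching
    esxcli system maintenanceMode set --enable true

    # Apply every VIB in the offline bundle, which covers both the general and the security-only bulletins
    esxcli software vib update -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update01.zip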

Bulletin ID             Category      Severity
ESXi670-201810201-UG    Bugfix        Critical
ESXi670-201810202-UG    Bugfix        Important
ESXi670-201810203-UG    Bugfix        Important
ESXi670-201810204-UG    Bugfix        Critical
ESXi670-201810205-UG    Bugfix        Moderate
ESXi670-201810206-UG    Bugfix        Critical
ESXi670-201810207-UG    Enhancement   Important
ESXi670-201810208-UG    Enhancement   Important
ESXi670-201810209-UG    Bugfix        Important
ESXi670-201810210-UG    Bugfix        Important
ESXi670-201810211-UG    Enhancement   Important
ESXi670-201810212-UG    Bugfix        Important
ESXi670-201810213-UG    Bugfix        Critical
ESXi670-201810214-UG    Bugfix        Critical
ESXi670-201810215-UG    Bugfix        Important
ESXi670-201810216-UG    Bugfix        Important
ESXi670-201810217-UG    Bugfix        Important
ESXi670-201810218-UG    Bugfix        Important
ESXi670-201810219-UG    Bugfix        Moderate
ESXi670-201810220-UG    Bugfix        Important
ESXi670-201810221-UG    Bugfix        Important
ESXi670-201810222-UG    Bugfix        Important
ESXi670-201810223-UG    Bugfix        Important
ESXi670-201810224-UG    Bugfix        Important
ESXi670-201810225-UG    Bugfix        Important
ESXi670-201810226-UG    Bugfix        Important
ESXi670-201810227-UG    Bugfix        Important
ESXi670-201810228-UG    Bugfix        Important
ESXi670-201810229-UG    Bugfix        Important
ESXi670-201810230-UG    Bugfix        Important
ESXi670-201810231-UG    Bugfix        Important
ESXi670-201810232-UG    Bugfix        Important
ESXi670-201810233-UG    Bugfix        Important
ESXi670-201810234-UG    Bugfix        Important
ESXi670-201810101-SG    Security      Important
ESXi670-201810102-SG    Security      Moderate
ESXi670-201810103-SG    Security      Important

IMPORTANT: For clusters using VMware vSAN, you must first upgrade the vCenter Server system. Upgrading only ESXi is not supported.
Before an upgrade, always verify compatible upgrade paths from earlier versions of ESXi and vCenter Server to the current version in the VMware Product Interoperability Matrix.

Image Profiles

VMware patch and update releases contain general and critical image profiles.

Application of the general release image profile applies to new bug fixes.

Image Profile Name
ESXi-6.7.0-20181002001-standard
ESXi-6.7.0-20181002001-no-tools
ESXi-6.7.0-20181001001s-standard
ESXi-6.7.0-20181001001s-no-tools
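
After patching, you can confirm which image profile a host is running from the ESXi shell. This is a quick sanity check rather than a required step:

    # Displays the image profile currently installed on the host
    esxcli software profile get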

Resolved Issues

The resolved issues are grouped as follows.

ESXi670-201810201-UG
Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_esx-base_6.7.0-1.28.10302608
  • VMware_bootbank_vsan_6.7.0-1.28.10290435
  • VMware_bootbank_vsanhealth_6.7.0-1.28.10290721
  • VMware_bootbank_esx-update_6.7.0-1.28.10302608
PRs Fixed: 1220910, 2036262, 2036302, 2039186, 2046226, 2057603, 2058908, 2066026, 2071482, 2072977, 2078138, 2078782, 2078844, 2079807, 2080427, 2082405, 2083581, 2084722, 2085117, 2086803, 2086807, 2089047, 2096126, 2096312, 2096947, 2096948, 2097791, 2098170, 2098868, 2103579, 2103981, 2106747, 2107087, 2107333, 2110971, 2118588, 2119610, 2119663, 2120346, 2126919, 2128933, 2129130, 2129181, 2131393, 2131407, 2133153, 2133588, 2136004, 2137261, 2145089, 2146206, 2146535, 2149518, 2152380, 2154913, 2156841, 2157501, 2157817, 2163734, 2165281, 2165537, 2165567, 2166114, 2167098, 2167878, 2173810, 2186253, 2187008
CVE numbers: N/A

This patch updates the esx-base, esx-update, vsan and vsanhealth VIBs to resolve the following issues:

  • PR 2084722: If you delete the support bundle folder of a virtual machine from an ESXi host, the hostd service might fail

    If you manually delete the support bundle folder of a virtual machine downloaded in the /scratch/downloads directory of an ESXi host, the hostd service might fail. The failure occurs when hostd automatically tries to delete folders in the /scratch/downloads directory one hour after the files are created.

    This issue is resolved in this release.

  • PR 2083581: You might not be able to use a smart card reader as a passthrough device by using the feature Support vMotion while device is connected

    If you edit the configuration file of an ESXi host to enable a smart card reader as a passthrough device, you might not be able to use the reader if you have also enabled the feature Support vMotion while a device is connected.

    This issue is resolved in this release.

  • PR 2058908: VMkernel logs in a VMware vSAN environment might be flooded with the message Unable to register file system

    In vSAN environments, VMkernel logs that record activities related to virtual machines and ESXi, might be flooded with the message Unable to register file system.

    This issue is resolved in this release.

  • PR 2106747: The vSphere Web Client might display incorrect storage allocation after you increase the disk size of a virtual machine

    If you use the vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration, and display the storage allocation of the virtual machine as unchanged.

    This issue is resolved in this release.

  • PR 2089047: Using the variable windows extension option in Windows Deployment Services (WDS) on virtual machines with EFI firmware might result in slow PXE booting

    If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the variable windows extension option in WDS, the virtual machine might boot extremely slowly.

    This issue is resolved in this release.

  • PR 2103579: Apple devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12

    Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.

    This issue is resolved in this release. To enable connection in earlier releases of ESXi, append usb.quirks.darwinVendor45 = TRUE as a new option to the .vmx configuration file.

  • PR 2096126: Applying a host profile with enabled Stateful Install to an ESXi 6.7 host by using vSphere Auto Deploy might fail

    If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.7 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.

    This issue is resolved in this release.

  • PR 2072977: Virtual machine disk consolidation might fail if the virtual machine has snapshots taken with Content Based Read Cache (CBRC) enabled and then the feature is disabled

    If a virtual machine has snapshots taken by using CBRC and CBRC is later disabled, disk consolidation operations might fail with the error A specified parameter was not correct: spec.deviceChange.device, because the digest file is deleted when CBRC is disabled. An alert Virtual machine disks consolidation is needed is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled.

    This issue is resolved in this release.

  • PR 2107087: An ESXi host might fail with a purple diagnostic screen

    An ESXi host might fail with a purple diagnostic screen due to a memory allocation problem.

    This issue is resolved in this release.

  • PR 2066026: The esxtop utility might report incorrect statistics for DAVG/cmd and KAVG/cmd on VAAI-supported LUNs

    An incorrect calculation in the VMkernel causes the esxtop utility to report incorrect statistics for the average device latency per command (DAVG/cmd) and the average ESXi VMkernel latency per command (KAVG/cmd) on LUNs with VMware vSphere Storage APIs Array Integration (VAAI).

    This issue is resolved in this release.

  • PR 2036262: Claim rules must be manually added to ESXi for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi

    This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.

    This issue is resolved in this release.

  • PR 2082405: A virtual machine cannot connect to a distributed port group after cold migration or VMware vSphere High Availability failover

    After cold migration or a vSphere High Availability failover, a virtual machine might fail to connect to a distributed port group, because the port is deleted before the virtual machine powers on.

    This issue is resolved in this release.

  • PR 2078782: I/O commands might fail with INVALID FIELD IN CDB error

    ESXi hosts might not reflect the MAXIMUM TRANSFER LENGTH parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:

    2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev 'naa.514f0c5d38200035' failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

    This issue is resolved in this release.

  • PR 2107333: Claim rules must be manually added to ESXi for HITACHI OPEN-V storage arrays

    This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for HITACHI OPEN-v type storage arrays with Asymmetric Logical Unit Access (ALUA) support.

    The fix also sets Storage Array Type Plug-in (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_off as default for HITACHI OPEN-v type storage arrays without ALUA support.

    This issue is resolved in this release.

  • PR 2078138: The backup process of a virtual machine with Windows Server guest OS might fail because the quiesced snapshot disk cannot be hot-added to a proxy virtual machine that is enabled for CBRC

    If both the target virtual machine and the proxy virtual machine are enabled for CBRC, the quiesced snapshot disk cannot be hot-added to the proxy virtual machine, because the digest of the snapshot disk in the target virtual machine is disabled during the snapshot creation. As a result, the virtual machine backup process fails.

    This issue is resolved in this release.

  • PR 2098868: Calls to the HostImageConfigManager managed object might cause increased log activity in the syslog.log file

    The syslog.log file might be repeatedly populated with messages related to calls to the HostImageConfigManager managed object.

    This issue is resolved in this release.

  • PR 2096312: While extending the coredump partition of an ESXi host, vCenter Server might raise an alarm that no coredump target is configured

    While extending the coredump partition of an ESXi host, vCenter Server might raise an alarm that no coredump target is configured, because before increasing the partition size, the existing coredump partition is temporarily deactivated.

    This issue is resolved in this release. For vCenter Server 6.7.0, work around this issue by running the following command:
    # /bin/esxcfg-dumppart -C -D active --stdout 2> /dev/null

  • PR 2057603: An ESXi host might fail with a purple diagnostic screen during shutdown or power off of virtual machines if you use EMC RecoverPoint

    Due to a race condition in the VSCSI, if you use EMC RecoverPoint, an ESXi host might fail with a purple diagnostic screen during the shutdown or power off of a virtual machine.

    This issue is resolved in this release.

  • PR 2039186: VMware vSphere Virtual Volumes metadata might not be updated with associated virtual machines and make virtual disk containers untraceable

    vSphere Virtual Volumes set with VMW_VVolType metadata key Other and VMW_VVolTypeHint metadata key Sidecar might not get VMW_VmID metadata key to the associated virtual machines and cannot be tracked by using IDs.

    This issue is resolved in this release.

  • PR 2036302: SolidFire arrays might not get optimal performance without re-configuration of SATP claim rules

    This fix sets Storage Array Type Plugin (SATP) claim rules for Solidfire SSD SAN storage arrays to VMW_SATP_DEFAULT_AA and the Path Selection Policy (PSP) to VMW_PSP_RR with 10 I/O operations per second by default to achieve optimal performance.

    This issue is resolved in this release.

  • PR 2112769: A fast reboot of a system with an LSI controller or a reload of an LSI driver might result in unresponsive or inaccessible datastores

    A fast reboot of a system with an LSI controller or a reload of an LSI driver, such as lsi_mr3, might put the disks behind the LSI controller in an offline state. The LSI controller firmware sends a SCSI STOP UNIT command during unload, but a corresponding SCSI START UNIT command might not be issued during reload. If disks go offline, all datastores hosted on these disks become unresponsive or inaccessible.

    This issue is resolved in this release.

  • PR 2119610: Migration of a virtual machine with a Filesystem Device Switch (FDS) on a vSphere Virtual Volumes datastore by using VMware vSphere vMotion might cause multiple issues

    If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has either of the Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC) or I/O filters enabled, the migration might cause issues with any of the features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache I/O filters, corrupted replication I/O filters, and disk corruption, when cache I/O filters are configured in write-back mode. You might also see issues with the virtual machine encryption.

    This issue is resolved in this release.

  • PR 2128933: I/O submitted to a VMFSsparse snapshot might fail without an error

    If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might only partially be processed at VMFSsparse level, but upper layers, such as I/O Filters, might presume the transfer is successful. This might lead to data inconsistency. This fix sets a transient error status for reference from upper layers if an I/O is complete.

    This issue is resolved in this release.

  • PR 2046226: SCSI INQUIRY commands on Raw Device Mapping (RDM) LUNs might return data from the cache instead of querying the LUN

    SCSI INQUIRY data is cached at the Pluggable Storage Architecture (PSA) layer for RDM LUNs, and responses to subsequent SCSI INQUIRY commands might be returned from the cache instead of querying the LUN. To avoid fetching cached SCSI INQUIRY data, apart from modifying the .vmx file of a virtual machine with RDM, with ESXi 6.7 Update 1 you can also ignore the inquiry cache by using the ESXCLI command

    esxcli storage core device inquirycache set --device <device-id> --ignore true.

    With the ESXCLI option, a reboot of the virtual machine is not necessary.

    This issue is resolved in this release.

  • PR 2133588: Claim rules must be manually added to ESXi for Tegile IntelliFlash storage arrays

    This fix sets Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for Tegile IntelliFlash storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets Storage Array Type Plugin (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_off as default for Tegile IntelliFlash storage arrays without ALUA support.

    This issue is resolved in this release.

  • PR 2129181: If the state of the paths to a LUN changes during an ESXi host booting, booting might take longer

    During an ESXi boot, if the commands issued during the initial device discovery fail with ASYMMETRIC ACCESS STATE CHANGE UA, the path failover might take longer because the commands are blocked within the ESXi host. This might lead to a longer ESXi boot time.
    You might see logs similar to:
    2018-05-14T01:26:28.464Z cpu1:2097770)NMP: nmp_ThrottleLogForDevice:3689: Cmd 0x1a (0x459a40bfbec0, 0) to dev 'eui.0011223344550003' on path 'vmhba64:C0:T0:L3' Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x6 0x2a 0x6. Act:FAILOVER
    2018-05-14T01:27:08.453Z cpu5:2097412)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x1a, CmdSN 0x29 from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0.
    2018-05-14T01:27:48.911Z cpu4:2097181)ScsiDeviceIO: 3029: Cmd(0x459a40bfd540) 0x25, CmdSN 0x2c from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0.
    2018-05-14T01:28:28.457Z cpu1:2097178)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x9e, CmdSN 0x2d from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0.
    2018-05-14T01:28:28.910Z cpu2:2097368)WARNING: ScsiDevice: 7641: GetDeviceAttributes during periodic probe 'eui.0011223344550003': failed with I/O error

    This issue is resolved in this release.

  • PR 2119663: A VMFS6 datastore might report an incorrect out-of-space message

    A VMFS6 datastore might report that it is out of space incorrectly, due to stale cache entries. Space allocation reports are corrected with the automatic updates of the cache entries, but this fix prevents the error even before an update.

    This issue is resolved in this release.

  • PR 2129130: NTP servers removed by using the vSphere Web Client might remain in the NTP configuration file

    If you remove an NTP server from your configuration by using the vSphere Web Client, the server settings might remain in the /etc/ntp.conf file.

    This issue is resolved in this release.

  • PR 2078844: Virtual machines might stop responding during migration if you select Intel Merom or Penryn microprocessors from the VMware EVC Mode drop-down menu

    If you select Intel Merom or Penryn microprocessors from the VMware EVC Mode drop-down menu before migration of virtual machines from an ESXi 6.0 host to an ESXi 6.5 or ESXi 6.7 host, the virtual machines might stop responding.

    This issue is resolved in this release.

  • PR 2080427: VMkernel Observations (VOB) events might generate unnecessary device performance warnings

    The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:

    • 1. Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
    • 2. Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.

    This issue is resolved in this release.

  • PR 2137261: Exports of a large virtual machine by using the VMware Host Client might fail

    The export of a large virtual machine by using the VMware Host Client might fail or end with incomplete VMDK files, because the lease issued by the ExportVm method might expire before the file transfer finishes.

    This issue is resolved in this release.

  • PR 2149518: The hostd process might intermittently fail due to a high number of networking tasks

    A high number of networking tasks, specifically multiple calls to QueryNetworkHint(), might exceed the memory limit of the hostd process and cause it to fail intermittently.

    This issue is resolved in this release.

  • PR 2146535: Firmware event code logs might flood the vmkernel.log

    Drives that do not support Block Limits VPD page 0xb0 might generate event code logs that flood the vmkernel.log.

    This issue is resolved in this release.

  • PR 2071482: Dell OpenManage Integration for VMware vCenter (OMIVV) might fail to identify some Dell modular servers from the Integrated Dell Remote Access Controller (iDRAC)

    OMIVV relies on information from the iDRAC property hardware.systemInfo.otherIdentifyingInfo.ServiceTag to fetch the SerialNumber parameter for identifying some Dell modular servers. A mismatch in the serviceTag property might fail this integration.

    This issue is resolved in this release.

  • PR 2145089: vSphere Virtual Volumes might become unresponsive if an API for Storage Awareness (VASA) provider loses binding information from the database

    vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from the database. Hostd might also stop responding. You might see a fatal error message.

    This issue is resolved in this release. This fix prevents infinite loops in case of database binding failures.

  • PR 2146206: vSphere Virtual Volumes metadata might not be available to storage array vendor software

    vSphere Virtual Volumes metadata might be available only when a virtual machine starts running. As a result, storage array vendor software might fail to apply policies that impact the optimal layout of volumes during regular use and after a failover.

    This issue is resolved in this release. This fix makes vSphere Virtual Volumes metadata available at the time vSphere Virtual Volumes are configured, not when a virtual machine starts running.

  • PR 2096947: getTaskUpdate API calls to deleted task IDs might cause log spew and higher API bandwidth consumption

    If you use a VASA provider, you might see multiple getTaskUpdate calls to cancelled or deleted tasks. As a result, you might see a higher consumption of VASA bandwidth and a log spew.

    This issue is resolved in this release.

  • PR 2156841: Virtual machines using EFI and running Windows Server 2016 on AMD processors might stop responding during reboot

    Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or if the hardware version is 11 or later, or the guest OS is not Windows, or if the processors are Intel.

    This issue is resolved in this release.

  • PR 2163734: Enabling NetFlow might lead to high network latency

    If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.

    This issue is resolved in this release. You can further optimize NetFlow performance by setting the ipfixHashTableSize parameter of the ipfix module (-p 'ipfixHashTableSize=65536' -m ipfix) by using the CLI. To complete the task, reboot the ESXi host.

  • PR 2136004: Stale tickets in RAM disks might cause some ESXi hosts to stop responding

    Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust RAM disks inodes. This might cause some ESXi hosts to stop responding.

    This issue is resolved in this release.

  • PR 2152380: The esxtop command-line utility might not display the queue depth of devices correctly

    The esxtop command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes.

    This issue is resolved in this release.

  • PR 2133153: Reset to green functionality might not work as expected for hardware health alarms

    Manual resets of hardware health alarms to return to a normal state, by selecting Reset to green after right-clicking the Alarms sidebar pane, might not work as expected. The alarms might reappear after 90 seconds.

    This issue is resolved in this release. With this fix, you can customize the time after which the system generates a new alarm if the problem is not fixed. By using the advanced option /UserVars/HardwareHeathSyncTime, which you can inspect with esxcli system settings advanced list -o /UserVars/HardwareHeathSyncTime, you can disable the sync interval or set it to your preferred time (a sketch appears after this list).

  • PR 2131393: Presence Sensors in the Hardware Status tab might display status Unknown

    Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed a status Unknown. However, the Presence Sensors do not have a health state associated with them.

    This issue is resolved in this release. This fix filters components with Unknown status.

  • PR 2154913: VMware Tools might display incorrect status if you configure the /productLocker directory on a shared VMFS datastore

    If you configure the /productLocker directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools in the virtual machine might display an incorrect status Unsupported.

    This issue is resolved in this release.

  • PR 2137041: Encrypted vSphere vMotion might fail due to insufficient migration heap space

    For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.

    This issue is resolved in this release.

  • PR 2165537: Backup proxy virtual machines might go to invalid state during backup

    A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions, where the backup proxy virtual machine might be terminated because of this issue.

    In the hostd log, you might see content similar to:

    2018-06-08T10:33:14.150Z info hostd[15A03B70] [[email protected] sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
    2018-06-08T10:33:14.167Z error hostd[15640B70] [[email protected] sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
    2018-06-08T10:33:14.826Z info hostd[15640B70] [[email protected] sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
    2018-06-08T10:35:53.120Z error hostd[15A44B70] [[email protected] sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171

    In the vmkernel log, the content is similar to:

    2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
    2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand.

    This issue is resolved in this release.

  • PR 2157501: You might see false hardware health alarms due to disabled or idle Intelligent Platform Management Interface (IPMI) sensors

    Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.

    This issue is resolved in this release. This fix filters out such alarms.

  • PR 2167878: An ESXi host might fail with a purple diagnostic screen if you enable IPFIX in continuous heavy traffic

    When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.

    This issue is resolved in this release.

  • PR 2165567: Performance of long distance vSphere vMotion operations at high latency might deteriorate due to the max socket buffer size limit

    You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 10 GbE, due to the hard-coded socket buffer limit of 16 MB.

    This issue is resolved in this release. With this fix, you can configure the max socket buffer size parameter SB_MAX_ADJ.

  • PR 2082940: An ESXi host might become unresponsive when a user creates and adds a vmknic in the vSphereProvisioning netstack and uses it for NFC traffic

    When a user creates and adds a VMkernel NIC in the vSphereProvisioning netstack to use it for NFC traffic, the daemon that manages the NFC connections might fail to clean up old connections. This leads to the exhaustion of the allowed process limit. As a result, the host becomes unresponsive and unable to create processes for incoming SSH connections.

    This issue is resolved in this release. If you already face the issue, you must restart the NFCD daemon.

  • PR 2157817: You might not be able to create Virtual Flash File (VFFS) volumes by using a vSphere standard license

    An attempt to create VFFS volumes by using a vSphere standard license might fail with the error License not available to perform the operation as feature 'vSphere Flash Read Cache' is not licensed with this edition. This is because the system uses VMware vSphere Flash Read Cache permissions to check if provisioning of VFFS is allowed.

    This issue is resolved in this release. Since VFFS can be used outside of the Flash Read Cache, the check is removed.

  • PR 1220910: Multi-Writer Locks cannot be set for more than 8 ESXi hosts in a shared environment

    In a shared environment such as Raw Device Mapping (RDM) disks, you cannot use Multi-Writer Locks for virtual machines on more than 8 ESXi hosts. If you migrate a virtual machine to a ninth host, it might fail to power on with the error message Could not open xxxx.vmdk or one of the snapshots it depends on. (Too many users). This fix makes the advanced configuration option /VMFS3/GBLAllowMW visible. You can manually enable or disable multi-writer locks for more than 8 hosts by using generation-based locking.

    This issue is resolved in this release. For more details, see VMware knowledge base article 1034165. A sketch of enabling the option appears after this list.

  • PR 2187719: Third Party CIM providers installed in an ESXi host might fail to work properly with their external applications

    Third-party CIM providers, such as those from HPE and Stratus, might start normally but lack some expected functionality when working with their external applications.

    This issue is resolved in this release.

  • PR 2096948: An ESXi host might become unresponsive if datastore heartbeating stops prematurely while closing VMware vSphere VMFS

    An ESXi host might become unresponsive if datastore heartbeating stops prematurely while closing VMFS. As a result, the affinity manager cannot exit gracefully.

    This issue is resolved in this release.

  • PR 2166824: SNMP monitoring systems might report incorrect memory statistics on ESXi hosts

    SNMP monitoring systems might report memory statistics on ESXi hosts different from the values that free, top or vmstat commands report.

    This issue is resolved in this release. The fix aligns the formula that SNMP agents use with the one used by the other command-line tools.

  • PR 2165281: If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery

    If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:

    SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
    0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
    0x0000418017d8026c in nmp_DeviceStartLoop ([email protected]=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
    0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., [email protected]=...) at bora/modules/vmkernel/nmp/nmp_core.c:807

    This issue is resolved in this release.

  • PR 2166114: I/Os might fail on some paths of a device due to faulty switch error

    I/Os might fail with an error FCP_DATA_CNT_MISMATCH, which translates into a HOST_ERROR in the ESXi host, and indicates link errors or faulty switches on some paths of the device.

    This issue is resolved in this release. The fix adds a configuration option PSPDeactivateFlakyPath to PSP_RR to disable paths if I/Os continuously fail with HOST_ERROR and to allow the active paths to handle the operations. The configuration option is enabled by default and can be disabled if not necessary.

  • PR 2167098: All Paths Down (APD) is not triggered for LUNs behind IBM SAN Volume Controller (SVC) target even when no paths can service I/Os

    In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, in case of connection loss on active paths and if at the same time the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even as no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.

    This issue is resolved in this release. The fix is disabled by default. To enable the fix, set the ESXi configuration option /Scsi/ExtendAPDCondition by running esxcfg-advcfg -s 1 /Scsi/ExtendAPDCondition.

  • PR 2120346: Slow network performance of NUMA servers for devices using VMware NetQueue

    You might see a slowdown in the network performance of NUMA servers for devices using VMware NetQueue, due to a pinning threshold. It is observed when Rx queues are pinned to a NUMA node by changing the default value of the advanced configuration NetNetqNumaIOCpuPinThreshold.

    This issue is resolved in this release.

  • After running the ESXCLI diskgroup rebuild command, you might see the error message Throttled: BlkAttr not ready for disk

    vSAN creates blk attribute components after it creates a disk group. If the blk attrib component is missing from the API that supports the diskgroup rebuild command, you might see the following warning message in the vmkernel log: Throttled: BlkAttr not ready for disk.

    This issue is resolved in this release.

  • PR 2204439: Heavy I/Os issued to snapshot virtual machines using the SEsparse format might cause guest OS file system corruption

    Heavy I/Os issued to snapshot virtual machines using the SEsparse format might cause guest OS file system corruption, or data corruption in applications.

    This issue is resolved in this release.

  • PR 1996879: Unable to perform some host-related operations, such as place hosts into maintenance mode

    In previous releases, vSAN Observer runs under the init group. This group can occupy other groups' memory and starve the memory required for other host-related operations. To resolve the problem, you can run vSAN Observer under its own resource group.

    This issue is resolved in this release.

  • PR 2000367: Health check is unavailable: All hosts are in same network subnet

    In vSphere 6.7 Update 1, some vSAN stretched clusters can have hosts in different L3 networks. Therefore, the following health check is no longer valid, and has been removed: All hosts are in same network subnet.

    This issue is resolved in this release.

  • PR 2144043: API returns SystemError exception when calling querySyncingVsanObjects API on vmodl VsanSystemEx

    You might get a SystemError exception when calling the querySyncingVsanObjects API on vmodl VsanSystemEx, due to memory pressure caused by a large number of synchronizing objects. You might see an error message in the Recent Tasks pane of the vSphere Client.

    This issue is resolved in this release.

  • PR 2074369: vSAN does not mark a disk as degraded even after failed I/Os reported on the disk

    In some cases, vSAN takes a long time to mark a disk as degraded, even though I/O failures are reported by the disk and vSAN has stopped servicing any further I/Os from that disk.

    This issue is resolved in this release.
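
For reference, here is a minimal sketch of working with two of the advanced options mentioned in this list (PR 2133153 and PR 1220910). The option names are taken from the notes above; the values shown are illustrative assumptions, not recommendations, so review the referenced knowledge base articles before changing them on production hosts.

    # Inspect the hardware health alarm sync interval added for PR 2133153
    esxcli system settings advanced list -o /UserVars/HardwareHeathSyncTime

    # Enable multi-writer locks for more than 8 hosts (PR 1220910); see VMware knowledge base article 1034165 first
    esxcli system settings advanced set -o /VMFS3/GBLAllowMW -i 1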

ESXi670-201810202-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_vmw-ahci_1.2.3-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the vmw-ahci VIB.

ESXi670-201810203-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_vmkusb_0.1-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the vmkusb VIB.

ESXi670-201810204-UG
Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_lpfc_11.4.33.3-11vmw.670.1.28.10302608
PRs Fixed: 2095671, 2095681
CVE numbers: N/A

This patch updates the lpfc VIB to resolve the following issues:

  • PR 2095671: ESXi hosts might stop responding or fail with a purple diagnostic screen if you unmap LUNs from an EMC VPLEX storage system

    If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
    2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
    2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.

    This issue is resolved in this release.

  • PR 2095681: ESXi hosts might disconnect from an EMC VPLEX storage system upon heavy I/O load

    The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.

    This issue is resolved in this release.

ESXi670-201810205-UG
Patch Category: Bugfix
Patch Severity: Moderate
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-16vmw.670.1.28.10302608
PRs Fixed: 2124061
CVE numbers: N/A

This patch updates the lsu-hp-hpsa-plugin VIB to resolve the following issue:

  • PR 2124061: VMware vSAN with HPE ProLiant Gen9 Smart Array Controllers might not light locator LEDs on the correct disk

    In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lighted on the correct failed device.

    This issue is resolved in this release.

ESXi670-201810206-UG
Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_brcmfcoe_11.4.1078.5-11vmw.670.1.28.10302608
PRs Fixed: 2095671, 2095681
CVE numbers: N/A

This patch updates the brcmfcoe VIB to resolve the following issues:

  • PR 2095671: ESXi hosts might stop responding or fail with a purple diagnostic screen if you unmap LUNs from an EMC VPLEX storage system

    If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
    2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
    2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.

    This issue is resolved in this release.

  • PR 2095681: ESXi hosts might disconnect from an EMC VPLEX storage system upon heavy I/O load

    The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.

    This issue is resolved in this release.

ESXi670-201810207-UG
Patch Category: Enhancement
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_lsu-smartpqi-plugin_1.0.0-3vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the lsu-smartpqi-plugin VIB.

  • ESXi 6.7 Update 1 enables the Microsemi Smart PQI (smartpqi) LSU plug-in to support attached disk management operations on the HPE ProLiant Gen10 Smart Array Controller.

ESXi670-201810208-UG
Patch Category: Enhancement
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_lsu-intel-vmd-plugin_1.0.0-2vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the lsu-intel-vmd-plugin VIB.

  • With ESXi 6.7 Update 1, you can run the following storage ESXCLI commands to enable status LED on Intel VMD based NVMe SSDs without downloading Intel CLI:

    • esxcli storage core device set -l locator -d
    • esxcli storage core device set -l error -d
    • esxcli storage core device set -l off -d.
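
    As a usage sketch, the -d argument takes the device identifier reported by the host; the identifier below is a made-up placeholder:

    # Find the identifier of the Intel VMD based NVMe SSD
    esxcli storage core device list

    # Turn the locator LED on, and off again once the drive has been located
    esxcli storage core device set -l locator -d t10.NVMe____EXAMPLE_DEVICE
    esxcli storage core device set -l off -d t10.NVMe____EXAMPLE_DEVICE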
ESXi670-201810209-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_cpu-microcode_6.7.0-1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the cpu-microcode VIB.

ESXi670-201810210-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_elxnet_11.4.1095.0-5vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the elxnet VIB.

ESXi670-201810211-UG
Patch Category: Enhancement
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_nfnic_4.0.0.14-0vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the nfnic VIB.

  • ESXi 6.7 Update 1 enables nfnic driver support for Cisco UCS Fibre Channel over Ethernet (FCoE).

ESXi670-201810212-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_ne1000_0.8.4-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the ne1000 VIB.

ESXi670-201810213-UG
Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_nvme_1.2.2.17-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the nvme VIB.

  • ESXi 6.7 Update 1 adds support for NGUID in the NVMe driver.

ESXi670-201810214-UG
Patch Category: Bugfix
Patch Severity: Critical
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-13vmw.670.1.28.10302608
PRs Fixed: 2112193
CVE numbers: N/A

This patch updates the lsu-lsi-lsi-mr3-plugin VIB to resolve the following issue:

  • PR 2112193: An ESXi host and hostd might become unresponsive while using the lsi_mr3 driver

    An ESXi host and hostd might become unresponsive due to memory exhaustion caused by the lsi_mr3 disk serviceability plug-in.

    This issue is resolved in this release.

ESXi670-201810215-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_smartpqi_1.0.1.553-12vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the smartpqi VIB.

ESXi670-201810216-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_nvmxnet3_2.0.0.29-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the nvmxnet3 VIB.

ESXi670-201810217-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_i40en_1.3.1-22vmw.670.1.28.10302608
PRs Fixed: 2153838
CVE numbers: N/A

This patch updates the i40en VIB.

  • PR 2153838: Wake-on-LAN (WOL) might not work for NICs of the Intel X722 series in IPv6 networks

    The Intel native i40en driver in ESXi might not work with NICs of the X722 series in IPv6 networks and the Wake-on-LAN service might fail.

    This issue is resolved in this release.

ESXi670-201810218-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the ipmi-ipmi-devintf VIB.

ESXi670-201810219-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_ixgben_1.4.1-16vmw.670.1.28.10302608
PRs Fixed: 2142221
CVE numbers: N/A

This patch updates the ixgben VIB.

  • PR 2142221: You might see idle alert logs Device 10fb does not support flow control autoneg

    You might see colored alert logs Device 10fb does not support flow control autoneg, but this is a regular log message that reflects the flow control support status of certain NICs. On some OEM images, this log might display frequently, but it does not indicate any issues with the ESXi host.

    This issue is resolved in this release.

ESXi670-201810220-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_qedentv_2.0.6.4-10vmw.670.1.28.10302608
PRs Fixed: 2106647
CVE numbers: N/A

This patch updates the qedentv VIB to resolve the following issue:

  • PR 2106647: QLogic FastLinQ QL41xxx ethernet adapters might not create virtual functions after the single root I/O virtualization (SR-IOV) interface is enabled

    Some QLogic FastLinQ QL41xxx adapters might not create virtual functions after SR-IOV is enabled on the adapter and the host is rebooted.

    This issue is resolved in this release.

ESXi670-201810221-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_nenic_1.0.21.0-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the nenic VIB.

ESXi670-201810222-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_bnxtroce_20.6.101.0-20vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the bnxtroce VIB.

ESXi670-201810223-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the ipmi-ipmi-msghandler VIB.

ESXi670-201810224-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_ipmi-ipmi-si-drv_39.1-5vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the ipmi-ipmi-si-drv VIB.

ESXi670-201810225-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_iser_1.0.0.0-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the iser VIB.

ESXi670-201810226-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_vmkfcoe_1.0.0.1-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the vmkfcoe VIB.

ESXi670-201810227-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_lsi-mr3_7.702.13.00-5vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the lsi-mr3 VIB.

ESXi670-201810228-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_lsi-msgpt35_03.00.01.00-12vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the lsi-msgpt35 VIB.

ESXi670-201810229-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: No
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.34-1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the vmware-esx-esxcli-nvme-plugin VIB.

ESXi670-201810230-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_ntg3_4.1.3.2-1vmw.670.1.28.10302608
PRs Fixed: 2156445
CVE numbers: N/A

This patch updates the ntg3 VIB.

  • PR 2156445: Oversize packets might cause NICs using the ntg3 driver to temporarily stop sending packets

    On rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.

    This issue is resolved in this release.

ESXi670-201810231-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_nhpsa_2.0.22-3vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the nhpsa VIB.

ESXi670-201810232-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_lsi-msgpt3_16.00.01.00-3vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the lsi-msgpt3 VIB.

ESXi670-201810233-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_lsi-msgpt2_20.00.04.00-5vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the lsi-msgpt2 VIB.

ESXi670-201810234-UG
Patch Category: Bugfix
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMW_bootbank_mtip32xx-native_3.9.8-1vmw.670.1.28.10302608
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the mtip32xx-native VIB.

ESXi670-201810101-SG
Patch Category: Security
Patch Severity: Important
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_vsanhealth_6.7.0-0.28.10176879
  • VMware_bootbank_vsan_6.7.0-0.28.10176879
  • VMware_bootbank_esx-base_6.7.0-0.28.10176879
PRs Fixed: 2093391, 2099951, 2093433, 2098003, 2155702
CVE numbers: CVE-2018-6974

This patch updates the esx-base, vsan, and vsanhealth VIBs to resolve the following issues:

  • Update to libxml2 library

    The ESXi userworld libxml2 library is updated to version 2.9.7.

  • Update to the Network Time Protocol (NTP) daemon

    The NTP daemon is updated to version ntp-4.2.8p11.

  • Update of the SQLite database

    The SQLite database is updated to version 3.23.1.

  • Update to the libcurl library

    The ESXi userworld libcurl library is updated to libcurl-7.59.0.

  • Update to the OpenSSH

    The OpenSSH version is updated to 7.7p1.

  • Update to OpenSSL library

    The ESXi userworld OpenSSL library is updated to version openssl-1.0.2o.

  • Update to the Python library

    The Python third-party library is updated to version 3.5.5.

  • ESXi has an out-of-bounds read vulnerability in the SVGA device that might allow a guest to execute code on the host. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the identifier CVE-2018-6974 to this issue.
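
To confirm that this security bulletin is in effect on a host, you can check the installed VIB versions from the ESXi Shell or an SSH session. The following is a minimal sketch; the grep pattern is only illustrative:

  # List the VIBs delivered by this bulletin and verify their versions
  esxcli software vib list | grep -E 'esx-base|vsan'
  # Or query a single VIB directly
  esxcli software vib get -n esx-base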

ESXi670-201810102-SG
Patch Category: Security
Patch Severity: Moderate
Host Reboot Required: No
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_locker_tools-light_10.3.2.9925305-10176879
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the tools-light VIB.

  • The Windows pre-Vista ISO image for VMware Tools is no longer packaged with ESXi. The Windows pre-Vista ISO image is available for download by users who require it. For download information, see the Product Download page.

ESXi670-201810103-SG
Patch Category: Security
Patch Severity: Important
Host Reboot Required: No
Virtual Machine Migration or Shutdown Required: No
Affected Hardware: N/A
Affected Software: N/A
VIBs Included:
  • VMware_bootbank_esx-ui_1.30.0-9946814
PRs Fixed: N/A
CVE numbers: N/A

This patch updates the esx-ui VIB.

ESXi-6.7.0-20181002001-standard
Profile Name: ESXi-6.7.0-20181002001-standard
Build: For build information, see Patches Contained in this Release.
Vendor: VMware, Inc.
Release Date: October 16, 2018
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMW_bootbank_brcmfcoe_11.4.1078.5-11vmw.670.1.28.10302608
  • VMW_bootbank_nenic_1.0.21.0-1vmw.670.1.28.10302608
  • VMW_bootbank_vmw-ahci_1.2.3-1vmw.670.1.28.10302608
  • VMware_bootbank_esx-base_6.7.0-1.28.10302608
  • VMware_bootbank_vsan_6.7.0-1.28.10290435
  • VMware_bootbank_vsanhealth_6.7.0-1.28.10290721
  • VMware_bootbank_esx-update_6.7.0-1.28.10302608
  • VMware_bootbank_lsu-intel-vmd-plugin_1.0.0-2vmw.670.1.28.10302608
  • VMW_bootbank_lsi-msgpt3_16.00.01.00-3vmw.670.1.28.10302608
  • VMW_bootbank_smartpqi_1.0.1.553-12vmw.670.1.28.10302608
  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.34-1.28.10302608
  • VMW_bootbank_nvme_1.2.2.17-1vmw.670.1.28.10302608
  • VMW_bootbank_bnxtroce_20.6.101.0-20vmw.670.1.28.10302608
  • VMW_bootbank_vmkusb_0.1-1vmw.670.1.28.10302608
  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-16vmw.670.1.28.10302608
  • VMW_bootbank_ne1000_0.8.4-1vmw.670.1.28.10302608
  • VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-13vmw.670.1.28.10302608
  • VMW_bootbank_i40en_1.3.1-22vmw.670.1.28.10302608
  • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.670.1.28.10302608
  • VMware_bootbank_lsu-smartpqi-plugin_1.0.0-3vmw.670.1.28.10302608
  • VMW_bootbank_nenic_1.0.21.0-1vmw.670.1.28.10302608
  • VMW_bootbank_lsi-msgpt2_20.00.04.00-5vmw.670.1.28.10302608
  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.34-1.28.10302608
  • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.670.1.28.10302608
  • VMware_bootbank_vsanhealth_6.7.0-1.28.10290721
  • VMW_bootbank_nhpsa_2.0.22-3vmw.670.1.28.10302608
  • VMware_bootbank_cpu-microcode_6.7.0-1.28.10302608
  • VMW_bootbank_lsi-msgpt3_16.00.01.00-3vmw.670.1.28.10302608
  • VMW_bootbank_bnxtroce_20.6.101.0-20vmw.670.1.28.10302608
  • VMW_bootbank_mtip32xx-native_3.9.8-1vmw.670.1.28.10302608
  • VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-13vmw.670.1.28.10302608
  • VMW_bootbank_nhpsa_2.0.22-3vmw.670.1.28.10302608
  • VMW_bootbank_lsi-msgpt2_20.00.04.00-5vmw.670.1.28.10302608
  • VMW_bootbank_vmkfcoe_1.0.0.1-1vmw.670.1.28.10302608
  • VMW_bootbank_nfnic_4.0.0.14-0vmw.670.1.28.10302608
  • VMW_bootbank_mtip32xx-native_3.9.8-1vmw.670.1.28.10302608
  • VMware_bootbank_lsu-smartpqi-plugin_1.0.0-3vmw.670.1.28.10302608
  • VMW_bootbank_lsi-mr3_7.702.13.00-5vmw.670.1.28.10302608
  • VMW_bootbank_iser_1.0.0.0-1vmw.670.1.28.10302608
  • VMW_bootbank_nvmxnet3_2.0.0.29-1vmw.670.1.28.10302608
  • VMW_bootbank_lsi-msgpt35_03.00.01.00-12vmw.670.1.28.10302608
  • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.670.1.28.10302608
  • VMW_bootbank_elxnet_11.4.1095.0-5vmw.670.1.28.10302608
  • VMW_bootbank_ixgben_1.4.1-16vmw.670.1.28.10302608
  • VMW_bootbank_ntg3_4.1.3.2-1vmw.670.1.28.10302608
  • VMW_bootbank_ipmi-ipmi-si-drv_39.1-5vmw.670.1.28.10302608
  • VMW_bootbank_qedentv_2.0.6.4-10vmw.670.1.28.10302608
  • VMW_bootbank_i40en_1.3.1-22vmw.670.1.28.10302608
  • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.670.1.28.10302608
  • VMware_bootbank_cpu-microcode_6.7.0-1.28.10302608
  • VMW_bootbank_lpfc_11.4.33.3-11vmw.670.1.28.10302608
PRs Fixed: 1220910, 2036262, 2036302, 2039186, 2046226, 2057603, 2058908, 2066026, 2071482, 2072977, 2078138, 2078782, 2078844, 2079807, 2080427, 2082405, 2083581, 2084722, 2085117, 2086803, 2086807, 2089047, 2096126, 2096312, 2096947, 2096948, 2097791, 2098170, 2098868, 2103579, 2103981, 2106747, 2107087, 2107333, 2110971, 2118588, 2119610, 2119663, 2120346, 2126919, 2128933, 2129130, 2129181, 2131393, 2131407, 2133153, 2133588, 2136004, 2137261, 2139127, 2145089, 2146206, 2146535, 2149518, 2152380, 2154913, 2156841, 2157501, 2157817, 2163734, 2165281, 2165537, 2165567, 2166114, 2167098, 2167878, 2173810, 2186253, 2187008, 2204439, 2095671, 2095681, 2124061, 2095671, 2095681, 2093493, 2103337, 2112361, 2099772, 2137374, 2112193, 2153838, 2142221, 2106647, 2137374, 2156445
Related CVE numbers: N/A
  • This patch updates the following issues:
    • If you manually delete the support bundle folder of a virtual machine downloaded in the /scratch/downloads directory of an ESXi host, the hostd service might fail. The failure occurs when hostd automatically tries to delete folders in the /scratch/downloads directory one hour after the files are created.

    • If you edit the configuration file of an ESXi host to enable a smart card reader as a passthrough device, you might not be able to use the reader if you have also enabled the feature Support vMotion while a device is connected.

    • In vSAN environments, VMkernel logs that record activities related to virtual machines and ESXi might be flooded with the message Unable to register file system.

    • If you use the vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration, and display the storage allocation of the virtual machine as unchanged.

    • If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the Variable Window Extension option in WDS, the virtual machine might boot extremely slowly.

    • Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.

    • If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.7 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.

    • If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error A specified parameter was not correct: spec.deviceChange.device due to a deleted digest file after CBRC was disabled. An alert Virtual machine disks consolidation is needed. is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled.

    • An ESXi host might fail with a purple diagnostic screen due to a memory allocation problem.

    • An incorrect calculation in the VMkernel causes the esxtop utility to report incorrect statistics for the average device latency per command (DAVG/cmd) and the average ESXi VMkernel latency per command (KAVG/cmd) on LUNs with VMware vSphere Storage APIs Array Integration (VAAI).

    • This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.

    • After cold migration or a vSphere High Availability failover, a virtual machine might fail to connect to a distributed port group, because the port is deleted before the virtual machine powers on.

    • ESXi hosts might not reflect the MAXIMUM TRANSFER LENGTH parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:

      2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev 'naa.514f0c5d38200035' failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

    • This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for HITACHI OPEN-v type storage arrays with Asymmetric Logical Unit Access (ALUA) support.

      The fix also sets Storage Array Type Plug-in (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_off as default for HITACHI OPEN-v type storage arrays without ALUA support.

    • If both the target virtual machine and the proxy virtual machine are enabled for CBRC, the quiesced snapshot disk cannot be hot-added to the proxy virtual machine, because the digest of the snapshot disk in the target virtual machine is disabled during the snapshot creation. As a result, the virtual machine backup process fails.

    • The syslog.log file might be repeatedly populated with messages related to calls to the HostImageConfigManager managed object.

    • While extending the coredump partition of an ESXi host, vCenter Server might report an alarm that no coredump target is configured, because before increasing the partition size, the existing coredump partition is temporarily deactivated.

    • Due to a race condition in the VSCSI, if you use EMC RecoverPoint, an ESXi host might fail with a purple diagnostic screen during the shutdown or power off of a virtual machine.

    • vSphere Virtual Volumes set with VMW_VVolType metadata key Other and VMW_VVolTypeHint metadata key Sidecar might not get VMW_VmID metadata key to the associated virtual machines and cannot be tracked by using IDs.

    • This fix sets Storage Array Type Plugin (SATP) claim rules for Solidfire SSD SAN storage arrays to VMW_SATP_DEFAULT_AA and the Path Selection Policy (PSP) to VMW_PSP_RR with 10 I/O operations per second by default to achieve optimal performance.

    • A fast reboot of a system with an LSI controller or a reload of an LSI driver, such as lsi_mr3, might put the disks behind the LSI controller in an offline state. The LSI controller firmware sends a SCSI STOP UNIT command during unload, but a corresponding SCSI START UNIT command might not be issued during reload. If disks go offline, all datastores hosted on these disks become unresponsive or inaccessible.

    • If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has either of the Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC) or I/O filters enabled, the migration might cause issues with any of the features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache I/O filters, corrupted replication I/O filters, and disk corruption, when cache I/O filters are configured in write-back mode. You might also see issues with the virtual machine encryption.

    • If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might only partially be processed at VMFSsparse level, but upper layers, such as I/O Filters, might presume the transfer is successful. This might lead to data inconsistency. This fix sets a transient error status for reference from upper layers if an I/O is complete.

    • SCSI INQUIRY data is cached at the Pluggable Storage Architecture (PSA) layer for RDM LUNs and response for subsequent SCSI INQUIRY commands might be returned from the cache instead of querying the LUN. To avoid fetching cached SCSI INQUIRY data, apart from modifying the .vmx file of a virtual machine with RDM, with ESXi 6.7 Update 1 you can ignore the SCSI INQUIRY also by using the ESXCLI command

      esxcli storage core device inquirycache set --device <device-id> --ignore true.

      With the ESXCLI option, a reboot of the virtual machine is not necessary.

    • This fix sets Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for Tegile IntelliFlash storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets Storage Array Type Plugin (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_off as default for Tegile IntelliFlash storage arrays without ALUA support.

    • During ESXi boot, if the commands issued during the initial device discovery fail with ASYMMETRIC ACCESS STATE CHANGE UA, the path failover might take longer because the commands are blocked within the ESXi host. This might lead to a longer ESXi boot time.
      You might see logs similar to:
      2018-05-14T01:26:28.464Z cpu1:2097770)NMP: nmp_ThrottleLogForDevice:3689: Cmd 0x1a (0x459a40bfbec0, 0) to dev 'eui.0011223344550003' on path 'vmhba64:C0:T0:L3' Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x6 0x2a 0x6. Act:FAILOVER 2018-05-14T01:27:08.453Z cpu5:2097412)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x1a, CmdSN 0x29 from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:27:48.911Z cpu4:2097181)ScsiDeviceIO: 3029: Cmd(0x459a40bfd540) 0x25, CmdSN 0x2c from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:28:28.457Z cpu1:2097178)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x9e, CmdSN 0x2d from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:28:28.910Z cpu2:2097368)WARNING: ScsiDevice: 7641: GetDeviceAttributes during periodic probe'eui.0011223344550003': failed with I/O error

    • A VMFS6 datastore might report that it is out of space incorrectly, due to stale cache entries. Space allocation reports are corrected with the automatic updates of the cache entries, but this fix prevents the error even before an update.

    • If you remove an NTP server from your configuration by using the vSphere Web Client, the server settings might remain in the /etc/ntp.conf file.

    • If you select Intel Merom or Penryn microprocessors from the VMware EVC Mode drop-down menu before migration of virtual machines from an ESXi 6.0 host to an ESXi 6.5 or ESXi 6.7 host, the virtual machines might stop responding.

    • The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:

      • 1. Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
      • 2. Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.
    • The export of a large virtual machine by using the VMware Host Client might fail or end with incomplete VMDK files, because the lease issued by the ExportVm method might expire before the file transfer finishes.

    • A high number of networking tasks, specifically multiple calls to QueryNetworkHint(), might exceed the memory limit of the hostd process and make it fail intermittently.

    • Drives that do not support Block Limits VPD page 0xb0 might generate event code logs that flood the vmkernel.log.

    • OMIVV relies on information from the iDRAC property hardware.systemInfo.otherIdentifyingInfo.ServiceTag to fetch the SerialNumber parameter for identifying some Dell modular servers. A mismatch in the serviceTag property might cause this integration to fail.

    • vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from the database. Hostd might also stop responding. You might see a fatal error message.

    • vSphere Virtual Volumes metadata might be available only when a virtual machine starts running. As a result, storage array vendor software might fail to apply policies that impact the optimal layout of volumes during regular use and after a failover.

    • If you use a VASA provider, you might see multiple getTaskUpdate calls to cancelled or deleted tasks. As a result, you might see a higher consumption of VASA bandwidth and a log spew.

    • Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or if the hardware version is 11 or later, or the guest OS is not Windows, or if the processors are Intel.

    • If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.

    • Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust RAM disk inodes. This might cause some ESXi hosts to stop responding.

    • The esxtop command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes.

    • Manual resets of hardware health alarms to return to a normal state, by selecting Reset to green after right-clicking the Alarms sidebar pane, might not work as expected. The alarms might reappear after 90 seconds.

    • Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed a status Unknown. However, the Presence Sensors do not have a health state associated with them.

    • If you configure the /productLocker directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools in the virtual machine might display an incorrect status Unsupported.

    • For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.

    • A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions, where the backup proxy virtual machine might be terminated because of this issue.

      In the hostd log, you might see content similar to:

      2018-06-08T10:33:14.150Z info hostd[15A03B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
      2018-06-08T10:33:14.167Z error hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
      2018-06-08T10:33:14.826Z info hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
      2018-06-08T10:35:53.120Z error hostd[15A44B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171

      In the vmkernel log, the content is similar to:

      2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
      2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand.

    • Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.

    • When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.

    • You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 1 GbE and higher, due to the hard-coded socket buffer limit of 16 MB.

    • An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:

      #PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
      ...
      0x451a1b81b9d0:[0x41802e62abc1][email protected](tcpip4)#+0x161 stack: 0x430e0593c9e8
      0x451a1b81ba20:[0x41802e62bb57][email protected](tcpip4)#+0x7fc stack: 0x30
      0x451a1b81bb20:[0x41802e60d7f8][email protected](tcpip4)#+0xbe1 stack: 0x30
      0x451a1b81bcf0:[0x41802e621d3b][email protected](tcpip4)#+0x770 stack: 0x451a00000000

    • When a user creates and adds a VMkernel NIC in the vSphereProvisioning netstack to use it for NFC traffic, the daemon that manages the NFC connections might fail to clean up old connections. This leads to the exhaustion of the allowed process limit. As a result, the host becomes unresponsive and unable to create processes for incoming SSH connections.

    • An attempt to create VFFS volumes by using a vSphere standard license might fail with the error License not available to perform the operation as feature 'vSphere Flash Read Cache' is not licensed with this edition. This is because the system uses VMware vSphere Flash Read Cache permissions to check if provisioning of VFFS is allowed.

    • In a shared environment such as Raw Device Mapping (RDM) disks, you cannot use Multi-Writer Locks for virtual machines on more than 8 ESXi hosts. If you migrate a virtual machine to a ninth host, it might fail to power on with the error message Could not open xxxx.vmdk or one of the snapshots it depends on. (Too many users). This fix makes the advanced configuration option /VMFS3/GBLAllowMW visible. You can manually enable or disable multi-writer locks for more than 8 hosts by using generation-based locking, as shown in the sketch after this list.

    • Third party CIM providers, by HPE and Stratus for instance, might start normally but lack some expected functionality when working with their external applications.

    • An ESXi host might become unresponsive if datastore heartbeating stops prematurely while closing VMFS. As a result, the affinity manager cannot exit gracefully.

    • SNMP monitoring systems might report memory statistics on ESXi hosts different from the values that free, top or vmstat commands report.

    • If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:

      SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
      0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
      0x0000418017d8026c in nmp_DeviceStartLoop ([email protected]=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
      0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., [email protected]=...) at bora/modules/vmkernel/nmp/nmp_core.c:807

    • The Cluster Compliance status of a vSAN enabled cluster might display as Not compliant, because the compliance check might not recognize vSAN datastores as shared datastores.

    • I/Os might fail with an error FCP_DATA_CNT_MISMATCH, which translates into a HOST_ERROR in the ESXi host, and indicates link errors or faulty switches on some paths of the device.

    • In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, in case of connection loss on active paths and if at the same time the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even as no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.

    • You might see a slowdown in the network performance of NUMA servers for devices using VMware NetQueue, due to a pinning threshold. It is observed when Rx queues are pinned to a NUMA node by changing the default value of the advanced configuration NetNetqNumaIOCpuPinThreshold.

    • Some hosts might fail to boot after an upgrade to vSAN 6.7 Update 1 from a previous release. This problem occurs on NUMA-enabled servers in a vSAN cluster with 3, 4, or 5 disk groups per host. The host might time out while creating the last disk group.

    • vSAN creates blk attribute components after it creates a disk group. If the blk attrib component is missing from the API that supports the diskgroup rebuild command, you might see the following warning message in the vmkernel log: Throttled: BlkAttr not ready for disk.

    • When all disk groups are removed from a vSAN cluster, the vSphere Client displays a warning similar to the following:
      VMware vSAN cluster in datacenter does not have capacity
      After you disable vSAN on the cluster, the warning message persists.

    • If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
      2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
      2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.

    • The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.

    • In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lighted on the correct failed device.

    • If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
      2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
      2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.

    • The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.

    • ESXi 6.7 Update 1 enables the Microsemi Smart PQI (smartpqi) LSU plug-in to support attached disk management operations on the HPE ProLiant Gen10 Smart Array Controller.

    • With ESXi 6.7 Update 1, you can run the following storage ESXCLI commands to enable status LED on Intel VMD based NVMe SSDs without downloading Intel CLI:

      • esxcli storage core device set -l locator -d
      • esxcli storage core device set -l error -d
      • esxcli storage core device set -l off -d
    • ESXi 6.7 Update 1 enables nfnic driver support for Cisco UCS Fibre Channel over Ethernet (FCoE).
    • ESXi 6.7 Update 1 adds support for NGUID in the NVMe driver.
    • An ESXi host and hostd might become unresponsive due to memory exhaustion caused by the lsi_mr3 disk serviceability plug-in.
    • The Intel native i40en driver in ESXi might not work with NICs of the X722 series in IPv6 networks and the Wake-on-LAN service might fail.
    • You might see colored logs Device 10fb does not support flow control autoneg as alerts, but this is a regular log that reflects the status of flow control support of certain NICs. On some OEM images, this log might display frequently, but it does not indicate any issues with the ESXi host.
    • Some QLogic FastLinQ QL41xxx adapters might not create virtual functions after SR-IOV is enabled on the adapter and the host is rebooted.
    • On rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.
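
For the multi-writer lock item above, the following is a minimal sketch of toggling the /VMFS3/GBLAllowMW advanced option from the ESXi Shell; treat it as an illustration and confirm the option is exposed on your build before changing it:

  # Show the current value of the advanced option exposed by this release
  esxcli system settings advanced list -o /VMFS3/GBLAllowMW
  # Enable generation-based multi-writer locking for more than 8 hosts (use -i 0 to disable again)
  esxcli system settings advanced set -o /VMFS3/GBLAllowMW -i 1

To move a host to this image profile with the ESXCLI software profile commands, a sketch along the following lines can be used; the depot path is only an example and assumes the offline bundle has already been uploaded to a datastore:

  # Preview the VIB changes first, then apply the profile and reboot the host
  esxcli software profile update --dry-run -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update01.zip -p ESXi-6.7.0-20181002001-standard
  esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.7-6.7_update01.zip -p ESXi-6.7.0-20181002001-standard
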
ESXi-6.7.0-20181002001-no-tools
Profile Name: ESXi-6.7.0-20181002001-no-tools
Build: For build information, see Patches Contained in this Release.
Vendor: VMware, Inc.
Release Date: October 16, 2018
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMW_bootbank_brcmfcoe_11.4.1078.5-11vmw.670.1.28.10302608
  • VMW_bootbank_nenic_1.0.21.0-1vmw.670.1.28.10302608
  • VMW_bootbank_vmw-ahci_1.2.3-1vmw.670.1.28.10302608
  • VMware_bootbank_esx-base_6.7.0-1.28.10302608
  • VMware_bootbank_vsan_6.7.0-1.28.10290435
  • VMware_bootbank_vsanhealth_6.7.0-1.28.10290721
  • VMware_bootbank_esx-update_6.7.0-1.28.10302608
  • VMware_bootbank_lsu-intel-vmd-plugin_1.0.0-2vmw.670.1.28.10302608
  • VMW_bootbank_lsi-msgpt3_16.00.01.00-3vmw.670.1.28.10302608
  • VMW_bootbank_smartpqi_1.0.1.553-12vmw.670.1.28.10302608
  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.34-1.28.10302608
  • VMW_bootbank_nvme_1.2.2.17-1vmw.670.1.28.10302608
  • VMW_bootbank_bnxtroce_20.6.101.0-20vmw.670.1.28.10302608
  • VMW_bootbank_vmkusb_0.1-1vmw.670.1.28.10302608
  • VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-16vmw.670.1.28.10302608
  • VMW_bootbank_ne1000_0.8.4-1vmw.670.1.28.10302608
  • VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-13vmw.670.1.28.10302608
  • VMW_bootbank_i40en_1.3.1-22vmw.670.1.28.10302608
  • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.670.1.28.10302608
  • VMware_bootbank_lsu-smartpqi-plugin_1.0.0-3vmw.670.1.28.10302608
  • VMW_bootbank_nenic_1.0.21.0-1vmw.670.1.28.10302608
  • VMW_bootbank_lsi-msgpt2_20.00.04.00-5vmw.670.1.28.10302608
  • VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.34-1.28.10302608
  • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.670.1.28.10302608
  • VMware_bootbank_vsanhealth_6.7.0-1.28.10290721
  • VMW_bootbank_nhpsa_2.0.22-3vmw.670.1.28.10302608
  • VMware_bootbank_cpu-microcode_6.7.0-1.28.10302608
  • VMW_bootbank_lsi-msgpt3_16.00.01.00-3vmw.670.1.28.10302608
  • VMW_bootbank_bnxtroce_20.6.101.0-20vmw.670.1.28.10302608
  • VMW_bootbank_mtip32xx-native_3.9.8-1vmw.670.1.28.10302608
  • VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-13vmw.670.1.28.10302608
  • VMW_bootbank_nhpsa_2.0.22-3vmw.670.1.28.10302608
  • VMW_bootbank_lsi-msgpt2_20.00.04.00-5vmw.670.1.28.10302608
  • VMW_bootbank_vmkfcoe_1.0.0.1-1vmw.670.1.28.10302608
  • VMW_bootbank_nfnic_4.0.0.14-0vmw.670.1.28.10302608
  • VMW_bootbank_mtip32xx-native_3.9.8-1vmw.670.1.28.10302608
  • VMware_bootbank_lsu-smartpqi-plugin_1.0.0-3vmw.670.1.28.10302608
  • VMW_bootbank_lsi-mr3_7.702.13.00-5vmw.670.1.28.10302608
  • VMW_bootbank_iser_1.0.0.0-1vmw.670.1.28.10302608
  • VMW_bootbank_nvmxnet3_2.0.0.29-1vmw.670.1.28.10302608
  • VMW_bootbank_lsi-msgpt35_03.00.01.00-12vmw.670.1.28.10302608
  • VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.670.1.28.10302608
  • VMW_bootbank_elxnet_11.4.1095.0-5vmw.670.1.28.10302608
  • VMW_bootbank_ixgben_1.4.1-16vmw.670.1.28.10302608
  • VMW_bootbank_ntg3_4.1.3.2-1vmw.670.1.28.10302608
  • VMW_bootbank_ipmi-ipmi-si-drv_39.1-5vmw.670.1.28.10302608
  • VMW_bootbank_qedentv_2.0.6.4-10vmw.670.1.28.10302608
  • VMW_bootbank_i40en_1.3.1-22vmw.670.1.28.10302608
  • VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.670.1.28.10302608
  • VMware_bootbank_cpu-microcode_6.7.0-1.28.10302608
  • VMW_bootbank_lpfc_11.4.33.3-11vmw.670.1.28.10302608
PRs Fixed: 1220910, 2036262, 2036302, 2039186, 2046226, 2057603, 2058908, 2066026, 2071482, 2072977, 2078138, 2078782, 2078844, 2079807, 2080427, 2082405, 2083581, 2084722, 2085117, 2086803, 2086807, 2089047, 2096126, 2096312, 2096947, 2096948, 2097791, 2098170, 2098868, 2103579, 2103981, 2106747, 2107087, 2107333, 2110971, 2118588, 2119610, 2119663, 2120346, 2126919, 2128933, 2129130, 2129181, 2131393, 2131407, 2133153, 2133588, 2136004, 2137261, 2139127, 2145089, 2146206, 2146535, 2149518, 2152380, 2154913, 2156841, 2157501, 2157817, 2163734, 2165281, 2165537, 2165567, 2166114, 2167098, 2167878, 2173810, 2186253, 2187008, 2204439, 2095671, 2095681, 2124061, 2095671, 2095681, 2093493, 2103337, 2112361, 2099772, 2137374, 2112193, 2153838, 2142221, 2106647, 2137374, 2156445
Related CVE numbers: N/A
  • This patch updates the following issues:
    • If you manually delete the support bundle folder of a virtual machine downloaded in the /scratch/downloads directory of an ESXi host, the hostd service might fail. The failure occurs when hostd automatically tries to delete folders in the /scratch/downloads directory one hour after the files are created.

    • If you edit the configuration file of an ESXi host to enable a smart card reader as a passthrough device, you might not be able to use the reader if you have also enabled the feature Support vMotion while a device is connected.

    • In vSAN environments, VMkernel logs that record activities related to virtual machines and ESXi might be flooded with the message Unable to register file system.

    • If you use the vSphere Web Client to increase the disk size of a virtual machine, the vSphere Web Client might not fully reflect the reconfiguration, and display the storage allocation of the virtual machine as unchanged.

    • If you attempt to PXE boot a virtual machine that uses EFI firmware, with a vmxnet3 network adapter and WDS, and you have not disabled the Variable Window Extension option in WDS, the virtual machine might boot extremely slowly.

    • Due to issues with Apple USB, devices with an iOS version later than iOS 11 cannot connect to virtual machines with an OS X version later than OS X 10.12.

    • If you enable the Stateful Install feature on a host profile, and the management VMkernel NIC is connected to a distributed virtual switch, applying the host profile to another ESXi 6.7 host by using vSphere Auto Deploy might fail during a PXE boot. The host remains in maintenance mode.

    • If a virtual machine has snapshots taken by using CBRC and later CBRC is disabled, disk consolidation operations might fail with an error A specified parameter was not correct: spec.deviceChange.device due to a deleted digest file after CBRC was disabled. An alert Virtual machine disks consolidation is needed. is displayed until the issue is resolved. This fix prevents the issue, but it might still exist for virtual machines that have snapshots taken with CBRC enabled and later disabled.

    • An ESXi host might fail with a purple diagnostic screen due to a memory allocation problem.

    • An incorrect calculation in the VMkernel causes the esxtop utility to report incorrect statistics for the average device latency per command (DAVG/cmd) and the average ESXi VMkernel latency per command (KAVG/cmd) on LUNs with VMware vSphere Storage APIs Array Integration (VAAI).

    • This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for the following DELL MD Storage array models: MD32xx, MD32xxi, MD36xxi, MD36xxf, MD34xx, MD38xxf, and MD38xxi.

    • After cold migration or a vSphere High Availability failover, a virtual machine might fail to connect to a distributed port group, because the port is deleted before the virtual machine powers on.

    • ESXi hosts might not reflect the MAXIMUM TRANSFER LENGTH parameter reported by the SCSI device in the Block Limits VPD page. As a result, I/O commands issued with a transfer size greater than the limit might fail with a similar log:

      2017-01-24T12:09:40.065Z cpu6:1002438588)ScsiDeviceIO: SCSICompleteDeviceCommand:3033: Cmd(0x45a6816299c0) 0x2a, CmdSN 0x19d13f from world 1001390153 to dev 'naa.514f0c5d38200035' failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

    • This fix sets Storage Array Type Plug-in (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for HITACHI OPEN-v type storage arrays with Asymmetric Logical Unit Access (ALUA) support.

      The fix also sets Storage Array Type Plug-in (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_off as default for HITACHI OPEN-v type storage arrays without ALUA support.

    • If both the target virtual machine and the proxy virtual machine are enabled for CBRC, the quiesced snapshot disk cannot be hot-added to the proxy virtual machine, because the digest of the snapshot disk in the target virtual machine is disabled during the snapshot creation. As a result, the virtual machine backup process fails.

    • The syslog.log file might be repeatedly populated with messages related to calls to the HostImageConfigManager managed object.

    • While extending the coredump partition of an ESXi host, vCenter Server might report an alarm that no coredump target is configured, because before increasing the partition size, the existing coredump partition is temporarily deactivated.

    • Due to a race condition in the VSCSI, if you use EMC RecoverPoint, an ESXi host might fail with a purple diagnostic screen during the shutdown or power off of a virtual machine.

    • vSphere Virtual Volumes set with VMW_VVolType metadata key Other and VMW_VVolTypeHint metadata key Sidecar might not get VMW_VmID metadata key to the associated virtual machines and cannot be tracked by using IDs.

    • This fix sets Storage Array Type Plugin (SATP) claim rules for Solidfire SSD SAN storage arrays to VMW_SATP_DEFAULT_AA and the Path Selection Policy (PSP) to VMW_PSP_RR with 10 I/O operations per second by default to achieve optimal performance.

    • A fast reboot of a system with an LSI controller or a reload of an LSI driver, such as lsi_mr3, might put the disks behind the LSI controller in an offline state. The LSI controller firmware sends a SCSI STOP UNIT command during unload, but a corresponding SCSI START UNIT command might not be issued during reload. If disks go offline, all datastores hosted on these disks become unresponsive or inaccessible.

    • If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has either of the Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC) or I/O filters enabled, the migration might cause issues with any of the features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache I/O filters, corrupted replication I/O filters, and disk corruption, when cache I/O filters are configured in write-back mode. You might also see issues with the virtual machine encryption.

    • If a virtual machine is on a VMFSsparse snapshot, I/Os issued to the virtual machine might only partially be processed at VMFSsparse level, but upper layers, such as I/O Filters, might presume the transfer is successful. This might lead to data inconsistency. This fix sets a transient error status for reference from upper layers if an I/O is complete.

    • SCSI INQUIRY data is cached at the Pluggable Storage Architecture (PSA) layer for RDM LUNs and response for subsequent SCSI INQUIRY commands might be returned from the cache instead of querying the LUN. To avoid fetching cached SCSI INQUIRY data, apart from modifying the .vmx file of a virtual machine with RDM, with ESXi 6.7 Update 1 you can ignore the SCSI INQUIRY also by using the ESXCLI command

      esxcli storage core device inquirycache set --device <device-id> --ignore true.

      With the ESXCLI option, a reboot of the virtual machine is not necessary.

    • This fix sets Storage Array Type Plugin (SATP) to VMW_SATP_ALUA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_on as default for Tegile IntelliFlash storage arrays with Asymmetric Logical Unit Access (ALUA) support. The fix also sets Storage Array Type Plugin (SATP) to VMW_SATP_DEFAULT_AA, Path Selection Policy (PSP) to VMW_PSP_RR and Claim Options to tpgs_off as default for Tegile IntelliFlash storage arrays without ALUA support.

    • During ESXi boot, if the commands issued during the initial device discovery fail with ASYMMETRIC ACCESS STATE CHANGE UA, the path failover might take longer because the commands are blocked within the ESXi host. This might lead to a longer ESXi boot time.
      You might see logs similar to:
      2018-05-14T01:26:28.464Z cpu1:2097770)NMP: nmp_ThrottleLogForDevice:3689: Cmd 0x1a (0x459a40bfbec0, 0) to dev 'eui.0011223344550003' on path 'vmhba64:C0:T0:L3' Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x6 0x2a 0x6. Act:FAILOVER 2018-05-14T01:27:08.453Z cpu5:2097412)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x1a, CmdSN 0x29 from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:27:48.911Z cpu4:2097181)ScsiDeviceIO: 3029: Cmd(0x459a40bfd540) 0x25, CmdSN 0x2c from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:28:28.457Z cpu1:2097178)ScsiDeviceIO: 3029: Cmd(0x459a40bfbec0) 0x9e, CmdSN 0x2d from world 0 to dev 'eui.0011223344550003' failed H:0x5 D:0x0 P:0x0 Invalid sense data: 0x0 0x0 0x0. 2018-05-14T01:28:28.910Z cpu2:2097368)WARNING: ScsiDevice: 7641: GetDeviceAttributes during periodic probe'eui.0011223344550003': failed with I/O error

    • A VMFS6 datastore might report that it is out of space incorrectly, due to stale cache entries. Space allocation reports are corrected with the automatic updates of the cache entries, but this fix prevents the error even before an update.

    • If you remove an NTP server from your configuration by using the vSphere Web Client, the server settings might remain in the /etc/ntp.conf file.

    • If you select Intel Merom or Penryn microprocessors from the VMware EVC Mode drop-down menu before migration of virtual machines from an ESXi 6.0 host to an ESXi 6.5 or ESXi 6.7 host, the virtual machines might stop responding.

    • The following two VOB events might be generated due to variations in the I/O latency in a storage array, but they do not report an actual problem in virtual machines:

      • 1. Device naa.xxx performance has deteriorated. I/O latency increased from average value of 4114 microseconds to 84518 microseconds.
      • 2. Device naa.xxx performance has improved. I/O latency reduced from 346115 microseconds to 67046 microseconds.
    • The export of a large virtual machine by using the VMware Host Client might fail or end with incomplete VMDK files, because the lease issued by the ExportVm method might expire before the file transfer finishes.

    • A high number of networking tasks, specifically multiple calls to QueryNetworkHint(), might exceed the memory limit of the hostd process and make it fail intermittently.

    • Drives that do not support Block Limits VPD page 0xb0 might generate event code logs that flood the vmkernel.log.

    • OMIVV relies on information from the iDRAC property hardware.systemInfo.otherIdentifyingInfo.ServiceTag to fetch the SerialNumber parameter for identifying some Dell modular servers. A mismatch in the serviceTag property might cause this integration to fail.

    • vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from the database. Hostd might also stop responding. You might see a fatal error message.

    • vSphere Virtual Volumes metadata might be available only when a virtual machine starts running. As a result, storage array vendor software might fail to apply policies that impact the optimal layout of volumes during regular use and after a failover.

    • If you use a VASA provider, you might see multiple getTaskUpdate calls to cancelled or deleted tasks. As a result, you might see a higher consumption of VASA bandwidth and a log spew.

    • Virtual machines with hardware version 10 or earlier, using EFI, and running Windows Server 2016 on AMD processors, might stop responding during reboot. The issue does not occur if a virtual machine uses BIOS, or if the hardware version is 11 or later, or the guest OS is not Windows, or if the processors are Intel.

    • If you enable the NetFlow network analysis tool to sample every packet on a vSphere Distributed Switch port group, by setting the Sampling rate to 0, the network latency might reach 1000 ms in case flows exceed 1 million.

    • Stale tickets, which are not deleted before hostd generates a new ticket, might exhaust RAM disk inodes. This might cause some ESXi hosts to stop responding.

    • The esxtop command-line utility might not display an updated value of the queue depth of devices if the corresponding device path queue depth changes.

    • Manual resets of hardware health alarms to return to a normal state, by selecting Reset to green after right-clicking the Alarms sidebar pane, might not work as expected. The alarms might reappear after 90 seconds.

    • Previously, if a component such as a processor or a fan was missing from a vCenter Server system, the Presence Sensors displayed a status Unknown. However, the Presence Sensors do not have a health state associated with them.

    • If you configure the /productLocker directory on a shared VMFS datastore, when you migrate a virtual machine by using vSphere vMotion, VMware Tools in the virtual machine might display an incorrect status Unsupported.

    • For large virtual machines, Encrypted vSphere vMotion might fail due to insufficient migration heap space.

    • A virtual machine that does hundreds of disk hot add or remove operations without powering off or migrating might be terminated and become invalid. This affects backup solutions, where the backup proxy virtual machine might be terminated because of this issue.

      In the hostd log, you might see content similar to:

      2018-06-08T10:33:14.150Z info hostd[15A03B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datatore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING) ...
      2018-06-08T10:33:14.167Z error hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] Could not apply pre-reconfigure domain changes: Failed to add file policies to domain :171: world ID :0:Cannot allocate memory ...
      2018-06-08T10:33:14.826Z info hostd[15640B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx opID=5953cf5e-3-a90a user=vpxuser:ADMINadmin] State Transition (VM_STATE_RECONFIGURING -> VM_STATE_ON) ...
      2018-06-08T10:35:53.120Z error hostd[15A44B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/datastore/path/to/proxyVM/proxyVM.vmx] Expected permission (3) for /vmfs/volumes/path/to/backupVM not found in domain 171

      In the vmkernel log, the content is similar to:

      2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)World: 12235: VC opID 5953cf5e-3-a90a maps to vmkernel opID 4c6a367c
      2018-06-08T05:40:53.264Z cpu49:68763 opID=4c6a367c)WARNING: Heap: 3534: Heap domainHeap-54 already at its maximum size. Cannot expand.

    • Disabled IPMI sensors, or sensors that do not report any data, might generate false hardware health alarms.

    • When you enable IPFIX and traffic is heavy with different flows, the system heartbeat might fail to preempt CPU from IPFIX for a long time and trigger a purple diagnostic screen.

    • You might see poor performance of long distance vSphere vMotion operations at high latency, such as 100 ms and higher, with high speed network links, such as 1 GbE and higher, due to the hard-coded socket buffer limit of 16 MB.

    • An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:

      #PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
      ...
      0x451a1b81b9d0:[0x41802e62abc1][email protected](tcpip4)#+0x161 stack: 0x430e0593c9e8
      0x451a1b81ba20:[0x41802e62bb57][email protected](tcpip4)#+0x7fc stack: 0x30
      0x451a1b81bb20:[0x41802e60d7f8][email protected](tcpip4)#+0xbe1 stack: 0x30
      0x451a1b81bcf0:[0x41802e621d3b][email protected](tcpip4)#+0x770 stack: 0x451a00000000

    • When a user creates and adds a VMkernel NIC in the vSphereProvisioning netstack to use it for NFC traffic, the daemon that manages the NFC connections might fail to clean up old connections. This leads to the exhaustion of the allowed process limit. As a result, the host becomes unresponsive and unable to create processes for incoming SSH connections.

    • An attempt to create VFFS volumes by using a vSphere standard license might fail with the error License not available to perform the operation as feature 'vSphere Flash Read Cache' is not licensed with this edition. This is because the system uses VMware vSphere Flash Read Cache permissions to check if provisioning of VFFS is allowed.

    • In a shared environment such as Raw Device Mapping (RDM) disks, you cannot use Multi-Writer Locks for virtual machines on more than 8 ESXi hosts. If you migrate a virtual machine to a ninth host, it might fail to power on with the error message Could not open xxxx.vmdk or one of the snapshots it depends on. (Too many users). This fix makes the advanced configuration option /VMFS3/GBLAllowMW visible. You can manually enable or disable multi-writer locks for more than 8 hosts by using generation-based locking.

    • Third party CIM providers, by HPE and Stratus for instance, might start normally but lack some expected functionality when working with their external applications.

    • An ESXi host might become unresponsive if datastore heartbeating stops prematurely while closing VMFS. As a result, the affinity manager cannot exit gracefully.

    • SNMP monitoring systems might report memory statistics on ESXi hosts different from the values that free, top or vmstat commands report.

    • If the target connected to an ESXi host supports only implicit ALUA and has only standby paths, the host might fail with a purple diagnostic screen at the time of device discovery due to a race condition. You might see a backtrace similar to:

      SCSIGetMultipleDeviceCommands (vmkDevice=0x0, result=0x451a0fc1be98, maxCommands=1, pluginDone=<optimized out>) at bora/vmkernel/storage/device/scsi_device_io.c:2476
      0x00004180171688bc in vmk_ScsiGetNextDeviceCommand (vmkDevice=0x0) at bora/vmkernel/storage/device/scsi_device_io.c:2735
      0x0000418017d8026c in nmp_DeviceStartLoop ([email protected]=0x43048fc37350) at bora/modules/vmkernel/nmp/nmp_core.c:732
      0x0000418017d8045d in nmpDeviceStartFromTimer (data=..., [email protected]=...) at bora/modules/vmkernel/nmp/nmp_core.c:807

    • The Cluster Compliance status of a vSAN enabled cluster might display as Not compliant, because the compliance check might not recognize vSAN datastores as shared datastores.

    • I/Os might fail with an error FCP_DATA_CNT_MISMATCH, which translates into a HOST_ERROR in the ESXi host, and indicates link errors or faulty switches on some paths of the device.

    • In an ESXi configuration with multiple paths leading to LUNs behind IBM SVC targets, in case of connection loss on active paths and if at the same time the other connected paths are not in a state to service I/Os, the ESXi host might not detect this condition as APD even as no paths are actually available to service I/Os. As a result, I/Os to the device are not fast failed.

    • You might see a slowdown in the network performance of NUMA servers for devices using VMware NetQueue, due to a pinning threshold. It is observed when Rx queues are pinned to a NUMA node by changing the default value of the advanced configuration NetNetqNumaIOCpuPinThreshold.

    • Some hosts might fail to boot after an upgrade to vSAN 6.7 Update 1 from a previous release. This problem occurs on NUMA-enabled servers in a vSAN cluster with 3, 4, or 5 disk groups per host. The host might time out while creating the last disk group.

    • vSAN creates blk attribute components after it creates a disk group. If the blk attribute component is missing from the API that supports the diskgroup rebuild command, you might see the following warning message in the vmkernel log: Throttled: BlkAttr not ready for disk.

    • When all disk groups are removed from a vSAN cluster, the vSphere Client displays a warning similar to the following:
      VMware vSAN cluster in datacenter does not have capacity
      After you disable vSAN on the cluster, the warning message persists.

    • If you unmap a LUN from an EMC VPLEX storage system, ESXi hosts might stop responding or fail with a purple diagnostic screen. You might see logs similar to:
      2018-01-26T18:53:28.002Z cpu6:33514)WARNING: iodm: vmk_IodmEvent:192: vmhba1: FRAME DROP event has been observed 400 times in the last one minute.
      2018-01-26T18:53:37.271Z cpu24:33058)WARNING: Unable to deliver event to user-space.

    • The HBA driver of an ESXi host might disconnect from an EMC VPLEX storage system upon heavy I/O load and not reconnect when I/O paths are available.

    • In a vSAN cluster with HPE ProLiant Gen9 Smart Array Controllers, such as P440 and P840, the locator LEDs might not be lighted on the correct failed device.

    • ESXi 6.7 Update 1 enables Microsemi Smart PQI (smartpqi) LSU plug-in to support attached disk management operations on the HPE ProLiant Gen10 Smart Array Controller.

    • With ESXi 6.7 Update 1, you can run the following ESXCLI storage commands to turn the status LED on Intel VMD based NVMe SSDs on or off without downloading the Intel CLI (a usage sketch follows the list):

      • esxcli storage core device set -l locator -d
      • esxcli storage core device set -l error -d
      • esxcli storage core device set -l off -d
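      For example, assuming a device identifier taken from the output of esxcli storage core device list (the identifier below is hypothetical):
      esxcli storage core device list
      esxcli storage core device set -l locator -d t10.NVMe____EXAMPLE_DEVICE_ID
      esxcli storage core device set -l off -d t10.NVMe____EXAMPLE_DEVICE_ID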
    • ESXi 6.7 Update 1 enables nfnic driver support for Cisco UCS Fibre Channel over Ethernet (FCoE).
    • ESXi 6.7 Update 1 adds support for NGUID in the NVMe driver.
    • An ESXi host and hostd might become unresponsive due to memory exhaustion caused by the lsi_mr3 disk serviceability plug-in.
    • The Intel native i40en driver in ESXi might not work with NICs of the X722 series in IPv6 networks and the Wake-on-LAN service might fail.
    • You might see the log message Device 10fb does not support flow control autoneg highlighted as an alert, but this is a routine log entry that reflects the flow control support status of certain NICs. On some OEM images, this message might appear frequently, but it does not indicate any issue with the ESXi host.
    • Some QLogic FastLinQ QL41xxx adapters might not create virtual functions after SR-IOV is enabled on the adapter and the host is rebooted.
    • In rare occasions, NICs using the ntg3 driver, such as the Broadcom BCM5719 and 5720 GbE NICs, might temporarily stop sending packets after a failed attempt to send an oversize packet. The ntg3 driver version 4.1.3.2 resolves this problem.
ESXi-6.7.0-20181001001s-standard
Profile Name: ESXi-6.7.0-20181001001s-standard
Build: For build information, see Patches Contained in this Release.
Vendor: VMware, Inc.
Release Date: October 16, 2018
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware_bootbank_vsanhealth_6.7.0-0.28.10176879
  • VMware_bootbank_vsan_6.7.0-0.28.10176879
  • VMware_bootbank_esx-base_6.7.0-0.28.10176879
  • VMware_bootbank_esx-ui_1.30.0-9946814
  • VMware_locker_tools-light_10.3.2.9925305-10176879
PRs Fixed: 1804719, 2093433, 2099951, 2168471
Related CVE numbers: N/A
  • This patch updates the following issues:
    • The ESXi userworld libxml2 library is updated to version 2.9.7.

    • The NTP daemon is updated to version ntp-4.2.8p11.

    • The SQLite database is updated to version 3.23.1.

    • The ESXi userworld libcurl library is updated to libcurl-7.59.0.

    • The OpenSSH version is updated to 7.7p1.

    • The ESXi userworld OpenSSL library is updated to version openssl-1.0.2o.

    • The Python third-party library is updated to version 3.5.5.

    • The Windows pre-Vista ISO image for VMware Tools is no longer packaged with ESXi. The Windows pre-Vista ISO image is available for download by users who require it. For download information, see the Product Download page.

ESXi-6.7.0-20181001001s-no-tools
Profile Name: ESXi-6.7.0-20181001001s-no-tools
Build: For build information, see Patches Contained in this Release.
Vendor: VMware, Inc.
Release Date: October 16, 2018
Acceptance Level: PartnerSupported
Affected Hardware: N/A
Affected Software: N/A
Affected VIBs:
  • VMware_bootbank_vsanhealth_6.7.0-0.28.10176879
  • VMware_bootbank_vsan_6.7.0-0.28.10176879
  • VMware_bootbank_esx-base_6.7.0-0.28.10176879
  • VMware_bootbank_esx-ui_1.30.0-9946814
PRs Fixed: 1804719, 2093433, 2099951, 2168471
Related CVE numbers: N/A
  • This patch updates the following issues:
    • The ESXi userworld libxml2 library is updated to version 2.9.7.

    • The NTP daemon is updated to version ntp-4.2.8p11.

    • The SQLite database is updated to version 3.23.1.

    • The ESXi userworld libcurl library is updated to libcurl-7.59.0.

    • The OpenSSH version is updated to 7.7p1.

    • The ESXi userworld OpenSSL library is updated to version openssl-1.0.2o.

    • The Python third-party library is updated to version 3.5.5.

    • The Windows pre-Vista ISO image for VMware Tools is no longer packaged with ESXi. The Windows pre-Vista ISO image is available for download by users who require it. For download information, see the Product Download page.

Known Issues

The known issues are grouped as follows.

ESXi670-201810201-UG
  • After you upgrade a host to ESXi 6.7 Update 1 and remediate a host profile, the compliance checks might fail if the host has NVMe devices that support only NGUID as a device identifier

    After you upgrade a host with NVMe devices that support only NGUID as a device identifier to ESXi 6.7 Update 1, and you try to remediate a host profile configured for stateful installs, the compliance checks might fail. This is because the device identifier presented by the NVMe driver in ESXi 6.7 Update 1 is the NGUID itself, instead of an ESXi-generated t10 identifier.

    Workaround: Update the Host Profile configuration:

    1. Navigate to the Host Profile.
    2. Click Copy Settings from Host.
    3. Select the host from which you want to copy the configuration settings.
    4. Click OK.
    5. Right-click the host and select Host Profiles > Reset Host Customizations.
  • Host failure when converting a data host into a witness host

    When you convert a vSAN cluster into a stretched cluster, you must provide a witness host. You can convert a data host into the witness host, but you must use maintenance mode with full data migration during the process. If you place the host into maintenance mode, enable the Ensure accessibility option, and then configure the host as the witness host, the host might fail with a purple diagnostic screen.

    Workaround: Remove the disk group on the witness host and then re-create the disk group.

  • Tagged unicast packets from a Virtual Function (VF) cannot arrive at its Physical Function (PF) when the port groups connected to the VF and PF are set to Virtual Guest Tagging (VGT)

    If you configure VGT in trunk mode on both VF and PF port groups, the unicast traffic might not pass between the VF and the PF.

    Workaround: Do not use the PF for VGT when its VFs are used for VGT. When you must use VGT on a PF and a VF for any reason, the VF and the PF must be on different physical NICs.

  • Physical NIC binding might be lost after a PXE boot

    If the vmknic is connected to an NSX-T logical switch with physical NIC binding configured, the configuration might be lost after a PXE boot. If a software iSCSI adapter is configured, you might see a host compliance error similar to The iSCSI adapter vmhba65 does not have the vnic vmkX that is configured in the profile.

    Workaround: Configure the physical NIC binding and software iSCSI adapter manually after the host boots.

  • The vmk0 management network MAC address might be deleted while remediating a host profile

    Remediating a host profile with a VMkernel interface created on a VMware NSX-T logical switch might remove vmk0 from the ESXi hosts. Such host profiles cannot be used in an NSX-T environment.

    Workaround: Configure the hosts manually.

  • The SNMP service fails frequently after upgrade to ESXi 6.7

    The SNMP service might fail frequently, at intervals as short as 30 minutes, after an upgrade to ESXi 6.7 if the execution of the main thread is not complete when a child thread is called. The service generates snmpd-zdump core dumps. If the SNMP service fails, you can restart it with the following commands:

    esxcli system snmp set -e false
    esxcli system snmp set -e true

    Workaround: None.

  • When rebalancing disks, the amount of data to move displayed by vSAN health service does not match the amount displayed by the Ruby vSphere Console (RVC)

    RVC performs a rough calculation to determine the amount of data to move when rebalancing disks. The value displayed by the vSAN health service is more accurate. When rebalancing disks, refer to the vSAN health service to check the amount of data to move.

    Workaround: None.

  • Host with three or more disk groups fails to boot after upgrade to vSAN 6.7 Update 1

    In some cases, a host might fail to boot after an upgrade to vSAN 6.7 Update 1 from a previous release. This rare condition occurs on NUMA-enabled servers in a vSAN cluster with 3, 4, or 5 disk groups per host.

    Workaround: If a host with three or more disk groups fails to boot during an upgrade to vSAN 6.7 Update 1, run the following commands on each vSAN host to change the ESXi configuration:
    esxcfg-advcfg --set 0 /LSOM/blPLOGRecovCacheLines
    auto-backup.sh

  • An ESXi host might fail with a purple diagnostic screen while shutting down

    An ESXi host might fail with a purple diagnostic screen due to a race condition in a query by the Multicast Listener Discovery (MLD) version 1 in IPv6 environments. You might see an error message similar to:

    #PF Exception 14 in world 2098376:vmk0-rx-0 IP 0x41802e62abc1 addr 0x40
    ...
    0x451a1b81b9d0:[0x41802e62abc1][email protected](tcpip4)#+0x161 stack: 0x430e0593c9e8
    0x451a1b81ba20:[0x41802e62bb57][email protected](tcpip4)#+0x7fc stack: 0x30
    0x451a1b81bb20:[0x41802e60d7f8][email protected](tcpip4)#+0xbe1 stack: 0x30
    0x451a1b81bcf0:[0x41802e621d3b][email protected](tcpip4)#+0x770 stack: 0x451a00000000

    Workaround: Disable IPv6. If you cannot disable IPv6 for some reason, disable MLDv1 from all other devices in your network.
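    For reference, IPv6 can be disabled from the ESXi shell with the following command, followed by a host reboot (a sketch only; evaluate the impact of disabling IPv6 on your environment first):
      esxcli network ip set --ipv6-enabled=false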

Known Issues from Earlier Releases

The earlier known issues are grouped as follows.

Installation, Upgrade, and Migration Issues
  • ESXi installation or upgrade fails due to memory corruption on HPE ProLiant DL380/360 Gen 9 servers

    The issue occurs on HPE ProLiant DL380/360 Gen 9 servers that have a Smart Array P440ar storage controller.

    Workaround: Set the server BIOS mode to UEFI before you install or upgrade ESXi.

  • After an ESXi upgrade to version 6.7 and a subsequent rollback to version 6.5 or earlier, you might experience failures with error messages

    You might see failures and error messages when you perform one of the following on your ESXi host after reverting to 6.5 or earlier versions:

    • Install patches and VIBs on the host
      Error message: [DependencyError] VIB VMware_locker_tools-light requires esx-version >= 6.6.0
    • Install or upgrade VMware Tools on VMs
      Error message: Unable to install VMware Tools.

    After the ESXi rollback from version 6.7, the new tools-light VIB does not revert to the earlier version. As a result, the VIB becomes incompatible with the rolled back ESXi host causing these issues.

    Workaround: Perform the following to fix this problem.

    SSH to the host and run one of these commands:

    esxcli software vib install -v /path/to/tools-light.vib

    or

    esxcli software vib install -d /path/to/depot/zip -n tools-light

    where the VIB or ZIP file corresponds to the currently running ESXi version.

    Note: For VMs that already have new VMware Tools installed, you do not have to revert VMware Tools back when ESXi host is rolled back.

  • Special characters backslash (\) or double quote (") used in passwords cause the installation pre-check to fail

    If the special characters backslash (\) or double quote (") are used in the ESXi, vCenter Single Sign-On, or operating system password fields in the vCenter Server Appliance installation templates, the installation pre-check fails with the following error:

    Error message: com.vmware.vcsa.installer.template.cli_argument_validation: Invalid escape: line ## column ## (char ###)

    Workaround: If you include the backslash (\) or double quote (") characters in the passwords for ESXi, the operating system, or vCenter Single Sign-On, escape each special character, for example by preceding it with a backslash.

  • Windows vCenter Server 6.7 installer fails when non-ASCII characters are present in password

    The Windows vCenter Server 6.7 installer fails when the Single Sign-on password contains non-ASCII characters for Chinese, Japanese, Korean, and Taiwanese locales.

    Workaround: Ensure that the Single Sign-on password contains ASCII characters only for Chinese, Japanese, Korean, and Taiwanese locales.

  • Cannot log in to vSphere Appliance Management Interface if the colon character (:) is part of vCenter Server root password

    During the vCenter Server Appliance UI installation (Set up appliance VM page of Stage 1), if you include the colon character (:) as part of the vCenter Server root password, login to the vSphere Appliance Management Interface (https://vc_ip:5480) fails and you are unable to log in. The password might be accepted by the password rule check during setup, but the login fails.

    Workaround: Do not use the colon character (:) to set the vCenter Server root password in the vCenter Server Appliance UI (Set up appliance VM page of Stage 1).

  • vCenter Server Appliance installation fails when the backslash character (\) is included in the vCenter Single Sign-On password

    During the vCenter Server Appliance UI installation (SSO setup page of Stage 2), if you include the backslash character (\) as part of the vCenter Single Sign-On password, the installation fails with the error Analytics Service registration with Component Manager failed. The password might be accepted by the password rule check, but the installation fails.

    Workaround: Do not use the backslash character (\) to set the vCenter Single Sign-On password in the vCenter Server Appliance UI installer (SSO setup page of Stage 2).

  • Scripted ESXi installation fails on HP ProLiant Gen 9 Servers with an error

    When you perform a scripted ESXi installation on an HP ProLiant Gen 9 Server under the following conditions:

    • The Embedded User Partition option is enabled in the BIOS.
    • You use multiple USB drives during installation: one USB drive contains the ks.cfg file, and the other USB drive is unformatted.

    The installation fails with the error message Partitions not initialized.

    Workaround:

    1. Disable the Embedded User Partition option in the server BIOS.
    2. Format the unformatted USB drive with a file system or unplug it from the server.
  • Windows vCenter Server 6.0.x or 6.5.x upgrade to vCenter Server 6.7 fails if vCenter Server contains non-ASCII or high-ASCII named 5.5 host profiles

    When a source Windows vCenter Server 6.0.x or 6.5.x contains vCenter Server 5.5.x host profiles named with non-ASCII or high-ASCII characters, UpgradeRunner fails to start during the upgrade pre-check process.

    Workaround: Before upgrading Windows vCenter Server 6.0.x or 6.5.x to vCenter Server 6.7, upgrade the ESXi 5.5.x hosts with the non-ASCII or high-ASCII named host profiles to ESXi 6.0.x or 6.5.x, and then update the host profile from the upgraded host by clicking Copy Settings from Host.

  • You cannot run the camregister command with the -x option if the vCenter Single Sign-On password contains non-ASCII characters

    When you run the camregister command with the -x option (passing a file), for example, to register the vSphere Authentication Proxy, the process fails with an access denied error when the vCenter Single Sign-On password contains non-ASCII characters.

    Workaround: Either set up the vCenter Single Sign-On password with ASCII characters only, or use the -p (password) option when you run the camregister command to enter a vCenter Single Sign-On password that contains non-ASCII characters.

  • The Bash shell and SSH login are disabled after upgrading to vCenter Server 6.7

    After upgrading to vCenter Server 6.7, you are not able to access the vCenter Server Appliance using either the Bash shell or SSH login.

    Workaround:

    1. After successfully upgrading to vCenter Server 6.7, log in to the vCenter Server Appliance Management Interface. In a Web browser, go to: https://appliance_ip_address_or_fqdn:5480
    2. Log in as root.
      The default root password is the password you set while deploying the vCenter Server Appliance.
    3. Click Access, and click Edit.
    4. Edit the access settings for the Bash shell and SSH login.
      When enabling Bash shell access to the vCenter Server Appliance, enter the number of minutes to keep access enabled.
    5. Click OK to save the settings.
  • Management node migration is blocked if vCenter Server for Windows 6.0 is installed on Windows Server 2008 R2 without previously enabling Transport Layer Security 1.2

    This issue occurs if you are migrating vCenter Server for Windows 6.0 using an external Platform Services Controller (an MxN topology) on Windows Server 2008 R2. After migrating the external Platform Services Controller (PSC), when you run Migration Assistant on the Management node it fails, reporting that it cannot retrieve the PSC version. This error occurs because Windows Server 2008 R2 does not support Transport Layer Security (TLS) 1.2 by default, which is the default TLS protocol for Platform Services Controller 6.7.

    Workaround: Enable TLS 1.2 for Windows Server 2008 R2:

    1. Navigate to the registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
    2. Create a new folder and label it TLS 1.2.
    3. Create two new keys within the TLS 1.2 folder, and name the keys Client and Server.
    4. Under the Client key, create two DWORD (32-bit) values, and name them DisabledByDefault and Enabled.
    5. Under the Server key, create two DWORD (32-bit) values, and name them DisabledByDefault and Enabled.
    6. Ensure that the Value field is set to 0 and that the Base is Hexadecimal for DisabledByDefault.
    7. Ensure that the Value field is set to 1 and that the Base is Hexadecimal for Enabled.
    8. Reboot the Windows Server 2008 R2 computer.

    For more information on using TLS 1.2 with Windows Server 2008 R2, refer to the operating system vendor's documentation.
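    As an illustrative sketch only, the same registry values can also be created from an elevated Windows command prompt with reg.exe (HKLM abbreviates HKEY_LOCAL_MACHINE; verify each value against the steps above before use):
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v Enabled /t REG_DWORD /d 1 /f
    Reboot the Windows Server 2008 R2 computer after applying the values.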

  • vCenter Server containing host profiles with version less than 6.0 fails during upgrade to version 6.7

    vCenter Server 6.7 does not support host profiles with version less than 6.0. To upgrade to vCenter Server 6.7, you must first upgrade the host profiles to version 6.0 or later, if you have any of the following components:

    • ESXi host(s) version - 5.1 or 5.5
    • vCenter server version - 6.0 or 6.5
    • Host profiles version - 5.1 or 5.5

    Workaround: See KB 52932

  • After upgrading to vCenter Server 6.7, any edits to the ESXi host's /etc/ssh/sshd_config file are discarded, and the file is restored to the vCenter Server 6.7 default configuration

    Due to changes in the default values in the /etc/ssh/sshd_config file, the vCenter Server 6.7 upgrade replaces any manual edits to this configuration file with the default configuration. This change was necessary as some prior settings (for example, permitted ciphers) are no longer compatible with current ESXi behavior, and prevented SSHD (SSH daemon) from starting correctly.

    CAUTION: Editing /etc/ssh/sshd_config is not recommended. SSHD is disabled by default, and the preferred method for editing the system configuration is through the VIM API (including the ESXi Host Client interface) or ESXCLI.

    Workaround: If edits to /etc/ssh/sshd_config are needed, you can apply them after successfully completing the vCenter Server 6.7 upgrade. The default configuration file now contains a version number. Preserve the version number to avoid overwriting the file.

    For further information on editing the /etc/ssh/sshd_config file, see the following Knowledge Base articles:

    • For information on enabling public/private key authentication, see Knowledge Base article KB 1002866
    • For information on changing the default SSHD configuration, see Knowledge Base article KB 1020530
Security Features Issues
  • Virtualization Based Security (VBS) on vSphere in Windows guest OS versions RS1, RS2, and RS3 requires Hyper-V to be enabled in the guest OS

    Virtualization Based Security (VBS) on vSphere in Windows guest OS versions RS1, RS2, and RS3 requires Hyper-V to be enabled in the guest OS.

    Workaround: Enable Hyper-V Platform on Windows Server 2016. In the Server Manager, under Local Server select Manage -> Add Roles and Features Wizard and under Role-based or feature-based installation select Hyper-V from the server pool and specify the server roles. Choose defaults for Server Roles, Features, Hyper-V, Virtual Switches, Migration and Default Stores. Reboot the host.

    Enable Hyper-V on Windows 10: Browse to Control Panel -> Programs -> Turn Windows features on or off. Check the Hyper-V Platform which includes the Hyper-V Hypervisor and Hyper-V Services. Uncheck Hyper-V Management Tools. Click OK. Reboot the host.

Networking Issues
  • Host profile PeerDNS flags do not work in some scenarios

    If PeerDNS for IPv4 is enabled for a vmknic on a stateless host that has an associated host profile, the IPv6 PeerDNS setting might appear with a different state in the extracted host profile after the host reboots.

    Workaround: None.

  • When you upgrade vSphere Distributed Switches to version 6.6, you might encounter a few known issues

    During upgrade, the connected virtual machines might experience packet loss for a few seconds.

    Workaround: If you have multiple vSphere Distributed Switches that need to be upgraded to version 6.6, upgrade the switches sequentially.

    Schedule the upgrade of vSphere Distributed Switches during a maintenance window, set DRS mode to manual, and do not apply DRS recommendations for the duration of the upgrade.

    For more details about known issues and solutions, see KB 52621

  • VM fails to power on when Network I/O Control is enabled and all active uplinks are down

    A VM fails to power on when Network I/O Control is enabled and the following conditions are met:

    • The VM is connected to a distributed port group on a vSphere distributed switch
    • The VM is configured with bandwidth allocation reservation and the VM's network adapter (vNIC) has a reservation configured
    • The distributed port group teaming policy is set to Failover
    • All active uplinks on the distributed switch are down. In this case, vSphere DRS cannot use the standby uplinks and the VM fails to power on.

    Workaround: Move the available standby adapters to the active adapters list in the teaming policy of the distributed port group.

  • Network flapping on a NIC that uses qfle3f driver might cause ESXi host to crash

    The qfle3f driver might cause the ESXi host to crash (PSOD) when the physical NIC that uses the qfle3f driver experiences frequent link status flapping every 1-2 seconds.

    Workaround: Make sure that network flapping does not occur. If the link status flapping interval is more than 10 seconds, the qfle3f driver does not cause ESXi to crash. For more information, see KB 2008093.

  • Port Mirror traffic packets of ERSPAN Type III fail to be recognized by packet analyzers

    A bit that is incorrectly set in the ERSPAN Type III packet header causes all ERSPAN Type III packets to appear corrupted in packet analyzers.

    Workaround: Use GRE or ERSPAN Type II packets, if your traffic analyzer supports these types.

  • DNS configuration esxcli commands are not supported on non-default TCP/IP stacks

    DNS configuration of non-default TCP/IP stacks is not supported. Commands such as esxcli network ip dns server add -N vmotion -s 10.11.12.13 do not work.

    Workaround: Do not use DNS configuration esxcli commands on non-default TCP/IP stacks.

  • Compliance check fails with an error when applying a host profile with enabled default IPv4 gateway for vmknic interface

    When you apply a host profile with an enabled default IPv4 gateway for a vmknic interface, the setting is populated with '0.0.0.0' and does not match the host information, resulting in the following error:

    IPv4 vmknic gateway configuration doesn't match the specification

    Workaround:

    1. Edit the host profile settings.
    2. Navigate to Networking configuration > Host virtual nic or Host portgroup > (name of the vSphere Distributed Switch or name of portgroup) > IP address settings.
    3. From the Default gateway VMkernel Network Adapter (IPv4) drop-down menu, select Choose a default IPv4 gateway for the vmknic and enter the vmknic default IPv4 gateway.
  • Intel Fortville series NICs cannot receive Geneve encapsulation packets with option length bigger than 255 bytes

    If you configure Geneve encapsulation with option length bigger than 255 bytes, the packets are not received correctly on Intel Fortville NICs X710, XL710, and XXV710.

    Workaround: Disable hardware VLAN stripping on these NICs by running the following command:

    esxcli network nic software set --untagging=1 -n vmnicX.

  • RSPAN_SRC mirror session fails after migration

    When a VM connected to a port assigned to an RSPAN_SRC mirror session is migrated to another host, and the required pNIC is missing on the destination network of the destination host, the RSPAN_SRC mirror session fails to configure on the port. This causes the port connection to fail, but the vMotion migration process succeeds.

    Workaround: To restore the port connection, complete one of the following:

    • Remove the failed port and add a new port.
    • Disable the port and enable it.

    The mirror session fails to configure, but the port connection is restored.

Storage Issues
  • NFS datastores intermittently become read-only

    A host's NFS datastores might become read-only when the NFS vmknic temporarily loses its IP address or after a stateless host reboots.

    Workaround: You can unmount and remount the datastores to regain connectivity through the NFS vmknic. You can also set the NFS datastore write permission to both the IP address of the NFS vmknic and the IP address of the Management vmknic.
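    A minimal sketch of unmounting and remounting an NFS datastore from the ESXi shell (the server, share, and volume names are hypothetical):
      esxcli storage nfs list
      esxcli storage nfs remove -v ExampleNfsDatastore
      esxcli storage nfs add -H nfs-server.example.com -s /export/share -v ExampleNfsDatastore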

  • When editing a VM's storage policies, selecting Host-local PMem Storage Policy fails with an error

    In the Edit VM Storage Policies dialog, if you select Host-local PMem Storage Policy from the dropdown menu and click OK, the task fails with one of these errors:

    The operation is not supported on the object.

    or

    Incompatible device backing specified for device '0'.

    Workaround: You cannot apply the Host-local PMem Storage Policy to VM home. For a virtual disk, you can use the migration wizard to migrate the virtual disk and apply the Host-local PMem Storage Policy.

  • Datastores might appear as inaccessible after ESXi hosts in a cluster recover from a permanent device loss state

    This issue might occur in environments where the hosts in the cluster share a large number of datastores, for example, 512 to 1000 datastores.
    After the hosts in the cluster recover from the permanent device loss condition, the datastores are mounted successfully at the host level. However, in vCenter Server, several datastores might continue to appear as inaccessible for a number of hosts.

    Workaround: On the hosts that show inaccessible datastores in the vCenter Server view, perform the Rescan Storage operation from vCenter Server.
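    On an individual host, a rescan can also be triggered from the ESXi shell, for example:
      esxcli storage core adapter rescan --all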

  • Migration of a virtual machine from a VMFS3 datastore to VMFS5 fails in a mixed ESXi 6.5 and 6.7 host environment

    If you have a mixed host environment, you cannot migrate a virtual machine from a VMFS3 datastore connected to an ESXi 6.5 host to a VMFS5 datastore on an ESXi 6.7 host.

    Workaround: Upgrade the VMFS3 datastore to VMFS5 to be able to migrate the VM to the ESXi 6.7 host.

  • Warning message about a VMFS3 datastore remains unchanged after you upgrade the VMFS3 datastore using the CLI

    Typically, you use the CLI to upgrade the VMFS3 datastore that failed to upgrade during an ESXi upgrade. The VMFS3 datastore might fail to upgrade due to several reasons including the following:

    • No space is available on the VMFS3 datastore.
    • One of the extents on the spanned datastore is offline.

    After you fix the reason of the failure and upgrade the VMFS3 datastore to VMFS5 using the CLI, the host continues to detect the VMFS3 datastore and reports the following error:

    Deprecated VMFS (ver 3) volumes found. Upgrading such volumes to VMFS (ver5) is mandatory for continued availability on vSphere 6.7 host.

    Workaround: To remove the error message, restart hostd using the /etc/init.d/hostd restart command or reboot the host.

  • The Mellanox ConnectX-4/ConnectX-5 native ESXi driver might exhibit performance degradation when its Default Queue Receive Side Scaling (DRSS) feature is turned on

    Receive Side Scaling (RSS) technology distributes incoming network traffic across several hardware-based receive queues, allowing inbound traffic to be processed by multiple CPUs. In Default Queue Receive Side Scaling (DRSS) mode, the entire device is in RSS mode. The driver presents a single logical queue to OS and is backed by several hardware queues.

    The native nmlx5_core driver for the Mellanox ConnectX-4 and ConnectX-5 adapter cards enables the DRSS functionality by default. While DRSS helps to improve performance for many workloads, it could lead to possible performance degradation with certain multi-VM and multi-vCPU workloads.

    Workaround: If significant performance degradation is observed, you can disable the DRSS functionality.

    1. Run the esxcli system module parameters set -m nmlx5_core -p "DRSS=0 RSS=0" command.
    2. Reboot the host.
  • Datastore name does not extract to the Coredump File setting in the host profile

    When you extract a host profile, the Datastore name field is empty in the Coredump File setting of the host profile. The issue appears when you use an esxcli command to configure the coredump file (see the sketch after the workaround steps).

    Workaround:

    1. Extract a host profile from an ESXi host.
    2. Edit the host profile settings and navigate to General System Settings > Core Dump Configuration > Coredump File.
    3. Select Create the Coredump file with an explicit datastore and size option and enter the Datastore name, where you want the Coredump File to reside.
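    For reference, a coredump file is typically configured with esxcli commands similar to the following sketch (the datastore and file names are hypothetical):
      esxcli system coredump file add -d ExampleDatastore -f exampledump
      esxcli system coredump file list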
  • Native software FCoE adapters configured on an ESXi host might disappear when the host is rebooted

    After you successfully enable the native software FCoE adapter (vmhba) supported by the vmkfcoe driver and then reboot the host, the adapter might disappear from the list of adapters. This might occur when you use Cavium QLogic 57810 or QLogic 57840 CNAs supported by the qfle3 driver.

    Workaround: To recover the vmkfcoe adapter, perform these steps:

    1. Run the esxcli storage core adapter list command to make sure that the adapter is missing from the list.
    2. Verify the vSwitch configuration on vmnic associated with the missing FCoE adapter.
    3. Run the following command to discover the FCoE vmhba:
      • On a fabric setup:
        #esxcli fcoe nic discover -n vmnic_number
      • On a VN2VN setup:
        #esxcli fcoe nic discover -n vmnic_number
  • Attempts to create a VMFS datastore on an ESXi 6.7 host might fail in certain software FCoE environments

    Your attempts to create the VMFS datastore fail if you use the following configuration:

    • Native software FCoE adapters configured on an ESXi 6.7 host.
    • Cavium QLogic 57810 or 57840 CNAs.
    • Cisco FCoE switch connected directly to an FCoE port on a storage array from the Dell EMC VNX5300 or VNX5700 series.

    Workaround: None.

    As an alternative, you can switch to the following end-to-end configuration:
    ESXi host > Cisco FCoE switch > FC switch > storage array from the DELL EMC VNX5300 and VNX5700 series.

Backup and Restore Issues
  • Windows Explorer displays some backups with Unicode characters differently from how browsers and file system paths show them

    Some backups containing Unicode characters display differently in the Windows Explorer file system folder than they do in browsers and file system paths.

    Workaround: Using http, https, or ftp, you can browse backups with your web browser instead of going to the storage folder locations through Windows Explorer.

vCenter Server Appliance, vCenter Server, vSphere Web Client, and vSphere Client Issues
  • The time synchronization mode setting is not retained when upgrading vCenter Server Appliance

    If NTP time synchronization is disabled on a source vCenter Server Appliance, and you perform an upgrade to vCenter Server Appliance 6.7, after the upgrade has successfully completed NTP time synchronization will be enabled on the newly upgraded appliance.

    Workaround:

    1. After successfully upgrading to vCenter Server Appliance 6.7, log into the vCenter Server Appliance Management Interface as root.

      The default root password is the password you set while deploying the vCenter Server Appliance.

    2. In the vCenter Server Appliance Management Interface, click Time.
    3. In the Time Synchronization pane, click Edit.
    4. From the Mode drop-down menu, select Disabled.

      The newly upgraded vCenter Server Appliance 6.7 will no longer use NTP time synchronization, and will instead use the system time zone settings.

  • Login to vSphere Web Client with Windows session authentication fails on Firefox browsers of version 54 or later

    If you use Firefox of version 54 or later to log in to the vSphere Web Client, and you use your Windows session for authentication, the VMware Enhanced Authentication Plugin might fail to populate your user name and to log you in.

    Workaround: If you are using Windows session authentication to log in to the vSphere Web Client, use one of the following browsers: Internet Explorer, Chrome, or Firefox of version 53 and earlier.

  • vCenter hardware health alarm notifications are not triggered in some instances

    When multiple sensors in the same category on an ESXi host are tripped within a time span of less than five minutes, traps are not received and email notifications are not sent.

    Workaround: None. You can check the hardware sensors section for any alerts.

  • When using the VCSA Installer Time Sync option, you must connect the target ESX to the NTP server in the Time & Date Setting from the ESX Management

    If you select Time Sync with an NTP server in the VCSA installer (Stage 2 > Appliance configuration > Time Sync option (ESX/NTP server)), the target ESX host must already be connected to that NTP server in the Time & Date setting of the ESX management interface; otherwise, the installation fails.

    Workaround:

    1. Set the Time Sync option in Stage 2 > Appliance configuration to sync with the ESX host, or
    2. Set the Time Sync option in Stage 2 > Appliance configuration to sync with NTP servers, and make sure that both the ESX host and the vCenter Server are set to connect to the same NTP servers.
  • When you monitor Windows vCenter Server health, an error message appears

    Health service is not available for Windows vCenter Server. If you select the vCenter Server, and click Monitor > Health, an error message appears:

    Unable to query vSAN health information. Check vSphere Client logs for details.

    This problem can occur after you upgrade the Windows vCenter Server from release 6.0 Update 1 or 6.0 Update 2 to release 6.7. You can ignore this message.

    Workaround: None. Users can access vSAN health information through the vCenter Server Appliance.

  • vCenter hardware health alarms do not function with earlier ESXi versions

    If ESXi version 6.5 Update 1 or earlier is added to vCenter 6.7, hardware health related alarms will not be generated when hardware events occur such as high CPU temperatures, FAN failures, and voltage fluctuations.

    Workaround: None.

  • vCenter Server stops working in some cases when using vmodl to edit or expand a disk

    When you configure a VM disk in a Storage DRS-enabled cluster using the latest vmodl, vCenter Server stops working. A previous workaround using an earlier vmodl no longer works and will also cause vCenter Server to stop working.

    Workaround: None

  • vCenter Server for Windows migration to vCenter Server Appliance fails with error

    When you migrate vCenter Server for Windows 6.0.x or 6.5.x to vCenter Server Appliance 6.7, the migration might fail during the data export stage with the error: The compressed zip folder is invalid or corrupted.

    Workaround: You must zip the data export folder manually and follow these steps:

    1. In the source system, create an environment variable MA_INTERACTIVE_MODE.
    2. Go to Computer > Properties > Advanced system settings > Environment Variables > System Variables > New.
    3. Enter 'MA_INTERACTIVE_MODE' as variable name with value 0 or 1.
    4. Start the VMware Migration Assistant and provide your password.
    5. Start the Migration from the client machine. The migration will pause, and the Migration Assistant console will display the message To continue the migration, create the export.zip file manually from the export data (include export folder).
    6. NOTE: Do not press any keys or tabs on the Migration Assistant console.
    7. Go to the %appdata%\vmware\migration-assistant folder.
    8. Delete the export.zip created by the Migration Assistant.
    9. To continue the migration, manually create the export.zip file from the export folder.
    10. Return to the Migration Assistant console. Type Y and press Enter.
  • Discrepancy between the build number in VAMI and the build number in the vSphere Client

    In vSphere 6.7, the VAMI summary tab displays the ISO build for the vCenter Server and vCenter Server Appliance products. The vSphere Client summary tab displays the build for the vCenter product, which is a component within the vCenter Server product.

    Workaround: None

  • vCenter Server Appliance 6.7 displays an error message in the Available Update section of the vCenter Server Appliance Management Interface (VAMI)

    The Available Update section of the vCenter Server Appliance Management Interface (VAMI) displays the following error message:

    Check the URL and try again.

    This message is generated when the vCenter Server Appliance searches for and fails to find a patch or update. No functionality is impacted by this issue. This issue will be resolved with the release of the first patch for vSphere 6.7.

    Workaround: None. No functionality is impacted by this issue.

Virtual Machine Management Issues
  • Name of the virtual machine in the inventory changes to its path name

    This issue might occur when a datastore where the VM resides enters the All Paths Down state and becomes inaccessible. When hostd is loading or reloading VM state, it is unable to read the VM's name and returns the VM path instead. For example, /vmfs/volumes/123456xxxxxxcc/cs-00.111.222.333.

    Workaround: After you resolve the storage issue, the virtual machine reloads, and its name is displayed again.

  • You must select the 'Secure boot' Platform Security Level when enabling VBS in a Guest OS on AMD systems

    On AMD systems, vSphere virtual machines do not provide a vIOMMU. Because a vIOMMU is required for DMA protection, AMD users cannot select 'Secure Boot and DMA protection' in the Windows Group Policy Editor when they 'Turn on Virtualization Based Security'. Instead, select 'Secure Boot'. If you select the wrong option, Windows silently disables the VBS services.

    Workaround: Select 'Secure boot' Platform Security Level in a Guest OS on AMD systems.

  • You cannot hot add memory and CPU for Windows VMs when Virtualization Based Security (VBS) is enabled within Windows

    Virtualization Based Security (VBS) is a new feature introduced in Windows 10 and Windows Server 2016. vSphere supports running Windows with VBS enabled starting with the vSphere 6.7 release. However, hot add of memory and CPU does not work for Windows VMs when VBS is enabled.

    Workaround: Power off the VM, change the memory or CPU settings, and power on the VM.

  • Snapshot tree of a linked-clone VM might be incomplete after a vSAN network recovery from a failure

    A vSAN network failure might impact accessibility of vSAN objects and VMs. After a network recovery, the vSAN objects regain accessibility. The hostd service reloads the VM state from storage to recover VMs. However, for a linked-clone VM, hostd might not detect that the parent VM namespace has recovered its accessibility. This results in the VM remaining in inaccessible state and VM snapshot information not being displayed in vCenter Server.

    Workaround: Unregister the VM, then re-register it to force the hostd to reload the VM state. Snapshot information will be loaded from storage.

  • An OVF Virtual Appliance fails to start in the vSphere Client

    The vSphere Client does not support selecting vService extensions in the Deploy OVF Template wizard. As a result, if an OVF virtual appliance uses vService extensions and you use the vSphere Client to deploy the OVF file, the deployment succeeds, but the virtual appliance fails to start.

    Workaround: Use the vSphere Web Client to deploy OVF virtual appliances that use vService extensions.

vSphere HA and Fault Tolerance Issues
  • When you configure Proactive HA in Manual/MixedMode in vSphere 6.7 RC build you are prompted twice to apply DRS recommendations

    When you configure Proactive HA in Manual/MixedMode in vSphere 6.7 RC build and a red health update is sent from the Proactive HA provider plug-in, you are prompted twice to apply the recommendations under Cluster -> Monitor -> vSphere DRS -> Recommendations. The first prompt is to enter the host into maintenance mode. The second prompt is to migrate all VMs on a host entering maintenance mode. In vSphere 6.5, these two steps are presented as a single recommendation for entering maintenance mode, which lists all VMs to be migrated.

    Workaround: There is no impact to work flow or results. You must apply the recommendations twice. If you are using automated scripts, you must modify the scripts to include the additional step.

  • Lazy import upgrade interaction when VCHA is not configured

    The VCHA feature is available as part of 6.5 release. As of 6.5, a VCHA cluster cannot be upgraded while preserving the VCHA configuration. The recommended approach for upgrade is to first remove the VCHA configuration either through vSphere Client or by calling a destroy VCHA API. So for lazy import upgrade workflow without VCHA configuration, there is no interaction with VCHA.

    Do not configure a fresh VCHA setup while lazy import is in progress. The VCHA setup requires cloning the Active VM as Passive/Witness VM. As a result of an ongoing lazy import, the amount of data that needs to be cloned is large and may lead to performance issues.

    Workaround: None.

Auto Deploy and Image Builder Issues
  • Reboot of an ESXi stateless host resets the numRxQueue value of the host

    When an ESXi host provisioned with vSphere Auto Deploy reboots, it loses the previously set numRxQueue value. The Host Profiles feature does not support saving the numRxQueue value after the host reboots.

    Workaround: After the ESXi stateless host reboots:

    1. Remove the vmknic from the host.
    2. Create a vmknic on the host with the expected numRxQueue value.
  • After caching on a drive, if the server is in the UEFI mode, a boot from cache does not succeed unless you explicitly select the device to boot from the UEFI boot manager

    In case of stateless caching, after the ESXi image is cached on a 512n, 512e, USB, or 4Kn target disk, the ESXi stateless boot from Auto Deploy might fail on a system reboot. This occurs if the Auto Deploy service is down.

    The system attempts to search for the cached ESXi image on the disk, next in the boot order. If the ESXi cached image is found, the host is booted from it. In legacy BIOS, this feature works without problems. However, in the UEFI mode of the BIOS, the next device with the cached image might not be found. As a result, the host cannot boot from the image even if the image is present on the disk.

    Workaround: If the Auto Deploy service is down, on the system reboot, manually select the disk with the cached image from the UEFI Boot Manager.

  • A stateless ESXi host boot time might take 20 minutes or more

    The booting of a stateless ESXi host with 1,000 configured datastores might require 20 minutes or more.

    Workaround: None.

Miscellaneous Issues
  • ESXi might fail during reboot with VMs running on the iSCSI LUNs claimed by the qfle3i driver

    ESXi might fail during reboot with VMs running on the iSCSI LUNs claimed by the qfle3i driver if you attempt to reboot the server with VMs in the running I/O state.

    Workaround: First power off VMs and then reboot the ESXi host.

  • VXLAN stateless hardware offloads are not supported with Guest OS TCP traffic over IPv6 on UCS VIC 13xx adapters

    You may experience issues with VXLAN encapsulated TCP traffic over IPv6 on Cisco UCS VIC 13xx adapters configured to use the VXLAN stateless hardware offload feature. For VXLAN deployments involving Guest OS TCP traffic over IPV6, TCP packets subject to TSO are not processed correctly by the Cisco UCS VIC 13xx adapters, which causes traffic disruption. The stateless offloads are not performed correctly. From a TCP protocol standpoint this may cause incorrect packet checksums being reported to the ESXi software stack, which may lead to incorrect TCP protocol processing in the Guest OS.

    Workaround: To resolve this issue, disable the VXLAN stateless offload feature on the Cisco UCS VIC 13xx adapters for VXLAN encapsulated TCP traffic over IPV6. To disable the VXLAN stateless offload feature in UCS Manager, disable the Virtual Extensible LAN field in the Ethernet Adapter Policy. To disable the VXLAN stateless offload feature in the CIMC of a Cisco C-Series UCS server, uncheck Enable VXLAN field in the Ethernet Interfaces vNIC properties section.

  • Significant time might be required to list a large number of unresolved VMFS volumes using the batch QueryUnresolvedVmfsVolume API

    ESXi provides the batch QueryUnresolvedVmfsVolume API, so that you can query and list unresolved VMFS volumes or LUN snapshots. You can then use other batch APIs to perform operations, such as resignaturing specific unresolved VMFS volumes. By default, when the API QueryUnresolvedVmfsVolume is invoked on a host, the system performs an additional filesystem liveness check for all unresolved volumes that are found. The liveness check detects whether the specified LUN is mounted on other hosts, whether an active VMFS heartbeat is in progress, or if there is any filesystem activity. This operation is time consuming and requires at least 16 seconds per LUN. As a result, when your environment has a large number of snapshot LUNs, the query and listing operation might take significant time.

    Workaround: To decrease the time of the query operation, you can disable the filesystem liveness check.

    1. Log in to your host as root.
    2. Open the configuration file for hostd using a text editor. The configuration file is located in /etc/vmware/hostd/config.xml under plugins/hostsvc/storage node.
    3. Add the checkLiveFSUnresolvedVolume parameter and set its value to FALSE. Use the following syntax:

      <checkLiveFSUnresolvedVolume>FALSE</checkLiveFSUnresolvedVolume>

    As an alternative, you can set the ESXi Advanced option VMFS.UnresolvedVolumeLiveCheck to FALSE in the vSphere Client.
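    A minimal sketch of the edit from the ESXi shell, assuming the parameter is added under the plugins/hostsvc/storage node; hostd must be restarted for changes to config.xml to take effect:
      vi /etc/vmware/hostd/config.xml
      /etc/init.d/hostd restart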

  • Compliance check fails with an error for the UserVars.ESXiVPsDisabledProtocols option when an ESXi host upgraded to version 6.7 is attached to a host profile with version 6.0

    The issue occurs when you perform the following actions:

    1. Extract a host profile from an ESXi host with version 6.0.
    2. Upgrade the ESXi host to version 6.7.

    After these actions, the host appears as non-compliant for the UserVars.ESXiVPsDisabledProtocols option even after remediation.

    Workaround:

    • Extract a new host profile from the upgraded ESXi host and attach the host to the profile.
    • Upgrade the host profile by using the Copy Settings from Host from the upgraded ESXi host.
  • After upgrade to ESXi 6.7, networking workloads on Intel 10GbE NICs cause higher CPU utilization

    If you run certain types of networking workloads on an upgraded ESXi 6.7 host, you might see a higher CPU utilization under the following conditions:

    • The NICs on the ESXi host are from the Intel 82599EB or X540 families
    • The workload involves multiple VMs that run simultaneously and each VM is configured with multiple vCPUs
    • Before the upgrade to ESXi 6.7, the VMKLinux ixgbe driver was used

    Workaround: Revert to the legacy VMKLinux ixgbe driver:

    1. Connect to the ESXi host and run the following command:
      # esxcli system module set -e false -m ixgben
    2. Reboot the host.

    Note: The legacy VMKLinux ixgbe inbox driver version 3.7.x does not support Intel X550 NICs. Use the VMKLinux ixgbe async driver version 4.x with Intel X550 NICs.
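    To confirm which driver a given NIC is currently using, you can, for example, run the following command, where vmnicX is a placeholder for the NIC name:
      # esxcli network nic get -n vmnicX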

  • Unable to unstage patches when using an external Platform Services Controller

    If you are patching an external Platform Services Controller (an MxN topology) using the VMWare Appliance Management Interface with patches staged to an update repository, and then attempt to unstage the patches, the following error message is reported: Error in method invocation [Errno 2] No such file or directory: '/storage/core/software-update/stage'

    Workaround:

    1. Access the appliance shell and log in as a user who has a super administrator role.
    2. Run the command software-packages unstage to unstage the staged patches. All directories and files generated by the staging process are removed.
    3. Refresh the VMware Appliance Management Interface, which will now report the patches as being removed.
  • Initial install of DELL CIM VIB might fail to respond

    After you install a third-party CIM VIB it might fail to respond.

    Workaround: To fix this issue, enter the following two commands to restart sfcbd:

    1. esxcli system wbem set --enable false
    2. esxcli system wbem set --enable true
