id | title | summary | description | solution
---|---|---|---|---|
KB10717 | LCM upgrade fails due to drive count mismatch | LCM SDD/HDD firmware and HBA firmware upgrade fail due to drive count mismatch | Scenario-1: Drive firmware upgrade fails leaving the node stuck in phoenix.
In the LCM leader's lcm_ops.out log entries, drive firmware upgrade operations may fail with entries pointing to a drive count mismatch, similar to:
2021-01-20 10:51:56 DEBUG: Phoenix: Drive count before 24 != after update 25
This is caused when the Phoenix operating system takes too long to attach the drives and LCM starts the drive firmware upgrade before all disks have been attached. LCM counts the total number of drives observed on the system before and after the firmware upgrade of each drive. If a new disk is attached to the system during the firmware upgrade of a drive, the count after the upgrade will be higher than the count before, and the LCM operation will fail. In the following example, the firmware upgrade operation for drive /dev/sdd starts at 10:51:30 with a drive count of 24 and ends at 10:51:56 with a count of 25.
--
By looking at the Phoenix dmesg, it is possible to observe that at 10:51:44, in the middle of the firmware upgrade operation for drive /dev/sdd, the last disk is attached to the system, bringing the count to 25 drives. Due to this mismatch, the entire LCM operation fails even though the firmware upgrade was successful for that drive.
$ grep "Attached SCSI disk" dmesg
Scenario-2: HBA firmware upgrade fails leaving the node stuck in phoenix.
Reviewing lcm_ops.out from the LCM leader shows below log snippet:
2020-11-24 23:41:45,975Z INFO helper.py:106 (10.x.x.x, kLcmUpdateOperation, d6841107-3bb9-4796-ad94-ab04aa1f7299)
In the above logs, LCM failed with the message "ERROR: Phoenix: SATA drives before 6 != after update 3". Also, the link rate is 6.0 for only 3 SATA links, hence phoenix detected only 3 disks.
Run the lsscsi command from the phoenix node and verify that all disks are listed. A lower negotiated link speed indicates the link is not capable of reliable transfer at a higher link speed. This is only a temporary state; performing a power cycle (or rebooting the node from phoenix) initiates a renegotiation of the link speed and recovers the link back to the highest link speed. | This issue is resolved in LCM-2.5. Please upgrade to LCM-2.5 or a higher version - Release Notes | Life Cycle Manager Version 2.5 https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-LCM:top-Release-Notes-LCM-v2_5.html If you are using LCM for the upgrade at a dark site or a location without Internet access, upgrade to the latest LCM build (LCM-2.5 or higher) using the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_5:Life-Cycle-Manager-Dark-Site-Guide-v2_5. Solution implemented: In LCM-2.5, LCM no longer counts total drives for verification; instead, it checks the serial numbers of the drives to verify that there is no discrepancy during or after the upgrade. If the upgrade fails and the node is in phoenix, follow KB-9437 https://portal.nutanix.com/kb/9437 to recover the node.
|
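The before/after drive-count comparison described above can be reproduced on sample log lines. A minimal, hypothetical sketch - the sample lines and field positions are assumptions, not the exact lcm_ops.out format:

```shell
# Hypothetical sketch: detect a before/after drive-count mismatch in
# LCM-style log lines. The sample log format is an assumption.
cat > /tmp/lcm_sample.log <<'EOF'
2021-01-20 10:51:56 DEBUG: Phoenix: Drive count before 24 != after update 25
EOF

# Extract the before/after counts from the mismatch line and compare.
line=$(grep '!= after update' /tmp/lcm_sample.log)
before=$(echo "$line" | sed 's/.*before \([0-9]*\) !=.*/\1/')
after=$(echo "$line" | sed 's/.*after update \([0-9]*\).*/\1/')
if [ "$before" != "$after" ]; then
  echo "MISMATCH: before=$before after=$after"
fi
```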
KB8138 | Changing password policies for test Nutanix clusters | Changing password policies for test systems | *** INTERNAL *** Below is the process for changing default password policies for internal Nutanix test/LAB systems - please do not share these steps with external customers or modify system files on external clusters over Webex | By default, password policies on Nutanix clusters are quite restrictive. This procedure is for use on a test/LAB system where less restrictive password policies are acceptable. By default, PE/PC uses PAM authentication. The policies are stored in two files under /etc/pam.d: password-auth and system-auth. The files look like the example below
/etc/pam.d$ sudo cat system-auth
These files can be modified, for example, if you are sharing a repro environment and need the password to remain consistent but a password reset is needed. In that case, "remember=5" can be temporarily set to "remember=0" and the password reset to the defaults that you need. The value can then be restored once the password has been changed |
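The "remember=5" to "remember=0" edit described above can be made with a one-line sed. A minimal sketch against a temporary copy of a sample pam_unix line (the exact line in your system-auth may differ; on a real test system the file would be /etc/pam.d/system-auth, and it should be backed up first):

```shell
# Sketch: temporarily relax password-history enforcement in a PAM file.
# Working on a copy here; the sample line is illustrative only.
cp_file=/tmp/system-auth.copy
echo 'password    sufficient    pam_unix.so sha512 shadow remember=5' > "$cp_file"

# Back up, then set remember=0 so the old password can be reused.
cp "$cp_file" "$cp_file.bak"
sed -i 's/remember=5/remember=0/' "$cp_file"
grep remember "$cp_file"
```

Restore the backup (or set the value back to remember=5) once the password has been changed.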
KB13618 | Newly deployed clusters do not appear in the drop-down when deploying VMs using VM Templates in Prism Central | A recently deployed PE cluster registered to PC does not appear in the cluster list while deploying a VM using VM templates. | A recently deployed PE cluster registered to PC does not appear in the cluster list while deploying a VM using VM templates. This is related to ENG-425348, where a Prism Central clusters/list v3 API call does not return the entire list of clusters. Steps to verify: 1. Check whether any nodes have the cluster map cache by running the following command on the PCVM:
allssh "grep 'Periodic refresh of cluster map' ~/data/logs/aplos.* -l"
Example where one of the nodes has the cluster map cache:
nutanix@NTNX-10-200-224-41-A-PCVM:~/data/logs$ allssh "grep 'Periodic refresh of cluster map' ~/data/logs/aplos.* -l" | Current workaround is to restart aplos on all PCVMs with the following command
allssh "genesis stop aplos aplos_engine"; cluster start
|
KB1250 | Upgrading with CVM fails to boot with missing /.nutanix_active_svm_partition | Upgrading with CVM fails to boot with missing /.nutanix_active_svm_partition | Each Nutanix system uses either the /dev/sda1 or /dev/sda2 partition during the upgrade process. During the upgrade, the boot-up process checks which partition has the partition marker file (/.nutanix_active_svm_partition) to indicate the active partition for the upgrade. The active partition alternates with each subsequent upgrade and is the partition where the upgrade package will be deployed. If the marker file is absent, boot-up will fail with the following message.
The below log snippets will appear on the console of the CVM :
Checking /dev/sda1 for /.nutanix_active_svm_partition
| Note:Please check the md5sum of the ServiceVM.iso before proceeding with any recovery steps.If it is different from a working, upgraded CVM, replace the iso with the correct one from the working CVM.
Create svmrescue.iso from another CVM in the cluster with make_iso.sh svmrescue RescueShell 50 (cd ~/data/installer/* first). SCP the svmrescue.iso to the local datastore of the host with the failed CVM. Power > Shut Down Guest (for the failed CVM). Edit Settings. Select Hardware > CD/DVD Drive 1. Select Datastore ISO File and point it to the svmrescue.iso. Boot up with svmrescue and enter the rescue shell. Check where /.nutanix_active_svm_partition resides. Mount /dev/sda2 and touch .nutanix_active_svm_partition on /dev/sda2. Reboot the CVM. |
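The "check where the marker resides" step amounts to looking for the marker file on each candidate partition. This is a simulation sketch using temporary directories in place of the mounted /dev/sda1 and /dev/sda2 filesystems (paths are stand-ins, not the real rescue-shell mounts):

```shell
# Simulation: find which of two "partitions" holds the active-partition
# marker. Temp directories stand in for the mounted sda1/sda2.
mkdir -p /tmp/sda1 /tmp/sda2
touch /tmp/sda2/.nutanix_active_svm_partition   # pretend sda2 is active

for part in /tmp/sda1 /tmp/sda2; do
  if [ -f "$part/.nutanix_active_svm_partition" ]; then
    echo "active partition marker found on $part"
  fi
done
```

In the real rescue shell, the same loop would run over the mount points of /dev/sda1 and /dev/sda2.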
KB3591 | Unable to expand the cluster without the latest iso_whitelist.json file, error "Hypervisor installer is not compatible" | Cluster expansion requires the iso_whitelist.json file. In most instances, the default iso_whitelist.json file that is present in Prism works. In some cases, you need to manually upload the latest iso_whitelist.json to Prism, as it might not contain the ISO image of your latest hypervisor. | An error message is generated when attempting to expand a cluster.
Hypervisor installer is not compatible.
You are more likely to experience this issue if you are using the latest hypervisor version. If AOS is newer than the hypervisor, the issue might not occur. | If you are using an older version of Foundation, the iso_whitelist.json file in Foundation may not contain the ISO image of the latest hypervisor. If the whitelist does not contain the MD5 hash of the hypervisor ISO image that you are adding to the cluster, the cluster expansion fails because the pre-upgrade checks fail. To resolve this issue: 1. Upgrade Foundation https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v6_1:upg-cluster-foundation-upgrade-wc-t.html to the latest version. Upgrading Foundation to the latest version updates the iso_whitelist.json file in the cluster and resolves the issue. OR 2. If upgrading Foundation is not possible, download the latest iso_whitelist.json from the Foundation download page https://portal.nutanix.com/page/downloads?product=foundation and manually upload the file to Prism. Always use the latest Foundation version where possible.
After uploading the latest iso_whitelist.json file, you should now be able to successfully upgrade and expand the cluster.
Note: If any other bundle is uploaded, such as an AOS tarball, the expansion will fail with the same error message. From AOS 6, the cluster expand dialog box looks different. Here is the page for uploading iso_whitelist.json in AOS 6.x and later: If the issue persists after uploading the new iso_whitelist.json, verify that the build number is not yet included in the iso_whitelist.json file and proceed with imaging the node using Foundation, then add it to the cluster. |
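The pre-check described above - "is this ISO's MD5 hash present in the whitelist?" - can be reproduced manually before attempting expansion. A minimal sketch; the whitelist snippet below is a made-up example, not the real iso_whitelist.json schema:

```shell
# Sketch: check whether an ISO's MD5 hash appears in iso_whitelist.json.
# Both files here are illustrative stand-ins.
echo 'fake hypervisor iso contents' > /tmp/hypervisor.iso
md5=$(md5sum /tmp/hypervisor.iso | awk '{print $1}')

printf '{"iso_whitelist": {"%s": {"hypervisor": "esx"}}}\n' "$md5" > /tmp/iso_whitelist.json

if grep -q "$md5" /tmp/iso_whitelist.json; then
  echo "ISO is whitelisted"
else
  echo "ISO missing from whitelist - upload a newer iso_whitelist.json"
fi
```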
KB11931 | OVA export from Prism Central fails with 'Internal Server Error. kInternalError: Internal Error: 5 :Error accessing ovas capabilities' | Following steps for 'Exporting a VM as an OVA' in the Prism Central (PC) Infrastructure Guide, OVA export from PC fails with 'Internal Server Error. kInternalError: Internal Error: 5 :Error accessing ovas capabilities'. | Issue encountered when following the instructions in the Prism Central (PC) Infrastructure Guide for ' Exporting a VM as an OVA https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-vm-export-as-ova-pc-t.html'.
After clicking 'Export', the following error occurs:
Internal Server Error. kInternalError: Internal Error: 5 :Error accessing ovas capabilities.
The Prism Element (PE) clusters to which the OVA is being exported meet the prerequisites of a minimum AOS version of 5.18 and a minimum of PC 5.18. Performing this task with the VM powered on or off or exporting with disk format as QCOW2 or VMDK returns the same error.
Log messages that can be seen in Aplos (~/data/logs/aplos.out) on Prism Central VM:
2021-05-24 18:03:41,823Z ERROR resource.py:217 Traceback (most recent call last):
Metropolis (~/data/logs/metropolis.out) log may also record the error:
E0513 21:38:09.640903Z 18838 base_ova_task.go:202] Error forwarding request: asn1: structure error: tags don't match (2 vs {class:0 tag:16 length:1511 isCompound:true}) {optional:false explicit:false application:false private:false defaultValue:<nil> tag:<nil> stringType:0 timeType:0 set:false omitEmpty:false} int @4 | This is caused by an incorrect Prism Central certificate setup.
What is expected:
nutanix@PCVM$ sudo sed -e 1b -e '$!d' /home/private/server.key
Example of incorrect certificate setup:
nutanix@PCVM$ sudo sed -e 1b -e '$!d' /home/private/server.key
Generate a self-signed certificate that adheres to the formatting the system expects by issuing the command below from a Prism Central VM:
nutanix@PCVM$ ncli ssl-certificate generate
At this point, you should be able to test the OVA export. Review and correct your custom SSL certificates before you upload them again to Prism Central. |
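Before re-uploading custom certificates, the key file's formatting can be sanity-checked the same way the sed inspection above does: the first line must be a PEM BEGIN header with nothing before it. A sketch against a sample file (the sample key content is illustrative):

```shell
# Sketch: verify a PEM private key starts directly with a BEGIN header
# (i.e. no leading "Bag Attributes" or other metadata lines).
key=/tmp/server.key
printf -- '-----BEGIN PRIVATE KEY-----\nMIIfake...\n-----END PRIVATE KEY-----\n' > "$key"

first=$(head -n 1 "$key")
case "$first" in
  "-----BEGIN "*) echo "header OK: $first" ;;
  *)              echo "unexpected header: $first - fix the key before upload" ;;
esac
```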
KB13287 | [host_nic_error_check] RX_missed/RX_CRC errors against unused interface eth0 after AOS upgrade to 5.20.x and NCC version to 4.5.0.2 | RX_missed/RX_CRC errors report a high error rate or a false-positive alert message for the wrong network interface "eth0" after AOS upgrade to 5.20.x and NCC upgrade to 4.5.0.2 | The cluster may start getting "NIC eth0 RX_Missed Error Rate High" alerts after AOS upgrade to 5.20.x along with NCC version 4.5.0.2, although eth0 is not in use. See the example below for the symptom in Prism. The cluster may also receive "NIC eth0 RX_CRC Error Rate High" as seen below:
ID : xxxx
Follow the steps below to verify the symptoms.
Check the health_server.log. In the below example, NCC logged rx_missed_errors against "eth11":
2022-05-11 19:09:04,698Z INFO base_plugin.py:722 [host_nic_error_check] Overriding service_vm_external_ip in dst(xx.xx.xx.71) with that from src(xx.xx.xx.71)
Check atlc.out to confirm whether it reported the wrong interface (eth0). If so, this is the problem, as the health server log reported eth11, not eth0.
I0511 19:09:04.762900Z 25621 alert.go:167] Alert ID: A6014
Check the alert_manager (leader) log. It also indicates the wrong network interface, eth0:
I20220601 19:09:04.383710Z 11249 process_alerts_rpc_op.cc:49] ProcessAlerts RPC received with alerts { alert_id | The health server picks up the right interface, but atlc and alert_manager continue to interpret the interface as eth0, which is the problem. The root cause has been identified in NCC version 4.5.0.2 in ENG-474034 https://jira.nutanix.com/browse/ENG-474034. If you run into this issue, collect a log bundle and attach your case to this ENG. As a temporary fix, a cluster-wide restart of the health_server, insights and alert_manager services could help. If the problem still persists despite the restart of the services, contact Engineering via ENG-474034 https://jira.nutanix.com/browse/ENG-474034 for further assistance.
$ allssh "genesis stop alert_manager cluster_health insights_server; cluster start" |
KB15469 | A post-procedure of reconfiguring the IP address of C-MSP enabled Prism Central VMs | An additional step is required to reconfigure the IP address of Prism Central VMs with C-MSP enabled | Reconfiguring Prism Central IP address requires additional steps if C-MSP is enabled.
The following symptoms may be observed after completing the steps in the product documentation.
Verification
All services are UP in cluster status command.
nutanix@pcvm$ cluster status
Microservices Infrastructure (C-MSP) is enabled. Verify that prism-central entry appears in the following command.
nutanix@pcvm$ mspctl cluster list
Note: C-MSP is enabled by default on pc.2022.9 or later.
mspctl cluster health shows some components are not healthy.
nutanix@pcvm$ mspctl cluster health prism-central
Note: COMPONENT lines may be different depending on the version of Prism Central.
Accessing PCVM IP addresses or the Virtual IP address by web browsers may show some errors or get stuck in the loading circle.
Error examples:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
no healthy upstream
Prism Central shows a Disconnected state on the registered Prism Element Home screen. | Perform the following steps after starting the PC cluster with the cluster start command.
Run the following command, or download mspserviceregistry-cli https://download.nutanix.com/kbattachments/15469/mspserviceregistry-cli and put it in /home/nutanix/bin of one of the PCVMs.
nutanix@pcvm$ wget https://download.nutanix.com/kbattachments/15469/mspserviceregistry-cli -O /home/nutanix/bin/mspserviceregistry-cli
Run the following command:
nutanix@pcvm$ PYTHONPATH=/home/nutanix/cluster/bin /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/cmsp_ip_reconfig
Run the following command to verify that C-MSP became healthy.
nutanix@pcvm$ mspctl cluster health prism-central
Verify that Prism GUI login became available. |
KB16938 | Objects - Error saving the bucket due to "reduce your request rate" messages. | When a customer tries to create an Objects bucket, the error "reduce your request rate" is shown in the UI. This is due to a RocksDB leak during compactions in the metadata server pod running in the Objects cluster. | 1. When a user tries to create/delete a bucket, the following message is displayed in the Objects UI.
Error saving the bucket. Bucket create failed : HTTP request failed with err: <?xml version="1.0" encoding="utf-8"?><Error><Code>SlowDown</Code><Message>Reduce your request rate.</Message><Resource>/veeam-backup-app-media</Resource></Error>
2. Verify the version is Objects 4.2 (only this version is affected)
nutanix@PCVM~$ docker ps | grep -i aoss
3. Log into the Objects MSP cluster using KB-8170 https://portal.nutanix.com/kb/8170. 4. Verify that the ms-server pod is running.
NOTE: There can be more than one ms-server pod running, if that is the case, run the steps below in all pods.
[nutanix@Objects-abcd-default-0 ~]$ sudo kubectl get pods -o wide | grep -i ms
5. Connect to the MS-Server pod with a bash shell
[nutanix@Objects-abcd-default-0 ~]$ kubectl exec -it ms-server-0 -- bash
6. Check for the entries "Result incomplete: Write stall" specifically for the RocksDB in the metadata server logs.
[nutanix@ms-server-0 /]$ grep -i 'Result incomplete: Write stall' /home/nutanix/logs/ms-server-0/logs/metadata_server.out
| This issue will be fixed in Objects 4.3; more information is described in ENG-604903 https://jira.nutanix.com/browse/ENG-604903. The workaround is to restart the "ms-server-0" pod within the Objects cluster; this is recommended to be done by a Support Tech Lead/Staff SRE or DevEx. |
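The pod restart mentioned in the workaround would be performed with kubectl inside the Objects MSP cluster. The sketch below only builds the command string rather than executing it - the pod name and use of sudo are assumptions based on the session output above, and the actual restart should be carried out by an STL/Staff SRE/DevEx:

```shell
# Sketch: build the kubectl command that would restart an ms-server pod.
# Deleting the pod lets its controller (e.g. a StatefulSet) recreate it.
build_restart_cmd() {
  echo "sudo kubectl delete pod $1"
}

# Would be run on the Objects MSP master node, once per ms-server pod.
build_restart_cmd ms-server-0
```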
KB2430 | [AWS/S3] How To Download Older Phoenix, AHV, PC, Foundation and AOS Images | [AWS/S3] How To Download Older Phoenix, AHV, PC, Foundation and AOS Images | The Nutanix Support Portal provides the latest AOS, PC, Foundation, Phoenix, Nutanix Files, etc. images in each currently supported code stream. A generic Phoenix image to be used for SATADOM replacements or other maintenance issues is available on the Nutanix public portal as well. See KB 3523 for instructions for creating a Phoenix or AHV ISO on an existing / running cluster.
Note: Old versions of code should not be used to foundation an entire cluster - we should be pushing customers to adopt the latest version of code in their desired code stream. | Note: AWS authentication has moved to OKTA. All SREs should have the AWS-Master-Console App assigned in OKTA. Open a ServiceNOW ticket if you don't see it in OKTA.
Access the AWS app via OKTAClick on S3 that is located under Storage.
Navigate to the required image:
Note: Portal mapping is to: https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/secure/downloads/ https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/secure/downloads/Check the above link first to try and find what you are looking for, before traversing the links below.
For AHV ISO images:
https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/ahv-iso https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/ahv-iso
For AOS Specific Phoenix Images:
https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/phoenix-builds/ https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/phoenix-builds/For legacy Phoenix builds: ntnx-portal -> phoenix (look inside phoenix-builds first)
For AOS upgrade bundles:
https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/releases/ https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/releases/
For PC:
One Click Deployment:
https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/pc/one-click-pc-deployment/ https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/pc/one-click-pc-deployment/
Upgrade:
https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/pc/pc_upgrade/ https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/pc/pc_upgrade/
Foundation:
https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/Foundation/?region=us-east-1 https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal/Foundation/?region=us-east-1
For Older Hypervisor JSON files
To check the available supported JSON files on S3, go to ntnx-portal > hypervisor, look for the AOS version the customer is running and click on that folder to see all the supported JSON files for that version. https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal?prefix=hypervisor/ https://s3.console.aws.amazon.com/s3/buckets/ntnx-portal?prefix=hypervisor/ To check the latest AOS version that a given ESX version supports, check the interoperability page. https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix
Click on the desired phoenix build
Example:
Select the desired ISO and click the ' Actions -> "Download as" ' button
Example:
A pop-up will appear on the right side of the page. Right-click the provided link and choose "Copy Link Address" to copy the link to your clipboard. Note: On Mac, you may need to hold down a right-click (two fingers) to access the menu.
Example:
Provide the hyperlink to your customer. Note: This link is extremely long as it contains temporary S3 AWS credentials that expire 5 minutes after the link is created. To get a link without an expiration timeout, you can cut the link down to only the path to the file.
For example link from AWS which will expire in 5 minutes:
https://ntnx-portal.s3.us-east-1.amazonaws.com/releases/euphrates-5.5.5-stable/nutanix_installer_package-release-euphrates-5.5.5-stable.tar.gz?response-content-disposition=attachment&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEN3%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDGV1LWNlbnRyYWwtMSJHMEUCIDI%2FRop1NNNf3ZdA7j%2B%2Fa81l3VdxW70xSw5aphAv3V38AiEAgbRqCxU0WJhFGQ3L0PnQoHMjNoS3sbKuChCSgc5vuAsqvgMIdhAAGgw1MTk2MzU1NzU5ODUiDHDYmALuvPitR3tp%2BSqbA6ITqj5QVR%2BvgAP8yy0eNAwdjS8dyO7bAJyPQqcuJP8j6yTbYdT0w5qr3IZHhnp2P746adMDDjEriM8Cnb9p5e47a6W4myPBQQDxqRzObS8k8IuWDC%2BYkO6v1bgJ5PwPZv%2BM77CcpF0KJdAwzHSsCF1oC0Zygui1y9bz643Jsc1yt3Sfbgped7dJoGyMtEFtvwaHtBm2p457ASvjEorCobP4hPmZzt0kkHWVWVcA%2ByLHJT%2BzWHhjq%2ByzgmwVQeQecM%2FsAFNu%2BN6lIP7I5PGOpDk%2BLKZPALqMTqvF2BtY6qn967Fa%2F0PhYXiJ8j6GhJJ72zOP3Ang0YwiN6ek3WEDYt%2FXgI%2FWK%2Bwbwc20zXBdHYW9h2ba5%2B8EJO0rscNb2vPLiU%2FvtBvW6nycVuh56rTVvN9iinoWvsWsMLO7va4rXiL1C7Gvolwt8NX7IUWG3ccBZFLIGCbX706hmbV8gdM5iNvskPU3m1hEfOycyl88saWd%2FYu%2Bc8yQNDlZeEWmL1TAhTDoDnivFe2S%2BBe6qtS6sxvXaf9rqm1soP10BjCDucj%2BBTqEAiEKxtcEmLjFKXLNgsUFFOlcEZJNTsdHVhKz8yS7hhtIh5r1kjxnJnnGW14vYX%2BRva2WWBGRxifNrS56iDV9BoWPHO2T0yFcquhkf0OW2FtVduyga6KCcpSwBoiCrSsic0%2FJLQDI%2F7QvphIXIfTdo%2BmYjbWjURtQfEAXgWPMEW7dgpR7zM%2FSRmyxFpCWzOKyhsz0QvW2e3xN8a7V07H3faSjGxpt%2BuGCKJSfgMtsIXrhjIorOFO%2FghKtQi1B%2BdE%2B5LnpPaMi8p0WSiqBva3rJiwGi2EN0CCc%2F%2FPc73ebYRpVirW8cShX4yT0bWjdP9poWvlrL7lE8hCcacGhOpWF5e7JrnKb&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20201210T134947Z&X-Amz-SignedHeaders=host&X-Amz-Expires=300&X-Amz-Credential=ASIAXR7FRUSYSUJ7BOLE%2F20201210%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=e58d00a7a1ddcc8c16db81ecd210d7869811296833521016c8b43b2570c5a9b9
Remove the rest of the link so it will have only path to the file and will not expire:
https://ntnx-portal.s3.us-east-1.amazonaws.com/releases/euphrates-5.5.5-stable/nutanix_installer_package-release-euphrates-5.5.5-stable.tar.gz
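Trimming the presigned query string down to the bare object path can be done in one line. A sketch (the URL below is illustrative):

```shell
# Sketch: strip everything from the first '?' onward in a presigned S3
# URL, leaving the plain object path that does not expire.
url='https://ntnx-portal.s3.us-east-1.amazonaws.com/releases/foo.tar.gz?X-Amz-Expires=300&X-Amz-Signature=abc'
clean=${url%%\?*}        # equivalently: echo "$url" | cut -d'?' -f1
echo "$clean"
```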
NOTE: Some upgrade/install bundles contain multiple folders named v1, v2, etc.; these folders contain different versions of the metadata.json file. Always choose the metadata.json file from the latest version folder. In the example below, DO NOT use the old metadata.json; always use the metadata.json file inside the latest folder, which in this case is v2
For downloading the SPP bundle for the HPE firmware upgrade:
If hitting KB-15579 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V0000010xccSAA&a=78682339848e8a799036d05a389d255e999ee0294216669f93a9fc94272bb3dce11722e998d5679b, follow the below path to download the compatible SPP bundle:
NOTE: If you cannot find the AOS / Phoenix / AHV ISO, you can generate one on customer's cluster: See KB 3523 for detailed instructions.
NOTE: If you cannot find what you are looking for, head-over to slack channel #support-readiness and ask for assistance. Thanks! |
KB4635 | Hyper-V: Host domain join in Prism fails with error "There is no such object on the server" or "Invalid OU path" | Joining the hosts and cluster to an AD domain via Prism fails when the default "Computers" CN is not present in the AD environment | Joining the Hyper-V hosts and cluster to domain may fail with an error when the default "Computers" container (CN) is not present:
There is no such object on the server.
Invalid OU path. | This occurs when the default "Computers" container is not present (or renamed) in the domain provided. The target OU Path will need to be filled out in the UI to proceed. For example:
OU=nutanix,OU=servers,DC=abc,DC=com
In older AOS versions, the OU Path field is not available in Prism. In this case, NCLI has to be used to join the cluster and hosts to the domain. For example, run on any CVM:
nutanix@cvm:~$ ncli cluster join-domain domain=abc.com cluster-name=abc-cluster external-ip-address=xx.xx.xx.xx logon-name=abc.com\username name-server-ip=yy.yy.yy.yy ou-path="OU=nutanix,OU=servers,DC=abc,DC=com"
|
KB15490 | AHV host panic crashes with the "uc_decode_notifier" function reference in the crash stack | An AHV host may panic crash with the "uc_decode_notifier" function reference in the crash stack. The crash may indicate a memory failure on a page. | Thus far, this crash has been seen only in the following config and observations:
NX hardware G6 or G7 hardware model. AHV 20220304.xxx build. There are no CECC/UECC or other hardware issues recorded in the IPMI events. Please cross-check the IPMI SEL events for any hardware issues. "mce: [Hardware Error]: Machine check events logged" - this type of error is logged in dmesg/messages before the crash. These errors should be present in the vmcore-dmesg.txt log file as well. However, nothing will be logged in the "mce" log file under /var/log/. OOB analysis will be "No trouble detected." Refer to KB-2893 https://portal.nutanix.com/kb/2893 for more details. On the T/S page of the IPMI GUI, the "Download" button will be grayed out.
The impacted AHV host may crash with the following stack.
[1982554.525627] Kernel panic - not syncing: Memory failure on page 50ed0b4
These crash stacks are recorded in the vmcore-dmesg.txt and the vmcore files. The vmcore and the dmesg text file will be under the /var/crash/ directory on AHV. The presence of files under this directory will also trigger the NCC check ahv_crash_file_check failure. See KB-4866 https://portal.nutanix.com/kb/4866 for reference. Before the crash stack, we may also see the following MCE log entries.
[1982078.205171] mce: [Hardware Error]: Machine check events logged | The crash is due to a hardware issue. The culprit could be a DIMM module or a CPU memory controller. The next step is determining which of these two components caused the crash. To do so, we must perform a crash dump analysis. After collecting a log bundle, open a Tech-Help ticket for the crash dump analysis. The log bundle can be of just the impacted node. In the Tech-Help ticket, include the following command outputs by following the instructions from KB-8705 https://portal.nutanix.com/kb/8705.
sys |
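Checking for the MCE breadcrumb described above can be scripted. A sketch against a sample dmesg excerpt - on an affected host the real input would be the vmcore-dmesg.txt under /var/crash/:

```shell
# Sketch: look for Machine Check Exception entries in a dmesg-style log.
# Sample lines stand in for /var/crash/<timestamp>/vmcore-dmesg.txt.
cat > /tmp/vmcore-dmesg.txt <<'EOF'
[1982078.205171] mce: [Hardware Error]: Machine check events logged
[1982554.525627] Kernel panic - not syncing: Memory failure on page 50ed0b4
EOF

if grep -q 'Machine check events logged' /tmp/vmcore-dmesg.txt; then
  echo "MCE entries present - collect a log bundle and open a Tech-Help ticket"
fi
```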
KB14262 | Alert - A200614 - FNSVersionMisMatch | Investigating Flow Network Security PE Minimum Version (FNSVersionMisMatch) alerts on a Nutanix cluster | This Nutanix article provides the information required for troubleshooting the alert A200614 - FNSVersionMisMatch - Flow Network Security PE Minimum Version for your Nutanix cluster.Alert OverviewThe FNSVersionMisMatch - Flow Network Security PE Minimum Version alert is generated on Prism Central (PC) when a Flow Network Security version on an AHV PE(s) registered to the PC does not meet the minimum version requirements for the PC running Flow Network Security version with microsegmentation enabled.Sample Alert
Block Serial Number: 18SMXXXXXXXX
Potential Impact: Certain features of the Flow Network Security version available to configure on the PC raising the alert may not be implemented or may not work as expected on the affected PE cluster(s). Output messaging:
[
{
"Check": "Description",
"FNSVersionMisMatch | Flow Network Security PE Minimum Version": "Validates Flow Network Security version on attached PEs meets minimum requirements for a PC with Flow Network Security microsegmentation enabled"
},
{
"Check": "Cause of failure",
"FNSVersionMisMatch | Flow Network Security PE Minimum Version": "Not all PEs registered to this PC meet the minimum Flow Network Security PE version required for the enabled feature(s)."
},
{
"Check": "Resolutions",
"FNSVersionMisMatch | Flow Network Security PE Minimum Version": "Use LCM to run an inventory of Flow Network Security PE on each AHV cluster attached to this microseg-enabled PC and upgrade those which do not meet the minimum requirements. Refer to KB14262 for further information"
},
{
"Check": "Impact",
"FNSVersionMisMatch | Flow Network Security PE Minimum Version": "The affected PE cluster(s) may not support Flow Network Security policy features that are in use"
},
{
"Check": "Alert ID",
"FNSVersionMisMatch | Flow Network Security PE Minimum Version": "A200614"
},
{
"Check": "Alert Title",
"FNSVersionMisMatch | Flow Network Security PE Minimum Version": "Flow Network Security version too low on registered PE cluster /\t\t\tFlow Network Security PE version too low on cluster XXXXXX"
},
{
"Check": "Alert Message",
"FNSVersionMisMatch | Flow Network Security PE Minimum Version": "Flow Network Security PE version on registered cluster XXXXXXX does not meet the minimum version\t\t\t required to support microseg feature(s) in use on this PC. XXXXXXX"
}
] | Check the current FNS version running on the reported affected PE cluster:
via PE Prism UI LCM Inventory
via CVM CLI, view the value present in the file /home/nutanix/flow/flow_version.txt. The following example shows how to extract this information from all CVMs in the same PE cluster:
nutanix@CVM:~$ allssh cat /home/nutanix/flow/flow_version.txt
Compare the discovered FNS for PE version with that of the PC raising the alert.
To resolve, upgrade Flow Network Security PE via LCM on the affected PE cluster(s).
Run LCM Inventory on the PE Prism UI to ensure the latest available versions are listed as upgrade options.
For example, on the 'Updates' page LCM should show the current version and any newer versions available for upgrade:
Select the recommended version of FNS on PE to match what is running on the PC which raised this alert. Review the Release Notes and User Guide available on the Support Portal https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Flow%20Network%20Security%202.0%20(Next%20Generation%20of%20Flow%20Microsegmentation). Proceed with the upgrade. Refer to the LCM User Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=LCM available on the Support Portal for details. Validate that the correct version is running post-upgrade using the earlier mentioned steps.
If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com. |
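The "compare the discovered FNS version with the minimum" step above can be done with sort's version ordering. A sketch; the version numbers below are illustrative, not real FNS minimums:

```shell
# Sketch: return success if $1 >= $2 under GNU sort's version ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

pe_fns_version="3.1.0"      # e.g. read from /home/nutanix/flow/flow_version.txt
min_required="4.0.0"        # illustrative minimum required by the PC

if version_ge "$pe_fns_version" "$min_required"; then
  echo "FNS on PE meets the minimum - no upgrade needed"
else
  echo "FNS on PE below minimum - upgrade via LCM"
fi
```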
KB15678 | False-positive "Target cluster is unreachable or Round-trip travel (RTT) to target cluster is greater than 5ms." alert gets generated on PC if custom SSL certificates with additional attributes are in use | False-positive "Target cluster is unreachable or Round-trip travel (RTT) to target cluster is greater than 5ms." alert gets generated on PC if custom SSL certificates with additional attributes are in use | On Prism Central, 'Target cluster is unreachable or Round-trip travel (RTT) to target cluster is greater than 5ms' alerts are being generated, and remote_site_latency_check on the PCVM fails with a 'Latency is too high' message for the remote site PE cluster:
Detailed information for remote_site_latency_check:
As per KB-7366 http://portal.nutanix.com/kb/7366, it was confirmed that the RTT response time between the source PE cluster and the remote site PE cluster CVMs is less than the 5 ms threshold set for the above remote_site_latency_check. Similarly, the RTT response time between the source and remote site PC clusters is below the 5 ms threshold as well. On checking the prism_proxy_access_log.out logs on the PCVM where the NCC remote_site_latency_check WARN alert is being generated, some 403 HTTP error responses can be observed coming back for the 'synchronous_replication_capable' fanout proxy API call going from PC to the source PE cluster registered to this PC:
/home/apache/ikat_access_logs/prism_proxy_access_log.out (PCVM)
magneto.out logs on the PCVM show BEARER_TOKEN_BAD_SIGNATURE error between the PC and PE cluster for the above call:
magneto.out (PCVM)
The Prism Element cluster uses a custom SSL certificate. On checking the custom certificates present at the CVM level of the PE cluster, it was observed that the private key (/home/private/server.key) for these custom certificates contains additional 'Bag Attributes' before the 'BEGIN PRIVATE KEY' section:
nutanix@NTNX-CVM:~$ allssh "sudo sed -e 1b -e '$!d' /home/private/server.key" | This issue happens due to these additional attributes being present in the custom SSL certificate on the Prism element cluster which is registered to the PC. Due to this, PCVM is unable to authenticate properly during the 'synchronous_replication_capable' fanout proxy API calls being made from the PC to the PE cluster. So, it is unable to get the RTT response back via these API calls, which in turn causes the above alert to be generated on PC. To resolve this issue,
Use the solution steps listed in KB-5775 http://portal.nutanix.com/kb/5775 to remove these additional attributes from the local private key first on the customer's side (not on the CVMs), and then re-upload the SSL certificates on the PE and PC clusters. Once this is done, wait ~24 hours for the local cache in aplos to clear automatically and for the rc_refresh_token_zk_lock to clear after the correct custom SSL certificates are updated on the cluster. Re-run the NCC remote_site_latency_check on the PCVM (KB-7366 http://portal.nutanix.com/kb/7366) to verify that the check now passes and the alert is resolved.
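The attribute removal in step 1 is performed on the customer's side with their own tooling, but the idea can be sketched with a small shell helper (hypothetical, not a Nutanix utility) that keeps only the PEM block of a key file:

```shell
# Hypothetical helper: print only the PEM body of a key file, dropping any
# "Bag Attributes" or other text left before the BEGIN header.
strip_bag_attributes() {
  sed -n '/-----BEGIN/,/-----END/p' "$1"
}

# Demo on a throwaway file that mimics an affected server.key
tmpkey=$(mktemp)
cat > "$tmpkey" <<'EOF'
Bag Attributes
    localKeyID: 01 02 03
Key Attributes: <No Attributes>
-----BEGIN PRIVATE KEY-----
MIIBVAIBADANBgkqhkiG9w0BAQEFAASCAT4wggE6AgEAAkEA...
-----END PRIVATE KEY-----
EOF
strip_bag_attributes "$tmpkey"
```

For RSA keys, re-exporting the key with OpenSSL (for example, `openssl rsa -in server.key -out server-clean.key`) produces a similar attribute-free result.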
|
KB14135 | NCC Health Check: duplicate_dsip_check | The NCC health check duplicate_dsip_check validates if the DSIP is duplicated on the network by comparing the MAC addresses of the CVMs for the cluster with the MAC addresses that respond when pinging the DSIP from each CVMIP. | The NCC health check plugin duplicate_dsip_check probes for any other devices which might also be using the data services IP, and will raise a notification if the data services IP is used on a non-CVM.
Running the NCC Check
It can be run as part of the complete NCC check by running
ncc health_checks run_all
Or individually as:
ncc health_checks network_checks duplicate_dsip_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 5 minutes, by default. This check will generate Critical alert A103112 after 3 consecutive failures across scheduled intervals.
Sample output
For status: PASS
nutanix@cvm:~$ ncc health_checks network_checks duplicate_dsip_check
For status: FAIL
Running : health_checks network_checks duplicate_dsip_check
With no data services IP configured:
Running : health_checks network_checks duplicate_dsip_check
Output messaging
[
{
"Description": "Check for duplicate data services IP address in a cluster"
},
{
"Causes of failure": "Nutanix cluster data services IP address might conflict with one or more interfaces on the same network."
},
{
"Resolutions": "Attempt to fix the IP address of the other host first. If not possible, modify the Data services IP address following KB-8216."
},
{
"Impact": "The connectivity to the cluster data services IP can become unstable or unavailable, leading to performance impact, redundancy concerns, and potential downtime."
},
{
"Alert ID": "A103112"
},
{
"Alert Title": "Duplicate data services IP Alert"
},
{
"Alert Smart Title": "Duplicate data services IP address detected in the cluster"
},
{
"Alert Message": "Duplicate data service IP found for: dsip. Received response from remote_mac_list which does not correspond to MAC address of any CVM in the cluster."
}
] | Resolving the Issue
Check FAIL:
The data services IP should be unique and not duplicated by any other devices in the environment. If you receive alerts stating another device is using the data services IP, take note of the MAC address referenced and see if you can locate this within your network. If you are unable to locate the computer in question, you may need to work with your network administrator to identify what is duplicating the data services IP.
Note: Changing the data services IP is an option, but be cautious of this, as another Nutanix cluster, backup application, or entity could rely on the current data services IP configuration. See KB-8216 https://portal.nutanix.com/kb/8216 for more details on changing the data services IP.
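The core of the check's logic can be sketched as a membership test: the MAC address that answers for the DSIP should belong to one of the cluster's CVMs. All MAC values below are placeholders:

```shell
# Sketch of the check's core logic with placeholder values: the MAC that
# answered an ARP probe of the DSIP is compared against the cluster's CVM MACs.
cvm_macs="50:6b:8d:aa:bb:01
50:6b:8d:aa:bb:02
50:6b:8d:aa:bb:03"

is_cvm_mac() {
  echo "$cvm_macs" | grep -qix "$1"
}

responding_mac="00:1a:2b:3c:4d:5e"   # e.g. parsed from arping output
if is_cvm_mac "$responding_mac"; then
  echo "DSIP answered by a CVM - OK"
else
  echo "DSIP answered by non-CVM MAC $responding_mac - duplicate suspected"
fi
```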
Check INFO:
The NCC check returns INFO if no DSIP is configured on the PE cluster. It is recommended to add a DSIP to all Prism Element clusters, as this IP address is also used as a cluster-wide address by clients configured as part of Nutanix Files and other products such as CMSP. More details can be found here: iSCSI Data Services IP Address Impact https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_7:wc-volumes-external-ip-address-c.html
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871. Collect a Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm$ logbay collect --aggregate=true
Attaching Files to the Case
Attach the files at the bottom of the support case on the support portal. If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
Requesting Assistance
If you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers. You can also click the Escalate button in the case and explain the urgency in the comment, and Nutanix Support will be in contact.
Closing the Case
If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case. |
KB16875 | Prism Central: UI unavailable after upgrade to 2023.4.0.2 or 2024.1 | Prism Central UI is unavailable after upgrade to 2023.4.0.2 or 2024.1 due to ikat_proxy service not fully initializing. | Following an upgrade to 2023.4.0.2 or 2024.1, the Prism Central UI is unavailable showing "This site can't be reached".In ~/data/logs/ikat_proxy.out logs, the following repeated pattern will be seen.
2024-05-16 20:55:52,159Z rolled over log file
In ~/data/logs/genesis.out, the following line will be seen.
2024-05-16 20:55:51,144Z INFO 10512720 service_utils.py:1285 Starting ikat_proxy with cmd /home/nutanix/bin/service_monitor --run_as_user=apache /home/nutanix/data/logs/ikat_proxy.FATAL -- /usr/local/nutanix/ikat_proxy/bin/envoy -c /home/nutanix/config/ikat_proxy/envoy.yaml --disable-hot-restart --concurrency 4 --concurrency 16 |& /home/nutanix/bin/logpipe -o /home/nutanix/data/logs/ikat_proxy.out
Service start FATAL in ~/data/logs/ikat_proxy.FATAL will be seen.
nutanix@PCVM:~$ cat ~/data/logs/ikat_proxy.FATAL |tail -5 | This is seen when action was taken to address envoy resource utilization documented in KB-16264 http://portal.nutanix.com/kb/16264. Please contact Nutanix Support https://portal.nutanix.com/ for assistance in restoring access to Prism Central UI. |
KB13927 | LCM task hung with 'Error AttributeError NoneType object has no attribute value' | While performing LCM inventory the task got stalled. genesis restart worked to complete the task. | Problem
LCM Inventory is hung for more than 24 hours. The inventory task has already completed on some hosts, but one host has not progressed, causing the whole process to stall. The standard process documented in How to clear stuck LCM inventory tasks /articles/Knowledge_Base/How-to-clear-stuck-LCM-inventory-tasks shows no upgrade in progress, and following the documented solution did not work.
Identification
The Task can be observed as running:
Task UUID Parent Task UUID Component Sequence-id Type Status
Root Task details are as follows; percentage_complete has not progressed in the last 25 hours:
nutanix@NTNX-CVM:~$ ecli task.get 4f807452-aba3-4cc1-b3ef-3fc606828f11
Check the last 5 lines of lcm_ops.out on all CVMs and verify whether the LCM inventory has completed on some CVMs. The output for a completed task should be: (inventory) LCM operation inventory for CVM is successful. In the sample below, CVM .37 has no information regarding the LCM inventory task.
nutanix@NTNX-CVM:~$ allssh 'tail -n 5 ~/data/logs/lcm_ops.out'
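On larger clusters, the captured allssh output can be scanned programmatically for the CVM that is missing the success line. A sketch against fabricated sample output:

```shell
# Sketch: scan captured "allssh tail" output and report CVMs whose section
# lacks the inventory-success line. The sample output below is fabricated.
sample=$(cat <<'EOF'
================== x.x.x.35 =================
(inventory) LCM operation inventory for CVM is successful
================== x.x.x.36 =================
(inventory) LCM operation inventory for CVM is successful
================== x.x.x.37 =================
DEBUG: waiting for module download
EOF
)

stuck_cvms=$(echo "$sample" | awk '
  /^=+ / { if (cvm != "" && !done) print cvm; cvm = $2; done = 0 }
  /LCM operation inventory for CVM is successful/ { done = 1 }
  END { if (cvm != "" && !done) print cvm }')
echo "CVMs without a completed inventory: $stuck_cvms"
```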
Log in to the CVM you identified in step 3 (in this example, CVM .B)
nutanix@NTNX-CVM:~$ ssh X.X.X.B
Check genesis.out ERROR messages and confirm the following error is observed:
nutanix@NTNX-CVM:~$ grep ERROR ~/data/logs/genesis.out
| In LCM-2.5.0.3:
A log signature has been added that will record a traceback:
Unhandled exception encountered. Traceback:
If any cases are observed post LCM-2.5.0.3, collect LCM logs (refer to KB-7288 http://portal.nutanix.com/kb/7288) and attach the log bundle to ENG-501339 https://jira.nutanix.com/browse/ENG-501339
Workaround:
Restart genesis on the affected CVM (X.X.X.B)
nutanix@NTNX-CVM:~$ genesis stop; genesis start
Confirm the inventory task continues on the rest of the cluster; if it stops with the same error, apply the same remedy. |
KB11369 | Prism does not load Remote Site settings after saving Network Segmentation for DR IPs | Updating a remote site with Network Segmentation for DR may result in indefinite "Loading..." in the Prism UI. | Updating a remote site with Network Segmentation for DR may result in indefinite "Loading..." in the Prism UI.When updating a remote site, the below symptoms may occur:
Network Mapping shows no network information for the local site or remote site. vStore Mapping shows the error: Remote site is currently not reachable. Please try again later. The Inspect Element > Network tab shows all the API calls/URL links return 200 OK. The Inspect Element > Console page shows some TypeError errors. Clicking the "Save" button saves the remote site settings without error.
Select the Remote Site which is just saved and click the "Update" button will see below symptoms:
The remote site details are stuck at "Loading." The Inspect Element > Network tab shows all the API calls/URL links return 200 OK. The Inspect Element > Console page shows some TypeError errors.
From the CLI, "ncli rs ls" shows the Remote Site Status is: relationship establish. | This is a known issue with the Prism UI ( ENG-370048 https://jira.nutanix.com/browse/ENG-370048) which is resolved in AOS 5.20.4 and higher. Workaround: Configure the Remote Site vStore and Network mapping with ncli: 1. Gather the Container name from the local cluster and the remote cluster (the target Container name can also be found in the Prism UI):
nutanix@CVM:~$ ncli ctr ls
2. Gather the Network list from the local cluster and the remote cluster (the target Network list can also be found in the Prism UI):
nutanix@CVM:~$ acli net.list
3. Configure Remote Site vStore mapping:
nutanix@CVM:~$ ncli rs edit name=<Remote Site Name> vstore-map-add=<Local Container Name>:<Remote Container Name>
4. Verify the vStore mapping:
nutanix@CVM:~$ ncli rs ls
5. Configure Remote Site Network mapping:
nutanix@CVM:~$ ncli remote-site add-network-mapping remote-site-name=<Remote Site Name> dest-network=<Remote Network Name> src-network=<Local Network Name>
6. Verify the Network mapping:
nutanix@CVM:~$ cerebro_cli list_network_mapping |
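When several containers need mapping, the ncli commands can be generated from a list of local:remote pairs and reviewed before running them on a CVM. A sketch with placeholder site and container names:

```shell
# Sketch: generate the ncli vstore-mapping commands from a list of
# local:remote container pairs for review before running them on a CVM.
# The site and container names below are placeholders.
remote_site="DR-Site"
mappings="ctr-prod:ctr-prod-dr
ctr-dev:ctr-dev-dr"

cmds=$(echo "$mappings" | while IFS=: read -r src dst; do
  echo "ncli rs edit name=$remote_site vstore-map-add=$src:$dst"
done)
echo "$cmds"
```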
KB12426 | Nutanix Files - Launching Files Console from Prism Element accessed through Prism Central Fails | This KB helps resolve issues encountered when launching the newly introduced Files UI using the Files Console | The following error will be received when launching the Files Console through Prism Central. Accessing the Files Console directly from Prism Element shows no issues.
"Http request to endpoint 127.0.0.1:7509 failed with error. Response status: 2"
Alternatively, a blank page may be displayed. Another scenario is that the page displays "404 page not found". A further situation involves no response upon clicking the File Server name, or the absence of any option to launch the console (a "No Response" error is shown in the Chrome developer tools). Another symptom is that when launching the Files page on a Prism Element opened via Prism Central, the Files page may not load completely: it gives no errors, but it does not populate the entries either.
Additional possible symptom, when inspecting the page for errors in the Web browser, you may notice the following error:
Failed to load resource: the server responded with a status of 503 () | 1. Confirm Minerva directory is present under the below location with nutanix:apache as owners respectively on all the CVMs of PE attached to the PC
nutanix@cvm $ allssh "sudo ls -la /home/apache/www/console/ |grep minerva"
If the directory is not present, copy the Minerva directory from the old path to the new one on all CVMs.
Old path
/home/nutanix/minerva/console/minerva
New Path
/home/apache/www/console/
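The copy in step 1 can be sketched as a simple idempotent shell operation. Throwaway paths stand in for the real /home/nutanix/minerva/console/minerva and /home/apache/www/console/ directories; on a real CVM the copy needs sudo, and ownership must then be set to nutanix:apache:

```shell
# Sketch of step 1's copy, using throwaway paths instead of the real
# minerva source and /home/apache/www/console/ destination directories.
old_root=$(mktemp -d); new_root=$(mktemp -d)
mkdir -p "$old_root/minerva"; echo data > "$old_root/minerva/app.js"

# Copy only if the destination does not already have the directory.
if [ ! -d "$new_root/minerva" ]; then
  cp -r "$old_root/minerva" "$new_root/"
fi
ls "$new_root/minerva"
```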
2. Confirm that Files Manager is enabled on Prism Central and is on version 2.0.2 or above. The below commands need to be run from Prism Central. i.) Check if the Files Manager service is running:
nutanix@PCVM $ genesis status | grep files_manager
ii.) Check if the Files Manager Container is Up
nutanix@PCVM $ docker ps
iii.) If Files Manager is not enabled, use the following command to enable it
nutanix@PCVM $ files_manager_cli enable_service
3. Confirm that Files Manager is on version 2.0.2 or above using the following command on Prism Central; if it isn't, upgrade Files Manager to the latest available version using LCM.
nutanix@PCVM $ files_manager_cli get_version
4. Confirm whether any firewall rule exists on the network and verify that port 9440 is open between Prism Central and the File Server. If not, open the port as per Files Port Requirements https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Files&a=0232ca10aca90d5021461b7c62377c308f09386862bd6cded5e310c54408a057f04599651f4ddaed
5. Check the ~/data/logs/files_manager_service.out on PC for the following messages.
I0718 15:40:04.450959Z 21 iam_client.go:277] Dns error: Post "https://iam-proxy.ntnx-base:8445/api/iam/authz/v1/authorize": dial tcp: lookup iam-proxy.ntnx-base on XXX.XXX.XXX.129:53: no such host. Resetting glibc cache.
If the above messages are in the log, check for the issue described in KB 15752, where the "/etc/resolv.conf" file is incomplete in the docker container. Restart the "files_manager_service" on the PC to resolve the issue:
nutanix@PCVM:~$ allssh genesis stop files_manager_service
|
KB12184 | Python Upgrade on Hyper-V hosts | The article is for upgrading Python Version from 2.7.x to 3.9.7 in Hyper-V Clusters | Caveats
Satadom replacement, re-imaging, re-installation of a node, and Hyper-V upgrade will revert the Python version to 2.7.x, so Python needs to be upgraded again to 3.9.7 using the given procedure. After manually upgrading Python, LCM will show the default Python version as available. LCM allows users to downgrade Python from 3.9.7 to the old version; users should not attempt an LCM upgrade to a lower Python version in any case. Contact Nutanix Support if you have any questions. After expanding a cluster, the Python version needs to be upgraded on the new node using the given procedure. | Perform the below steps on all Hyper-V hosts in the cluster.
Uninstall Python 2.7.x
Go to Control Panel > Programs > Uninstall a Program. Go to My Computer > Properties > Advanced System Settings > Environment Variables > System variables > remove the Path entry that contains Python 2.7.x. Reboot the host to complete the uninstallation process.
Install Python 3.9.7
Download the python-3.9.7-amd64.exe https://www.python.org/ftp/python/3.9.7/python-3.9.7-amd64.exe file from python.org. Copy the downloaded binary file to a disk location (like C:\Temp) on the Hyper-V host and, from the context menu of the file, select 'Run as administrator'. Select 'Customize installation' and tick “Add Python 3.9 to path” and “Install launcher for all users”. Click Next. Tick “Install for all users” and “Add Python to Env Variables” and use the 'Install' button. After successful installation, restart the Nutanix Host Agent service from "services.msc" or reboot the host.
Repeat the steps for all Hyper-V hosts in the cluster. |
KB7245 | Safely adding Low-compute Node to existing Hyper-V cluster | This KB article describes an issue where the addition of storage-only/low-compute nodes fails in an existing Hyper-V cluster. | This KB article describes an issue where the addition of storage-only/low-compute nodes fails in an existing Hyper-V cluster.
You might see the following error: “Could not get hostname for ip could not connect to the NutanixHostAgent: No route to host”
Foundation logs show :
2018-11-08 12:47:28 ERROR expand_cluster.py:1868 Failed at failover-cluster-join step for node: x.x.x.x. Error: Could not get hostname for y.y.y.y: Could not connect to the NutanixHostAgent: [Errno 113] No route to host
Cluster expansion assumes all nodes are of the same type; it does not account for storage-only nodes and does not skip the NutanixHostAgent check for AHV storage nodes. | Check the Foundation version on the cluster:
nutanix@CVM$ cat foundation/foundation_version
If you are running a Foundation version older than 4.1, upgrade Foundation to the latest. You can add the nodes using one of the following methods.
Method 1
1. SSH to the CVM of the new Node you are trying to add
2. View the hardware configuration file:
nutanix@CVM$ cat /etc/nutanix/hardware_config.json
3. Update the field "minimal_compute_node" in hardware_attributes, setting it to true
4. Try the Node addition again from Prism UI. Refer to Expanding cluster https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_0:wc-cluster-expand-wc-t.html user guide for details.
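Editing hardware_config.json by hand is error-prone, so the flag flip in step 3 can be sketched with sed. The example below runs against a throwaway sample file; on a real CVM, take a backup and edit /etc/nutanix/hardware_config.json with sudo, and note the sed pattern assumes the flag is currently spelled exactly "minimal_compute_node": false:

```shell
# Sketch: flip minimal_compute_node to true in a copy of the hardware config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "hardware_attributes": {
    "minimal_compute_node": false,
    "robo_mixed_hypervisor": true
  }
}
EOF

# Flip the flag; writing to a new file and moving it keeps the edit atomic.
sed 's/"minimal_compute_node": false/"minimal_compute_node": true/' "$cfg" > "$cfg.new" \
  && mv "$cfg.new" "$cfg"
grep minimal_compute_node "$cfg"
```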
Method 2
1. While adding storage-only nodes, they must be pre-imaged using the Foundation option that marks them as storage-only. 2. This sets a flag on the node itself that identifies it as a storage-only node. 3. Once the node is pre-imaged, proceed to add it. Refer to Expanding cluster https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_0:wc-cluster-expand-wc-t.html user guide for details. |
KB3754 | How to Cancel, Throttle, Pause replication | This article describes how to cancel, throttle, and pause replication | Note: Nutanix DRaaS is formerly known as Xi Leap. Nutanix Disaster Recovery is formerly known as Leap. You might want to cancel an existing protection domain replication in certain scenarios. For example, if multiple protection domains are running the initial synchronization, you might want to cancel one or more replications to preserve bandwidth. This article describes how to cancel individual protection domain replications as needed. | Cancel an existing protection domain replication
1. View the existing replication details.
nutanix@cvm$ ncli pd ls-repl-status
ID : 754
2. View the existing LEAP/Xi Leap On-going replication details.
nutanix@cvm$ ncli pd ls-repl-status protection-domain-type=entity-centric
Sample output:
Id : 56447579
3. Cancel the replication by using the following command.
nutanix@cvm$ ncli pd abort-repl name=<Protection_Domain_name> replication-ids=<replication_ID>
Replace Protection_Domain_name with the protect domain name and replication_ID with ID of the replication you want to cancel.
Pause the replication
1. Find the name of the Protection Domain and the Replication ID using the examples above. For Protection Domain replication use:
nutanix@cvm$ ncli pd ls-repl-status
For Leap replication:
nutanix@cvm$ ncli pd ls-repl-status protection-domain-type=entity-centric
2. Proceed to pause the replication. Note: Local replication to the target is paused and all scheduled replications are queued. Once it is resumed, the paused replications and any queued replications resume.
nutanix@cvm$ ncli pd pause-repl name=<protection_domain_name> replication-ids=<replication_ID>
Resume the replication
nutanix@cvm$ ncli pd resume-repl name=<protection_domain_name> replication-ids=<replication_ID>
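The replication IDs needed by the abort, pause, and resume commands can be pulled out of the ls-repl-status output with standard text tools. A sketch against fabricated sample output (PD-NAME is a placeholder; substitute the real protection domain name for each ID):

```shell
# Sketch: pull replication IDs out of captured "ncli pd ls-repl-status" output
# and print the abort commands for review. The sample output is fabricated and
# PD-NAME is a placeholder for the real protection domain name of each ID.
status=$(cat <<'EOF'
    ID                        : 754
    Protection Domain         : PD-1
    ID                        : 981
    Protection Domain         : PD-2
EOF
)

repl_ids=$(echo "$status" | awk -F': ' '/^ *ID +:/{print $2}')
for id in $repl_ids; do
  echo "ncli pd abort-repl name=PD-NAME replication-ids=$id"
done
```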
|
KB14098 | AWS CloudConnect backups fail due to wrong time in cloud CVM | AWS CloudConnect backups fail due to wrong time in cloud CVM | Customers using AWS CloudConnect backups may notice the replications could be stalled. Checking stargate.out on the cloud CVM you may see multiple errors like below reporting no disk space is available,
W1217 03:23:53.992792 17546 vdisk_micro_cerebro_extent_writer_op.cc:1685] vdisk_id=38533440556 operation_id=88055 Attempt to assign an egroup to targets failed with error kDiskSpaceUnavailable
Checking janus.out on the cloud CVM shows "Test Writes" are failing to S3 with 403 Forbidden error,
2022-12-17 03:28:05 INFO aws_server.py:1839 Trying test write on the bucket ntnx-4436663770768614374-5938858949289201630-7
storage pool listing shows there is enough free space in the storage pool
nutanix@NTNX-xx-xx-xx-xx-A-CVM:~/data/logs$ ncli sp ls
Checking the time on the CVM, it may be in the future or past and not syncing with any NTP server. | S3 uses the HTTP protocol, which requires the time difference between the communicating parties to be within 5 minutes at most. Due to the wrong time, the Cloud CVM is not able to write to the S3 bucket, causing the disk to be marked as offline. With no disks available in the CLOUD tier, Stargate complains of kDiskSpaceUnavailable.
To resolve the issue, follow KB 6681 https://nutanix.my.salesforce.com/kA00e000000LM0l?srPos=2&srKp=ka0&lang=en_US to fix the time on the Cloud CVM. The Janus service will automatically transition the cloud disk to online status once the time is corrected. |
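The 5-minute rule described above can be sketched as a simple skew check between two epoch timestamps (the values below are fixed for illustration):

```shell
# Sketch of the 5-minute rule: compare two epoch timestamps and flag skew
# beyond 300 seconds. The timestamps below are fixed for illustration.
within_skew() {   # $1 = local epoch seconds, $2 = reference epoch seconds
  diff=$(( $1 - $2 ))
  [ "$diff" -lt 0 ] && diff=$(( -diff ))
  [ "$diff" -le 300 ]
}

within_skew 1671245000 1671245120 && echo "OK: skew within 5 minutes"
within_skew 1671245000 1671246000 || echo "FAIL: skew exceeds 5 minutes - S3 will reject requests"
```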
KB15295 | Nutanix Kubernetes Engine development deployment failed due to the etcd VM unable to obtain a IP address. | NKE supports development deployment and production deployment. Production deployment requires IPAM while development deployment can use either IPAM or non-IPAM network. If a development deployment uses a non-IPAM network and the network does not have a DHCP server, the deployment will fail since the newly created etcd VM cannot obtain an IP address. | When using development deployment for Nutanix Kubernetes Engine (NKE) deployment, the deployment may fail with the error message:
"Failed to create node: internal error: Operation timed out"
From the Prism Central (PC) VM, /home/nutanix/data/logs/karbon_core.out shows "Failed to retrieve etcd cluster" and "Failed to create node pool for etcd: invalid argument: internal error: failed to create node: internal error: Operation timed out"
2023-06-30T16:45:12.948Z etcd_lib_deploy_task.go:138: [DEBUG] [k8s_cluster=devkub] Failed to retrieve etcd cluster data from IDF, for cleanup deletion: failed to get ETCD cluster: etcd node tag is not set when reading from IDF
When retrying the deployment, the Karbon etcd VM is created on the cluster, but no IP address is assigned. Later, the development deployment fails again due to the etcd VM being unable to obtain an IP address. If a development deployment uses a non-IPAM network and the network does not have a DHCP server, the deployment will fail since the newly created etcd VM cannot obtain an IP address. | Change the Kubernetes Node Network to use an IPAM network or a non-IPAM network with a running DHCP server. |
KB5199 | NCC Health Check: dataservice_connectivity_check | NCC 3.6.0. The NCC health check dataservice_connectivity_check verifies if a Calm-enabled Prism Central VM can talk to Data services ports on the Prism Element cluster where Prism Central is running. | The NCC health check dataservice_connectivity_check verifies if a Calm-enabled Prism Central VM can talk to Dataservices ports on the Prism Element cluster where Prism Central is running.
This check is only applicable if Calm has been enabled on the Prism Central VM. Nutanix Self-Service (NSS) is formerly known as Calm.
This check is included from NCC 3.6.0 version.
Running the NCC check
This check is part of the NCC checks run by using:
nutanix@pcvm$ ncc health_checks run_all
You can also run it individually as:
nutanix@pcvm$ ncc health_checks system_checks dataservice_connectivity_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 15 minutes, by default.
This check will generate an alert after 1 failure.
Sample Output
For Status: FAIL
Detailed information for dataservice_connectivity_check:
Output messaging
[
{
"Description": "Data service IP is not reachable"
},
{
"Causes of failure": "Invalid or empty data service ip"
},
{
"Resolutions": "Please provide correct data service ip."
},
{
"Impact": "Nucalm containers will be inaccessible"
},
{
"Alert ID": "A400105"
},
{
"Alert Title": "Dataservice IP is unreachable"
},
{
"Alert Message": "Data service ip '{data_service_ip}' is unreachable from prism central {node_name}"
}
] | Calm utilizes the volume group (Nutanix Volumes) on Nutanix clusters. This volume group is used by Calm to store the logs from application launches. Prism Central VM uses iscsi to connect and mount the volume group.
When Calm is enabled in the Prism Central VM, a new volume group is created on the Prism Element cluster where the Prism Central VM is running by the Nutanix volume plugin running in Prism Central VM.
For iscsi connectivity to work between Prism Central VM and the PE cluster hosting the volume group, the following firewall requirements should be met.
Port 3260 needs to be open inbound on the Data Services IP address of the Prism Element cluster where the Prism Central VM is running. Port 3205 needs to be open inbound on all the CVMs (Controller VMs) in the Prism Element cluster where the Prism Central VM is running.
This check fails if the requirements mentioned above are not met and also results in the "Enable Calm" page getting stuck at Enabling state.
Below events are seen in genesis.out of PC VM:
2020-02-26 23:41:27 INFO node_manager.py:3533 Service EpsilonService is not running.
If you are running scale-out PC, genesis leader would contain the above logs. Run the below command in any PC VM to determine the genesis leader:
nutanix@PCVM$ allssh ntpq -pn
Check which CVM is syncing time to an external NTP server - this one is the genesis leader. All other CVMs will be syncing time from the genesis leader. In /home/nutanix/data/logs/epsilon.out you will see the following errors when trying to start the Epsilon service:
I0226 23:53:57.334721 19107 containersvc.go:867] Created container f1cacbf5961b0c4d6ed56a851c7da31e5f71f08df198b158aa1502cc08714050
To verify if the firewall ports are open between Prism Central VM and the Prism Element cluster where the Prism Central VM is running, follow the steps below.
Note: In the instructions below, the Prism Element cluster refers to the AHV or ESXi cluster that is hosting the Prism Element VM.
Identify the Prism Element cluster where Prism Central VM is running.
In the Prism Central VM, go to Explore -> VMs. Search for the Prism Central VM name, the Cluster column will show which Prism Element cluster hosts the Prism Central VM.
Now, click the cluster name hyperlink in the Cluster column and Select the Launch Prism Element button on the top right-hand corner.
Once the Prism Element UI loads, navigate to the Hardware page and choose the Table view to see all the CVM IP addresses that are part of this cluster. Take a note of the CVM IP addresses.
Now, click on the name of the cluster on the top left corner and take a note of the iSCSI Data Service IP.
From the Prism Central VM cli, run the following command to test connectivity on the data services IP address obtained from step 3.
nutanix@PCVM$ echo $'\cc' | nc -tv <data_services_ip> 3260
From the Prism Central VM cli, for each of the IP addresses identified in step 2 above, run the following command to test connectivity.
nutanix@PCVM$ echo $'\cc' | nc -tv <cvm_ip_address> 3205
If the connection is established successfully, the output will look similar to the below output.
nutanix@PCVM$ echo $'\cc' | nc -tv xx.xx.xx.xx 3205
If a firewall is blocking the connection between Prism Central VM and Prism Element CVMs, you will see a "Connection timed out" error like below.
nutanix@PCVM$ echo $'\cc' | nc -tv xx.xx.xx.xx 3260
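When nc is unavailable, bash's built-in /dev/tcp pseudo-device can serve the same purpose. A sketch (assumes bash and the coreutils timeout command; the looped ports are the Volumes ports from the steps above, probed here against localhost purely for illustration):

```shell
# Sketch: probe a TCP port with bash's /dev/tcp (no nc required); the function
# returns 0 when the port accepts a connection.
port_open() {   # $1 = host, $2 = port
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

for port in 3205 3260; do
  if port_open "127.0.0.1" "$port"; then
    echo "port $port reachable"
  else
    echo "port $port blocked or closed"
  fi
done
```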
Confirm that the PC firewall is allowing iSCSI traffic over port 3260:
nutanix@PCVM$ sudo iptables --list |grep iscsi
iSCSI discovery would fail to target with connection time outs as below:
nutanix@PCVM$ sudo iscsiadm -m discovery -t sendtargets -p 1x.1x.x.x6:3260
If the connectivity fails, open the firewall ports that are blocked between the Prism Central VM and the Prism Element cluster CVM IP addresses. Enable/Allow the port communication between PC VM and PE through port 3260 on Data Services IP to resolve the issue.
If you require further assistance, consider engaging Nutanix Support at https://portal.nutanix.com.
For further details about Volumes, refer to the documentation in this link: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Volumes-Guide:vol-volumes-requirements-r.html NOTE: In NCC 4.1.0, you may experience the following error on Prism Central if no Prism Element is registered. Nutanix Engineering is aware of the issue and will provide a fix in a future release.
Detailed information for dataservice_connectivity_check: |
KB13754 | Upgrade and cluster expansion pre-checks: test_license_enforcement_check, test1_1_license_enforcement_check | The pre-upgrade check test_license_enforcement_check checks the cluster license. | The upgrade pre-check test_license_enforcement_check and the cluster expansion pre-check test1_1_license_enforcement_check validate that a valid license is installed on the cluster. In case of failure, you can see one of the following errors:
Upgrades and Security patches are blocked due to expired licenses on cluster | Install a valid license on the cluster to proceed with cluster expansion or upgrade. Refer to the License Manager Guide https://portal.nutanix.com/page/documents/details?targetId=License-Manager:License-Manager for more information. |
KB5377 | NCC Health Check: pcvm_same_mem_level_check | The NCC health check pcvm_same_mem_level_check verifies if all Prism Central (PC) VMs have the same memory level. | The NCC health check pcvm_same_mem_level_check verifies if all Prism Central (PC) VMs have the same memory level.
This check returns a PASS status if all PC VMs have the same memory level. Otherwise, if there is any PC VM with a different memory level, it returns a WARN status.
Note: This NCC check on Prism Central was introduced in NCC version 3.5.1.
Running the NCC check
This check can be run as part of the complete NCC health checks:
nutanix@pcvm$ ncc health_checks run_all
Or individually as:
nutanix@pcvm$ ncc health_checks system_checks pcvm_same_mem_level_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run once every day, by default.
Sample output
For Status: WARN (NCC 4.6.0 and later)
Running : health_checks system_checks pcvm_same_mem_level_check
For Status: WARN (NCC prior to 4.6.0)
Running : health_checks system_checks pcvm_same_mem_level_check
Output messaging
[
{
"Description": "Check all Prism Central VMs have the same memory level."
},
{
"Causes of failure": "Memory configuration among Prism Central VMs is inconsistent."
},
{
"Resolutions": "Fix Prism Central VM memory configuration by providing the same amount of memory for all the Prism Central VMs."
},
{
"Impact": "The Prism Central VM will not perform at the level necessary to manage the cluster."
},
{
"Alert ID": "A200306"
},
{
"Alert Title": "Memory configuration inconsistent."
},
{
"Alert Message": "The Prism Central VMs are not configured to have the same amount of memory."
}
] | If the NCC check pcvm_same_mem_level_check returns a WARN status, the check indicates the PC VM that is running a memory level different from the other PC VMs in the cluster.
To resolve this issue, upgrade the memory of the PC VM to match the minimum memory requirement.

Note: If a minor MemTotal discrepancy is observed in /proc/meminfo output and the NCC pcvm_same_mem_level_check returns a WARN status (see example below), a power cycle of the PCVM should bring it back in line with the others and allow the check to PASS.
nutanix@PCVM:~$ allssh "cat /proc/meminfo | grep MemTotal | awk '{print \$2}'"
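As an illustrative sketch of the comparison (the values below are hardcoded stand-ins for the per-PCVM output, not real data):

```shell
# Hypothetical sample MemTotal values (kB), standing in for the allssh output above.
values="16265976 16265976 16265848"

min=$(echo "$values" | tr ' ' '\n' | sort -n | head -1)
max=$(echo "$values" | tr ' ' '\n' | sort -n | tail -1)
diff=$((max - min))

# Identical values on all PCVMs means the check should PASS; a small delta like
# this one is the minor discrepancy that a PCVM power cycle typically clears.
if [ "$diff" -eq 0 ]; then
  echo "consistent"
else
  echo "inconsistent by ${diff} kB"
fi
```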
To power cycle the Prism Central VM, refer to Prism Central Guide: Shutting Down or Starting Up Prism Central VM https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-pc-shutdown-pcvm-t.html. |
KB14042 | Manual upgrade of LCM framework using deploy_framework.py script | This KB provides steps to manually deploy LCM framework using the deploy_framework.py script. | There will be conditions where customers cannot use LCM in connected site and also cannot use Dark site LCM upgrade methods of Web Server and LCM Upload.
If that is the case, then you can use the manual LCM framework upgrade method below, which relies on the deploy_framework.py script. The first step is always to redirect customers to the LCM dark site methods (if they are not a connected site). Refer to the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_7:Life-Cycle-Manager-Dark-Site-Guide-v2_7.
IMPORTANT NOTES
The deploy_framework.py script should only be used under the guidance of Nutanix Support. Ensure the NCC checks are all clean and the cluster is in a healthy condition. Manual upgrades should be restricted to particular scenarios; using a manual upgrade for every stuck-inventory scenario is not appropriate. Never downgrade the LCM framework version using this script, as Nutanix does not support downgrades and it is not a tested workflow to be shared with customers. Cluster Modification Tracker: upon applying the script, the changes must be documented as a private case comment by selecting the “Cluster Modification” template as directed below:
Navigate to the Case -> New Case Comment -> Template -> Cluster modification -> fill the template -> post as private comment.
Cluster modification tracker | This KB should only be considered if we have exhausted all other troubleshooting or need to upgrade LCM to a version that supports Direct Upload. Please consult a Sr. SRE Specialist or LCM SME before moving forward with the below workflow.The following script deploys LCM out of band and can be used to manually deploy LCM given a bundle URL or paths to the image and module .tgz files in either Prism Element cluster (CVM) or Prism Central cluster (PCVM).
Requirements and Steps:
Download the script to the local CVM or PCVM using the link below. If this is a dark site and the CVMs/PCVMs do not have access to the Nutanix portal, download it on a system that does and upload it using any method, for example, FTP.
nutanix@CVM$ wget -O deploy_framework.py https://download.nutanix.com/kbattachments/14042/deploy_framework_v3.py
You can use the script in two different methods:
Please note:
From LCM-2.7 onwards, the URL changed to https://download.nutanix.com/lcm/3.0. Hence, in all the examples below, if you are running LCM-2.7 or later, replace https://download.nutanix.com/lcm/2.0 with https://download.nutanix.com/lcm/3.0.
Please choose any one of the two methods
Deploy LCM on all CVMs/PCVMs using portal URL and restart genesis.
LCM >= 3.0:
Deploy LCM only on the local CVM using bundle from portal URL
LCM >= 3.0:
Deploy LCM on CVM of specific IPs, and don't restart genesis.
LCM >= 3.0:
Deploy LCM on all CVMs/PCVMs using local files
To get the image path and module path:
If the customer's CVM or PCVM does not have access to the Nutanix Portal, they should manually download the files on a system that has access to the portal and then transfer them to the CVMs/PCVMs; skip step 2a.
2a) If the CVM/PCVM can access Nutanix portal follow the below steps and skip part 2b)
Change the directory to ~/tmp and get the lcm manifest:
nutanix@CVM$ cd ~/tmp; wget -qO - http://download.nutanix.com/lcm/2.0/master_manifest.tgz | tar xvzf -
Download the nutanix-lcm-lcm* packages:
nutanix@CVM:~/tmp$ for i in $(grep -oP '(nutanix-lcm-lcm.*.tgz)' master_manifest.json);do wget http://download.nutanix.com/lcm/2.0/modules/$i;done
Please note there are two files:

Image Path = nutanix-lcm-lcm_<lcm version>-<256hash>.tgz
Module Path = nutanix-lcm-lcm_update-<256hash>.tgz
nutanix@CVM:~/tmp$ ls -la | grep nutanix-lcm
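As a small sketch of telling the two bundles apart by filename pattern (sample names with shortened hashes, purely illustrative):

```shell
# Sample downloaded filenames (hashes shortened for illustration only).
files="nutanix-lcm-lcm_2.6.1.38012-d293e0fb.tgz
nutanix-lcm-lcm_update-aabbccdd.tgz"

# The image bundle has a version number right after "lcm_";
# the module bundle has "lcm_update-" instead.
image_path=$(echo "$files" | grep 'nutanix-lcm-lcm_[0-9]')
module_path=$(echo "$files" | grep 'nutanix-lcm-lcm_update-')

echo "Image path:  $image_path"
echo "Module path: $module_path"
```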
2b) If the customer is a dark site and does not have direct access to the Nutanix portal, ask the customer to download the below files and upload them using FTP to the Nutanix CVM/PCVM.
Download the master_manifest file and extract it
http://download.nutanix.com/lcm/2.0/master_manifest.tgz for LCM-2.6.2 or before.
http://download.nutanix.com/lcm/3.0/master_manifest.tgz for LCM-2.7 or later.
After extracting it, you get master_manifest.json. Open it using any editor (for example, Visual Studio Code), search for nutanix-lcm-lcm, and use both URLs to download the files. Example from master_manifest.json:
"modules": [
Download both files from the browser or any other method using the URLs from the master_manifest.json
http://download.nutanix.com/lcm/2.0/modules/nutanix-lcm-lcm_2.6.1.38012-d293e0fb078ddc7e8b49082aea317094579d551b9b046b7ffe376eb1a6e2ee5e.tgz
Please use http://download.nutanix.com/lcm/3.0 if you are using LCM-2.7 or later. Upload both LCM files to CVM/PCVM under ~/tmp using FTP or any other method.
3. Deployment
Run the deploy_framework.py script using the above path.
LCM >= 3.0:
Please note: Do not use a tilde (~/) when specifying the path for the image and module files. Use /home/nutanix instead (e.g. /home/nutanix/tmp/lcm_<x>.tgz), or, if running the script from ~/tmp, point to the local file with a relative path (e.g. ./lcm_<x>.tgz).
In case we use tilde ~/ in the path, we may encounter an error:
"Error checking staged files Could not find lcm image file: ~/tmp/nutanix-lcm-lcm_2.5.0.2.32663-e8cc034ee09f16a16f1ffe8f13251062f049b888b70bd74de1a21d6d38508ef8.tgz"
Please remove the following files in the /home/nutanix/tmp directory after finishing deploying LCM using this deploy_framework.py script.
/home/nutanix/tmp/master_manifest.tgz
/home/nutanix/tmp/master_manifest.json
The image file: /home/nutanix/tmp/nutanix-lcm-lcm_<lcm version>-<256hash>.tgz
The module file: /home/nutanix/tmp/nutanix-lcm-lcm_update-<256hash>.tgz
The deploy framework Python script file: /home/nutanix/deploy_framework.py
Note: For LCM versions earlier than 2.4.5 upgraded using the deploy_framework.py script, the LCM Direct Upload functionality may fail with the error "Could not read V2 public-key from Zookeeper".
Perform another LCM inventory operation before attempting to use the Direct Upload operation. Refer to KB 000015420 for more details. This issue is fixed with deploy_framework_v2.py |
KB16354 | NKE - Elastic Search disk full causing Kibana to be in CrashLoopBackOff. | Elasticsearch hitting 96% usage on the PVC will cause all indices to be put in a read-only state, which causes Kibana to go into CrashLoopBackOff. One possibility is that the Elasticsearch curator cron job is in a suspended state, so the curator does not run and clean up old data. | Users may notice the kibana-logging container is in CrashLoopBackOff state. Checking the logs for the Kibana container, you may notice the following (lines have been split for readability):
[root@karbon-xxx-yyyy-k8s-master-0 nutanix]# kubectl logs -n ntnx-system -l=k8s-app=kibana-logging | grep error
Running df -h inside the Elasticsearch container shows the /usr/share/elasticsearch/data mount point at 96% usage:
[root@karbon-xxx-yyyy-master-0 nutanix]# kubectl exec -it -n ntnx-system elasticsearch-logging-0 -c elasticsearch-logging bash
Checking the ElasticSearch logs you may notice warnings about marking all indices read-only.
[root@karbon-xxx-yyyy-k8s-master-0 nutanix]# kubectl logs -n ntnx-system -l=k8s-app=elasticsearch-logging | grep exceeded
Check whether the elasticsearch-curator-cron job is in a suspended state:
[root@karbon-xxx-yyyy-k8s-master-0 nutanix]# kubectl get cronjobs -n ntnx-system
Checking the last run of elasticsearch-curator-cron, you will notice the AGE shows it ran a long time ago, not recently:
[root@karbon-xxxx-yyyy-master-0 nutanix]# kubectl get jobs -n ntnx-system
| To reduce the space usage inside the Elasticsearch pod, you will need to unsuspend the cron job so that a cleanup job runs. To unsuspend the schedule, run the below command:
[root@karbon-xxxx=yyyy-master-0 nutanix]# kubectl patch cronjobs elasticsearch-curator-cron -p '{"spec" : {"suspend" : false }}' -n ntnx-system
Verify the SUSPEND column says False
[root@karbon-xxxx-yyyy-master-0 nutanix]# kubectl get cronjobs -n ntnx-system
This should automatically trigger a new job that will reduce the Elasticsearch usage. Once the Elasticsearch usage goes down, delete the kibana-logging pod to restart it. |
KB17165 | Files analytics GUI became inaccessible due to the VG unmounting. | Files analytics GUI may become inaccessible because the FA VG was unmounted due to an incorrect PCI address. | After rebooting or powering cycling the File Analytics VM (FAVM), the Volume group (VG) might unmount from the VM, causing GUI inaccessibility as docker services run from the VG. This can be seen in KB 9962 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000brXQCAY; please review the description section of that KB to see if the outputs match before proceeding. ESXi uses iscsi to mount VGs to File Analytics. For AHV, we leverage direct-attached VGs to the FAVM. The mount script (mount_volume_group.sh) is called during docker init and determines what type of VG will be used. In the case of AHV, it grabs the PCI address and compares it to the disk path to mount the VG.In rare situations, the PCI address in the config file may increment while the disk path still has the original address. This will cause the VG to fail in mounting as the disk path looks for a PCI address different from the one in the config file. Here are the steps to identify if the PCI address has changed:
Locate the PCI address in the config file: [nutanix@FAVM ~]$ cat /opt/nutanix/volume_group_management/config/vg_pci_slot
pci-0000:00:04.0-scsi-0:0:1:0
Locate the PCI address on the disk itself: [nutanix@FAVM ~]$ ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx. 1 root root 9 Jun 26 20:28 pci-0000:00:01.1-ata-1.0 -> ../../sr0
lrwxrwxrwx. 1 root root 9 Jun 26 20:28 pci-0000:00:03.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Jun 26 20:28 pci-0000:00:03.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 9 Jun 26 20:28 pci-0000:00:03.0-scsi-0:0:1:0 -> ../../sdb <---
In the above example, the PCI address in the config file is 04.0, whereas the actual disk PCI address in step 2 is 03.0. So, in this scenario, the PCI address has incremented by 1. Note: sdb will always be the device for the VG.
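The check in steps 1 and 2 boils down to a string comparison; a sketch using the sample addresses from this example:

```shell
# Sample values from this KB's example. In practice they come from:
#   cat /opt/nutanix/volume_group_management/config/vg_pci_slot
#   ls -l /dev/disk/by-path/   (the entry pointing at ../../sdb)
config_path="pci-0000:00:04.0-scsi-0:0:1:0"
disk_path="pci-0000:00:03.0-scsi-0:0:1:0"

if [ "$config_path" = "$disk_path" ]; then
  echo "match - VG mount path is consistent"
else
  # Mismatch: this is the condition that prevents the VG from mounting.
  echo "mismatch - update vg_pci_slot to: $disk_path"
fi
```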
If the output from step 2 differs from what the config file in step 1 has, then this KB is a match. | We can follow the below steps to manually edit the /opt/nutanix/volume_group_management/config/vg_pci_slot file to change the PCI address so that it matches.
Edit the /opt/nutanix/volume_group_management/config/vg_pci_slot file using Vim editor to update the PCI address to match the disk:[nutanix@FAVM ~]$ vi /opt/nutanix/volume_group_management/config/vg_pci_slot
Check the file to make sure your changes were saved:[nutanix@FAVM ~]$ cat /opt/nutanix/volume_group_management/config/vg_pci_slot
pci-0000:00:03.0-scsi-0:0:1:0
Restart the Docker daemon:[nutanix@FAVM ~]$ sudo systemctl restart docker
Confirm the disk was re-mounted:[nutanix@FAVM ~]$ df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 4.0T 19G 3.8T 1% /mnt
Confirm the file analytics dockers containers have successfully restarted:[nutanix@FAVM ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
96c43249340e analytics_gateway:3.3.0.2 "/opt/nutanix/analyt…" 6 months ago Up 6 seconds Analytics_Gateway
e8efc55fb98f analytics_es:3.3.0.2 "/opt/nutanix/analyt…" 6 months ago Up 6 seconds (health: starting) Analytics_ES1
2520fc0cadd2 analytics_kafka:3.3.0.2 "/opt/nutanix/analyt…" 6 months ago Up 6 seconds (health: starting)
Reboot the FA VM to ensure the VG mounts on boot. Once the FA VM is up and the VG mounted, confirm that GUI access has been restored. |
KB16242 | Prism Central UI - slow user action responses and unavailability | This KB article describes a known issue affecting Prism Central versions PC.2023.3, PC.2023.3.0.1, and PC.2023.4 where users may experience slow responses from actions taken in the PC GUI, eventually leading to complete unavailability of the PC GUI. | There is a known issue affecting Prism Central versions PC.2023.3, PC.2023.3.0.1, and PC.2023.4 where users may experience slow responses from actions taken in the PC GUI, eventually leading to complete unavailability of the PC GUI. The symptoms include:
Prism Central GUI may be slow when responding to user actions.
PC GUI may be completely inaccessible.
The Prism Central GUI login screen may not load, in which case it will show a blank white screen.
ncli commands may fail with the following error:
nutanix@PCVM:~$ ncli
nuclei commands will fail with an error similar to the following,
nutanix@PCVM:~$ nuclei user.list
A very high number of TCP connections stuck in CLOSE_WAIT state inside the "iam-user-authn" pod. Note: the following command may be very slow to return output.
nutanix@PCVM:~$ for POD in $(sudo kubectl -n ntnx-base get pods --field-selector=status.phase=Running --selector='app in (iam-user-authn)' -o jsonpath='{.items[*].metadata.name}'); do echo ${POD}; sudo kubectl -n ntnx-base exec -it ${POD} -- /bin/sh -c 'netstat -a |grep -i CLOSE_WAIT| wc -l'; done
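What that loop counts can be illustrated on a few sample netstat lines (fabricated for the example, not real PC output):

```shell
# Fabricated netstat sample: two sockets stuck in CLOSE_WAIT, one healthy.
sample="tcp 0 0 127.0.0.1:5556 127.0.0.1:41000 CLOSE_WAIT
tcp 0 0 127.0.0.1:5556 127.0.0.1:41002 ESTABLISHED
tcp 0 0 127.0.0.1:5556 127.0.0.1:41004 CLOSE_WAIT"

# Same filter as the command above: case-insensitive count of CLOSE_WAIT lines.
count=$(echo "$sample" | grep -ci CLOSE_WAIT)
echo "CLOSE_WAIT sockets: $count"
```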
FATALs of various services may be observed at the time the PC experiences this issue, as shown in this sample:
-rw-r-----. 1 nutanix nutanix 4479 Dec 12 11:36 go_ergon.FATAL
Restarting the PC VM or restarting "iam-user-authn" pod will always resolve the problem, but only for a couple of days, then it comes back again. | Nutanix Engineering is aware of the issue, and a fix has been integrated into PC.2023.4.0.2 and PC.2024.1. This problem is tracked in ENG-620967 https://jira.nutanix.com/browse/ENG-620967.The issue is caused by a monitoring script inside the "iam-user-authn" pod not waiting enough time for the connections to close gracefully.
Find the "iam-user-authn" pod,
nutanix@PCVM:~$ sudo kubectl get pods -n ntnx-base
From the PCVM command line, run the following command for the iam-user-authn-XXXXXXXXXX-XXXXX pod found in the above output and verify that the "--connect-timeout 10 --max-time 10" entry is not present:
nutanix@PCVM:~$ sudo kubectl exec -it iam-user-authn-XXXXXXXXXX-XXXXX -n ntnx-base -- cat health/ping_healthz.sh
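For context, the two flags being added bound curl's connect phase and total request time. A generic illustration of the form, as a config fragment only (the endpoint and other flags are placeholders, not the actual ping_healthz.sh contents):

```shell
# Illustration only - endpoint and non-timeout flags are placeholders.
# --connect-timeout bounds the TCP/TLS connect phase (seconds);
# --max-time bounds the entire request, so a stalled probe exits instead of
# leaving its socket half-closed in CLOSE_WAIT.
curl -sk --connect-timeout 10 --max-time 10 https://localhost:8443/healthz
```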
Please refer to the following steps to resolve this issue:

Navigate to the following path on the PCVM:
nutanix@PCVM:~$ cd /home/docker/msp_controller/bootstrap/services/IAMv2/
Take a backup of the "iam-user-authn.yaml" file before making any changes. Update the "iam-user-authn.yaml" file and add "--connect-timeout 10 --max-time 10" as shown below:
---
Find CMSP (prism_central) UUID,
nutanix@PCVM:~$ mspctl cluster list
Re-deploy "iam-user-authn" service by running following commands using CMSP (prism_central) UUID, (Please note that PC UI will be inaccessible for few mins when these commands are run)
nutanix@PCVM:/home/docker/msp_controller/bootstrap/services/IAMv2$ mspctl application -u <UUID> delete iam-user-authn -f iam-user-authn.yaml
Verify that "iam-user-authn" is recreated and running,
nutanix@PCVM:~$ sudo kubectl get pods -A | grep authn
This change prevents the pod from piling up CLOSE_WAIT connections during the ongoing monitoring process. Monitor the PC VM behavior to confirm relief is provided after making this change. |
KB2028 | Integrating Nutanix with SolarWinds | SolarWinds is a common network monitoring tool. This guide explains the setup process for a Nutanix cluster to work with SolarWinds. | SolarWinds is a common network monitoring tool. This guide explains the setup process for a Nutanix cluster to work with SolarWinds. | API MethodIn this method, SolarWinds leverages the APIs provided by Nutanix. Follow the SolarWinds guide https://documentation.solarwinds.com/en/success_center/orionplatform/content/core-nutanix-hwh-setup.htm to set up Hardware Health monitoring for Nutanix clusters.
SNMP MethodStep 1 - Nutanix SNMP Configuration
Click the gear icon at the right top of the Prism page and click on SNMP:
Next, click on "Users" and then "+ New User":
Note: In 4.1.1, AES/SHA are the only options. Prior to this, there were also DES and MD5.
Select AES/SHA for the highest level of security. The keys can be of any value. Make sure to save them for use in SolarWinds. In this example, "12341234" was used for both.
Step 2 - SolarWinds configuration
Open the Orion Console and click "settings" in the top right. Choose "Discovery Central":
Next, choose "DISCOVER MY NETWORK":
Add the SNMPv3 credentials:
The Password/Key value is what you set in Prism. AES128 is the correct option. (There are also 192 and 256 but they will not work.) Make sure there are no special characters in the password.
Continue through the wizard until you reach the IP page. Fill out the correct IP range for your Controller VMs (CVMs) here:
Finish out the wizard and run the discovery. This will automatically bring you to another wizard to import the discovered modules into SolarWinds. In the end, your node list should look like this:
As you can see, CVM C is there twice because it owns the external IP. At this point, there is basic CVM reporting - CVM up/down, restarts and outages.
Then, configure a universal poller and assign them to the Nutanix nodes. |
KB11566 | Expand cluster pre-check - test_current_proto | Expand cluster pre-check - test_current_proto | Expand cluster pre-check test_current_proto checks the following conditions:Case 1. Verifies that the new node is not already part of the cluster configuration.Case 2. Checks if there is an ongoing node removal.In case of failure, you can see one of the following error messages:
New nodes cannot be added because node <IP> is being removed currently
Node with ip <IP> already exists in cluster configuration
| Case 1. You are trying to add a node that is already present in the cluster configuration. Sometimes, the error occurs if the node was not properly removed from the cluster. Make sure that you are adding the correct node to the cluster.Case 2. Wait until the ongoing node removal is completed and retry Expand cluster operation. |
KB15447 | Nutanix Files - Unable to view smart DR policies from Prism Central | Unable to view smart DR policies from Prism Central, error "Failed to fetch error: Request failed with status code 504 undefined" | Unable to view the smart DR replication policies. On Prism Central, we see the error "Failed to fetch error: Request failed with status code 504 undefined" as can be seen in the snippet below:
The Files Manager logs under ~/data/logs/files_manager_service.out will show the below signature:
I0907 08:58:34.660323Z 23 iam_client.go:277] Dns error: Post "https://iam-proxy.ntnx-base:8445/api/iam/authz/v1/authorize": dial tcp: lookup iam-proxy.ntnx-base on X.X.X.X:53: no such host. Resetting glibc cache.I0907 08:58:34.660340Z 23 iam_client.go:277] Dns error: Post "https://iam-proxy.ntnx-base:8445/api/iam/authz/v1/authorize": dial tcp: lookup iam-proxy.ntnx-base on X.X.X.X:53: no such host. Resetting glibc cache.
The DNS server will be reachable, but the nslookup may fail:
nutanix@PCVM:~$ nslookup iam-proxy.ntnx-base <DNS IP>
Server: x.x.x.xAddress: x.x.x.x#53** server can't find iam-proxy.ntnx-base: NXDOMAIN
This behavior was seen after updating Prism Central to pc.2022.6.0.7 and enabling microservices. | ENG-519308 https://jira.nutanix.com/browse/ENG-519308 is resolved in pc.2024.1, and ENG-530780 https://jira.nutanix.com/browse/ENG-530780 in pc.2024.1 + Files Manager 5.0
Please upgrade Prism Central and the Files Manager component to the specified version or newer.

Workaround
If upgrading is not an option, restart the Files Manager service on Prism Central:
nutanix@NTNX-A-PCVM:~$ allssh "files_manager_cli stop_service && files_manager_cli start_service" |
""Title"": ""Typically | the customer receives an alert indicating that a CVM rebooted. Looking at the host uptime we find that the host rebooted at the same time as when the CVM went down. Hypervisor logs do not indicate that this was a user initiated reboot.\n\n\t\t\tThe customer may also experience cluster wide latency spikes (upwards of +1 | 000 ms) and overall elevated latency that can last for hours. The problem disappears by itself or sometimes after the customer reboots their nodes one after the other. The Nutanix logs do not indicate a degraded node scenario | null | no degraded alert will be fired."" |
""Verify all the services in CVM (Controller VM) | start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""Sort disk space"" | null | null | null |
KB14915 | NVMe SSD disk is not mounted by CVM and list_disks command returns an error complaining “Layout file does not have any information for VMD” | Due to one of several potential issues, users of nodes with NVMe SSDs may observe that one or more of these disks is not able to be brought online by AOS due to a conflict in the mapping between physical VMD controllers and their logical vmd_index numbers in the hardware_config.json. As a result, Hades may report that unexpected VMD controllers are in-use or that NVMe disks are detected in the wrong root ports. In some cases, this issue is the result of a problem with the physical layout, but in other instances this can be seen even when the VMD controller passthrough, physical disks, and internal chassis cabling are all placed correctly. | Table of Contents
Potential Symptoms
Issue Identification
Checking if VMD Controller Assignment in CVM is Accurate
Step 1: Find BAR value from the hypervisor
Step 2: Find the mapping
Step 3: Get BAR2 of the found device
Step 4: Find offset_1 for this device
Step 5: Arithmetic calculation
Step 6: Validation
Potential Causes and Workarounds
Due to one of several potential issues, users of nodes with NVMe SSDs may observe that one or more of these disks is not able to be brought online by AOS due to a conflict in the mapping between physical VMD controllers and their logical vmd_index numbers in the hardware_config.json. As a result, Hades may report that unexpected VMD controllers are in-use or that NVMe disks are detected in the wrong root ports. In some cases, this issue is the result of a problem with the physical layout, but in other instances this can be seen even when the VMD controller passthrough, physical disks, and internal chassis cabling are all placed correctly.
Potential Symptoms
Expected NVMe disks are not shown in “list_disks” output and may be marked offline by Stargate. This can appear in the form of disk slots reporting as empty when they are in fact populated, or as disks showing up in the wrong slots. After an ESXi upgrade or ESXi re-image, the CVM enters a boot loop because it cannot find a valid boot partition (one variation of this issue is described in KB-13561 http://portal.nutanix.com/kb/13561).
Issue Identification
1. If your hypervisor is ESXi 7.0+ and the affected node supports the VMD feature, check KB-13561 http://portal.nutanix.com/kb/13561 to see if the "VMD Reset Quirk" was left over in the passthru.map file on the hypervisor. Apart from a CVM reboot loop, this misconfiguration can also cause NVMe disk mappings to get confused. The aforementioned KB explains how to recover from this, and you do not need to proceed with this article. If you are using a different hypervisor or have confirmed that the "VMD Reset Quirk" entry was already removed, proceed with the further identification steps in this article.

2. Following a disk replacement or some other type of maintenance, one or more NVMe disks are visible in “sudo nvme list” output but are not mounted in “df -h”. Example:
nutanix@CVM:xx.xx.xx.25:~$ sudo nvme list
Notice that /dev/nvme2n1 with s/n S438NA0NC09358 does not appear in the "df -h" output below.
nutanix@CVM:xx.xx.xx.25:~$ df -h
3. This article only applies to nodes using the Volume Management Device or “VMD” NVMe mode. Nodes using “Direct Attach” mode should not see these issues. Refer to KB-15461 http://portal.nutanix.com/kb/15461 for more information about NVMe modes. Confirm whether VMD was originally enabled on the affected node by seeing if the disks associated with this CVM have the attribute “self_managed_nvme: true” in their zeus_config_printer disk_list entries. If this flag is “false”, it means that the software and firmware in use did not support VMD when this node was first deployed. Foundation will automatically enable VMD during cluster creation if all the prerequisites are met. However, Nutanix currently does not support enablement of VMD for brownfield environments, such as those where the node was only upgraded to the minimum-required software/firmware after the date of initial imaging.

Here are the steps for confirming if VMD is supposed to be enabled on a given node:

a. Check to make sure that the cluster uses at least the minimum required versions of AOS, Foundation, BMC, and BIOS (refer to KB-15461 http://portal.nutanix.com/kb/15461 for a list of the minimum software versions and the steps for seeing if these were in place when the CVM was initially created).

b. Check the value of the self_managed_nvme parameter in Zeus.
nutanix@CVM:~$ panacea_cli show_zeus_config | less
c. Compare the results of steps “a” and “b” above.
If you have all the minimum software versions installed AND the self_managed_nvme parameter is set to true, then VMD is enabled on the node and should proceed with this article.If any of the listed software components use versions earlier than the required minimum, OR if the self_managed_nvme parameter is set to false, then VMD is not enabled on the cluster. The issues described in this article do not apply to nodes that only support Direct Attach NVMe mode. You are not encountering this issue and should continue your search elsewhere.
d. NX-Only: From KB-12360 http://portal.nutanix.com/kb/12360: Check if VMD is enabled in the BMC. If your node is an NX-8170-G7 and VMD is not enabled in the BMC, but should be based on the output of zeus_config_printer, refer to KB-12360 http://portal.nutanix.com/kb/12360 for the steps for enabling the feature. Once this is done, check to see if the CVM is now able to start and if the list_disks command output appears normal. Do not modify the below settings unless your node is an NX-8170-G7 AND Zeus says that VMD should be enabled.

AHV:
root~# ipmitool raw 0x30 0x70 0x90 0x00
ESXi:
root~# /ipmitool raw 0x30 0x70 0x90 0x00
Output explanation:
0 -> VMD is disabled
1 -> VMD is enabled
2 -> VMD is in default state (indicates "disabled" on NX-8170-G7)

4. Check the hades.out log on the affected CVM to see if a similar error to the one below appears there.
nutanix@CVM:xx.xx.xx.25:~$ less ~/data/logs/hades.out
5. Check the output of the list_disks command to see if any errors similar to those below appear. The message suggests that disks may be inserted in the wrong physical slots, but this is not necessarily the case. The issue may be a confused logical mapping of these drives to the rootports within specific VMD controllers, which is what this KB talks about.
nutanix@CVM:xx.xx.xx.25:~$ list_disks
The WARNING seen above contains a path_to_slot_map, which is the expected default mapping of VMD Controllers, VMD root ports, and NVMe disk devices to specific physical disk slots. Each VMD controller has up to four root ports which are capable of serving individual NVMe drives. Below is a transcribed layout of this configuration on the NX-8170-G7.

VMD Controller Number : Rootport Number : Disk Slot
VMD:1:Rootport:0:NVMe:0: 1
VMD:1:Rootport:1:NVMe:0: 2
VMD:2:Rootport:0:NVMe:0: 3
VMD:2:Rootport:1:NVMe:0: 4
VMD:2:Rootport:2:NVMe:0: 5
VMD:2:Rootport:3:NVMe:0: 6
VMD:3:Rootport:2:NVMe:0: 7
VMD:3:Rootport:3:NVMe:0: 8
VMD:0:Rootport:2:NVMe:0: 9
VMD:0:Rootport:3:NVMe:0: 10

The list_disks command throws an ERROR because it detects NVMe disk devices at the below addresses. Notice that these are not designated any Disk Slots in the default mapping shown above. In other words, Root Ports 0 and 1 on VMD Controller number 3 are not intended to be in use, but we detect drives there, so Hades throws an error, refusing to mount the drives.

VMD:3:Rootport:1:NVMe:0
VMD:3:Rootport:0:NVMe:0

The source of truth for this mapping is contained in the hardware_config.json file.
nutanix@CVM:xx.xx.xx.25:~$ less /etc/nutanix/hardware_config.json
Now, if you cross-reference the hardware_config.json output shown above with the layout provided in the output of “list_disks”, we can understand the following:

List_disks warning:
“path_to_slot_map = {u'VMD:2:Rootport:1:NVMe:0': 4, u'VMD:3:Rootport:2:NVMe:0': 7, u'VMD:0:Rootport:2:NVMe:0': 9, u'VMD:2:Rootport:0:NVMe:0': 3,”

In hardware_config.json:
VMD:X corresponds with "vmd_index": "X"Rootport:X corresponds with "hba_address": "X"Final number in sequence corresponds with "slot_designation": "X"
This tells us the following:
nvme0 and nvme1 devices (/dev/nvmeX names are dynamic, just like /dev/sdX names are not a consistent designation for regular SSDs/HDDs) show up in supported VMD Controller root ports and are registered by list_disks slots 3 and 4.However, the other two drives are appearing on the wrong rootports under VMD controller ID 3.nvme2 and nvme3 are registered in VMD3 rootports 0 and 1 when the hardware_config.json expects only rootports 2 or 3 to be populated on this particular controller.
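The lookup that Hades effectively performs can be sketched as a tiny helper that transcribes the NX-8170-G7 default map from above (illustrative only, not Hades code):

```shell
# Transcription of the NX-8170-G7 default path_to_slot_map shown above.
# Returns the physical disk slot for a device path, or "none" for an
# unexpected root port - the condition that makes Hades refuse the disk.
slot_for() {
  case "$1" in
    "VMD:1:Rootport:0:NVMe:0") echo 1 ;;
    "VMD:1:Rootport:1:NVMe:0") echo 2 ;;
    "VMD:2:Rootport:0:NVMe:0") echo 3 ;;
    "VMD:2:Rootport:1:NVMe:0") echo 4 ;;
    "VMD:2:Rootport:2:NVMe:0") echo 5 ;;
    "VMD:2:Rootport:3:NVMe:0") echo 6 ;;
    "VMD:3:Rootport:2:NVMe:0") echo 7 ;;
    "VMD:3:Rootport:3:NVMe:0") echo 8 ;;
    "VMD:0:Rootport:2:NVMe:0") echo 9 ;;
    "VMD:0:Rootport:3:NVMe:0") echo 10 ;;
    *) echo "none" ;;
  esac
}

slot_for "VMD:2:Rootport:1:NVMe:0"   # a mapped path (slot 4)
slot_for "VMD:3:Rootport:0:NVMe:0"   # the unexpected path from the ERROR above
```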
6. Next, check the edit-hades output. Notice that Slots 1 and 2 look as if they are empty. Whereas, if you check Slots 3 and 4, these disks appear normally. If you look at the nvme_pcie_path entry for these working drives, this contains a reference to the VMD controller which is serving those disks.Note: The nvme_pcie_path variable is used exclusively for the SPDK feature and it is set by Hades only when activating this feature. This is not relied upon for the logical slot mappings of physical NVMe disks via their VMD controllers. Nevertheless, it can be useful in spotting inconsistencies between this and the "PCIe path" given by list_disks.
nutanix@CVM:xx.xx.xx.25:~$ edit-hades -p
The address 0000:13:00.0 (domain:bus:slot.function) in nvme_pcie_path shown above can be checked against the VMD controllers that were passed through to the CVM and reflected in lspci output.
nutanix@CVM:xx.xx.xx.25:~$ lspci | grep -i nvme
Furthermore, we can tell which root port the device is detected in by referencing a later component inside the same nvme_pcie_path variable. For example, the address 10002:00:01.0 shown in the nvme_pcie_path for the disk assigned to Slot 4 tells us that this disk is detected on root port 1 (out of a possible 0-3). The first part of the address, 10002, represents the domain. This number gets assigned based on the order in which the VMD controllers are detected by CentOS and can change when the CVM is rebooted. For this reason, the domain cannot be relied upon to consistently link an NVMe device to a specific root port or VMD controller.

Now that we know how to interpret the nvme_pcie_path, it is possible to interpret the errors from the list_disks output. The device paths shown there tell us that the two drives that appeared on unexpected root ports were associated with the VMD controller seen by the CVM at address 0000:1b. Additionally, we can see that these devices appear in root ports 0 and 1.
ERROR:root:Layout file does not have any NVMe information for VMD:3:Rootport:0:NVMe:0 hierarchy for /sys/devices/pci0000:00/0000:00:18.0/0000:1b:00.0/pci10003:00/10003:00:00.0/10003:01:00.0 PCIe path | The crux of the problem is in the way that physical VMD controller PCI devices are mapped to the vmd_index numbers shown in the hardware_config.json file. If this is done incorrectly by our software, then it is possible that Hades will see NVMe devices associated with root ports on VMD controllers that should not be in use.

G7 nodes, which use the Intel Purley generation of CPUs, lack the innate capability to facilitate passthrough (also called direct assign) of a VMD controller to the CVM. To circumvent this architectural limitation, software changes were made both in the hypervisor (ESXi and AHV) and in AOS. A calculation is performed using the BAR address of the VMD controller, as seen by the hypervisor, and an offset stored within the CVM's sysfs, to put the different VMD controllers into a consistent order. This way, the vmd_index number assignments shown in the hardware_config.json file are mapped to the bus addresses of the correct VMD controller devices that the CVM can see in lspci. If something goes wrong in this process, then we may see Hades refusing to mount certain NVMe disks, or the disks may appear as if they are in the wrong slots.
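The nvme_pcie_path interpretation described earlier (the domain is boot-order dependent and unstable, while the device field identifies the root port) can be sketched in Python; the function name is hypothetical:

```python
def root_port_from_vmd_address(addr):
    """Extract the root port from a VMD-domain PCI address of the form
    domain:bus:device.function (e.g. "10002:00:01.0"). The domain
    (10002) depends on boot-time detection order and is ignored; the
    device field identifies the root port (0-3)."""
    _domain, _bus, dev_fn = addr.split(":")
    return int(dev_fn.split(".")[0], 16)

# The disk assigned to Slot 4 in the example sits on root port 1:
assert root_port_from_vmd_address("10002:00:01.0") == 1
# The unexpected drive from the ERROR path sits under root port 0:
assert root_port_from_vmd_address("10003:00:00.0") == 0
```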
Checking if VMD Controller Assignment in CVM is Accurate
Here are the steps you can follow to check if the VMD controllers passed through to your CVM are getting assigned to the proper vmd_index numbers in the hardware_config.json file.
Step 1: Find BAR value from the Hypervisor
In this step, use lspci to gather the hexadecimal BAR value for Region 2 of a VMD controller device, as seen from the AHV hypervisor.

AHV
# lspci -vv | grep Volume -A18
The above output shows that for the VMD controller at hypervisor bus 3a:05.5, the Region 2 BAR value is b6000000 (i.e., 0xb6000000; this notation will be used later on).

ESXi

Since "lspci" on ESXi only accepts a single "-v" flag, the BAR value of the VMD controllers cannot be obtained this way. Here are two ways to get around this limitation:

Option 1 - Simple but indirect method

A. Determine the bus address of the VMD controller(s) you are interested in by querying this with lspci.
nutanix@NTNX-CVM:xx.yy.zz.20:~$ hostssh "lspci -v | grep -A2 Volume"
B. Check inside the vmware.log file for the CVM in the local datastore to find the log line citing the “barIndex 2” hex address for the passthrough device with the same bus address as the VMD controller you found in lspci.
nutanix@NTNX-CVM:xx.yy.zz.20:~$ hostssh "grep 'barIndex 2 type 2 realaddr' /vmfs/volumes/NTNX*local*/Service*/vmware.log"
Option 2 - More complex but definitive method

Alternatively, we can dump the raw lspci output from ESXi to a file and copy it to the CVM, which can then present the full "lspci -vv" containing the BAR address we need.

A. From the CVM which has the problem, copy the raw lspci dump from the local ESXi host to a temporary file on the CVM.
nutanix@CVM:~$ ssh [email protected] 'lspci -e' | tee ~/tmp/esx_lspci_raw.txt
B. Using the “-F” flag, we can run “lspci -vv” against the contents of the text file instead of the local hardware. Gather the Region 2 hex address for the VMD controller(s) you are interested in.
nutanix@CVM:~$ lspci -vv -F ~/tmp/esx_lspci_raw.txt | grep Volume -A8
Step 2: Find the Mapping
AHV

In AHV it is possible to see the PCI address for the VMD controllers directly from the CVM .xml configuration file. In the below output, we see a direct mapping of the hypervisor VMD controller at 0000:3a:05.5 to the same device being inserted into 0000:00:07.0 on the CVM.

In the "lspci -vv" output on the CVM, note the hex address of "Region 2" for this VMD device, as this will be needed later on.

AHV - CVM .xml config file
[root@AHV ~#] virsh dumpxml <cvm-name-in-virsh-list> | less
lspci output on CVM
nutanix@CVM:~$ sudo lspci -s 00:07.0 -vv
ESXi

When the hypervisor is vSphere ESXi, start by getting the "Dependent Device" PCI address for your VMD controller on the hypervisor. You can do this with "esxcfg-info -a", or the same information is shown in the bus, slot, and function attributes for the device in "vim-cmd hostsvc/hosthardware". This specific address is what is referenced in the .vmx config file.
[root@10:~] esxcli hardware pci list | less
Above we can see that the VMD controller which the hypervisor sees at address 0000:3a:05.5 (in lspci output) is labeled domain:bus:slot:function 0:58:5:5 in terms of its role as a passthrough device.

Next, consult the vmx configuration file for the CVM. Find which pciPassthruX number and pciPassthruX.pciSlotNumber is assigned to the device 58:5.5.
nutanix@CVM:~$ hostssh "grep Passth /vmfs/volumes/NTNX*local*/Service*/Service*vmx"
Above, we see that the physical VMD controller to which the hypervisor gave a Dependent Device address of 58 was subsequently mapped to pciPassthru1 and given pciSlotNumber 224.

Finally, once the CVM is booted, we can see the same pciSlotNumber listed as the Physical Slot number for the PCI device when running "lspci -vv" against the VMD controllers inside the CVM. We see from the output below that Physical Slot 224 is recognized at domain:bus:slot.function 0000:13:00.0 on the CVM.

To summarize: the VMD controller seen by the ESXi hypervisor at address 0000:3a:05.5 is ultimately recognized by the CVM at address 0000:13:00.0.
nutanix@CVM:~$ allssh "lspci -vv | grep -A12 Volume | egrep 'RAID|Region 2|Physical Slot'"
Note: The pciSlotNumber value for the VMD controller in the CVM .vmx configuration file on the ESXi hypervisor and the Physical Slot number for the same device shown in "lspci -vv" output on the running CVM do not always match. Sometimes the Physical Slot numbers for the VMD controllers are different from what you find in the .vmx configuration file. This is not necessarily a sign of a problem. Compare BAR memory addresses between the VMD controllers seen by the hypervisor and by the CVM, and make sure that the correct number of VMD controllers are being passed through.
Step 3: Get BAR2 of the found device
Gather the BAR2 (Region 2) memory address for the VMD controller as shown in "lspci -vv" output on the CVM (refer to the Step 2 output above), and then represent this in hexadecimal notation.

AHV
nutanix@CVM:~$ sudo lspci -s 00:07.0 -vv
Looking at the VMD controller at address 0000:00:07 with lspci, the memory address for Region 2 is ec000000. In hex, this is 0xec000000.

ESXi

Recall that in Step 2 we determined that the VMD controller seen by the ESXi hypervisor at address 0000:3a:05.5 is ultimately recognized by the CVM at address 0000:13:00.0. We can get details on the device directly using the command below.
nutanix@CVM:~$ lspci -vv -s 0000:13:00.0
Looking at the VMD controller at address 0000:13:00.0 with lspci, the memory address for Region 2 is f6000000. In hex, this is 0xf6000000.
Step 4: Find offset_1 for this device
The offset is required to find the relative address of the VMD controller with respect to the bare-metal hardware, and this applies to all platforms that use VMD.

On the 8170-G7, which uses the Intel Purley platform, "offsets" files are found in the below-mentioned directory for each VMD controller. These are used by AOS to maintain a consistent mapping between the bus addresses of the VMD controllers and the vmd_index numbers found in hardware_config.json.

In this step, we obtain the value of the offset_1 attribute inside the "offsets" file associated with each VMD controller.

AHV

Replace 0000:00:07.0 in the path below with the full device address shown in "lspci -vv" for each of your VMD controllers.
nutanix@CVM:~$ cat /sys/bus/pci/drivers/vmd/0000:00:07.0/iavmd/offsets
Note: You only need to keep track of offset_1 (it is in hex): 0x36000000

ESXi

In the example below, the contents of the offsets file are shown for each of the four VMD controllers on the system. In the earlier example, we focused on the controller at address 0000:13:00.0, so that one is highlighted below.
nutanix@CVM:xx.yy.zz.20:~$ cat /sys/bus/pci/drivers/vmd/0000:03:00.0/iavmd/offsets
On this ESXi node, the VMD controller at address 0000:13:00.0 has an offset_1 of 0x40000000.

Note: The offset may change from one reboot to another and from one system to another, so always perform the calculation yourself; do not reuse these values as a reference, even for the same platform.
Step 5: Arithmetic calculation
Subtract the value obtained in Step 4 from the value obtained in Step 3. You can paste the equation into a search engine or any calculator to see the result.

AHV

0xec000000 - 0x36000000 = 0xB6000000

ESXi

Below are the results of the calculation when performed against all four VMD controllers on this 8170-G7 system:

0000:03:00.0: 0xfa000000 - 0x52000000 = 0xA8000000
0000:0c:00.0: 0xf0000000 - 0x12000000 = 0xDE000000
0000:13:00.0: 0xf6000000 - 0x40000000 = 0xB6000000 <----------------------
0000:1b:00.0: 0xf4000000 - 0x32000000 = 0xC2000000

Focusing on the VMD controller at address 0000:13:00.0, the calculation yields 0xB6000000.

On CVMs which have multiple VMD controllers passed through, a configuration that is commonly seen in the field, this calculation is performed by AOS in order to achieve consistent "phy-to-slot" mapping between the individual VMD controller devices and the vmd_index numbers shown for each of these in the hardware_config.json file. This is necessary because the PCI domain (e.g., 10000, 10001, 10002, etc.) under which the individual NVMe disks appear is decided by the order in which the VMD controllers are detected by the CVM kernel during startup, meaning that the addressing can change across reboots.
Using this software logic, and assuming that the physical layout and configuration files are all correct, we should always see the same physical NVMe disks associated with the same logical disk slots.

Going back to the arithmetic calculations we just performed, we can replicate the process AOS uses by sorting the resulting hex values and then assigning vmd_index numbers, starting with zero for the lowest value and incrementing the index for each higher value.

Using the four hex values we calculated for the ESXi cluster, sorting them gives us the following mapping of VMD controller bus to vmd_index number:

0xA8000000 - VMD controller with bus 03 is assigned to vmd_index 0
0xB6000000 - VMD controller with bus 13 is assigned to vmd_index 1
0xC2000000 - VMD controller with bus 1b is assigned to vmd_index 2
0xDE000000 - VMD controller with bus 0c is assigned to vmd_index 3

Now that we know the assignment of VMD controller to vmd_index number, we can refer back to the hardware_config.json file to know definitively whether a given NVMe disk is appearing under the correct VMD controller and root port.
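The Step 5 subtraction and the vmd_index sorting just described can be reproduced in a few lines of Python, using the four BAR/offset pairs from this ESXi example (an illustrative sketch of the logic, not the actual AOS implementation):

```python
# CVM bus address -> (Region 2 BAR on the CVM, offset_1 from sysfs);
# values taken from the ESXi example in this article.
controllers = {
    "03": (0xFA000000, 0x52000000),
    "0c": (0xF0000000, 0x12000000),
    "13": (0xF6000000, 0x40000000),
    "1b": (0xF4000000, 0x32000000),
}

# Step 5: relative address = CVM BAR - offset_1.
relative = {bus: bar - off for bus, (bar, off) in controllers.items()}
assert relative["13"] == 0xB6000000

# Sort the relative addresses ascending and assign vmd_index 0, 1, 2, ...
vmd_index = {bus: i for i, (bus, _) in
             enumerate(sorted(relative.items(), key=lambda kv: kv[1]))}
assert vmd_index == {"03": 0, "13": 1, "1b": 2, "0c": 3}
```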
Step 6: Validation
The final step is to compare the results of the calculation in Step 5 against the BAR Region 2 memory address gathered in Step 1. If the two hex numbers match, then we have a correct mapping of physical VMD controllers to the vmd_index numbers and root ports (hba_address) shown in hardware_config.json. If these numbers do not match, then something is wrong with this mapping. Refer to the Solution section of this article for potential causes and workarounds.

AHV
[root@AHV ~]# lspci -s 3a:05.5 -vv
The value obtained through the calculation using CVM offset_1, 0xb6000000, matches the Region 2 memory address of the device as seen on the AHV hypervisor, which was 0xb6000000.

ESXi
nutanix@CVM:xx.yy.zz.20:~$ hostssh "grep 'barIndex 2 type 2 realaddr' /vmfs/volumes/NTNX*local*/Service*/vmware.log"
The value obtained through calculation using CVM offset_1 0xB6000000 correctly matches the Region 2 memory address of the device as seen on the ESXi hypervisor, which was 0xb6000000. This result means that the correct information is being provided by the hypervisor and we should see consistent assignment of physical VMD controllers to vmd_index numbers as shown in hardware_config.json.
Potential Causes and Workarounds
Here's what to do if the Step 6 calculation reveals an inconsistent VMD controller mapping:
If the node you are troubleshooting is an 8170-G7 and the hypervisor is ESXi, refer to KB-13561 http://portal.nutanix.com/kb/13561 to see if you have an unnecessary entry in the passthru.map file on the hypervisor. Follow the steps shown there to modify the file, and then reboot your node to see if this resolves the issue.

For other hardware models or hypervisors, rule out potential BIOS misconfigurations by booting the affected node into the BIOS menu and selecting F3 to load "Optimized Defaults". Then, save and exit the BIOS menu with F4.

If neither of the above fixes your issue, check KB-10265 http://portal.nutanix.com/kb/10265 to see if there is a possible miscabling of the system. This can only occur in single-node platforms, as multi-node (2U2N, 2U4N) models do not have NVMe cables. Consult Hardware Engineering in the #hw Slack channel if you need an internal cabling diagram for a specific NX model.
In case you are unsure of how to proceed, assemble the data you have gathered on the problem and file a TH/ONCALL ticket in Jira, depending on the urgency of the problem.

For reference, the example system used in this article had the following mapping:

Bus address of VMD controller on hypervisor | Bus address on CVM | vmd_index # in hardware_config.json
0000:17:05.5 | 0000:03:00.0 | vmd_index 0
0000:3a:05.5 | 0000:13:00.0 | vmd_index 1
0000:5d:05.5 | 0000:1b:00.0 | vmd_index 2
0000:85:05.5 | 0000:0c:00.0 | vmd_index 3
KB12266 | High CPU usage causes the ping latency on CVMs. | High CPU usage on CVMs can cause the ping process to take a longer time to complete measuring the latency because of the lower process priority. Customer can ignore this alert. | The following alert is generated.
Critical: Latency between CVMs is higher than 15 ms.
The NCC health check reports a Fail status.
Detailed information for inter_cvm_ping_latency_check:
inter_cvm_ping_latency_check shows FAIL from health_server.log.
INFO ncc_task.py:575 [inter_cvm_ping_latency_check] Status for plugin inter_cvm_ping_latency_check is FAIL
High latency was noticed in data/logs/sysstats/ping_all.INFO on all CVMs.
IP : latency (threshold = 0.5 ms, payload = 1472 bytes)
The customer did not find any RX/TX errors or significant traffic issues on their network switch side. | High CPU usage on CVMs can cause the ping process to take longer to complete measuring the latency because of its lower process priority. In this case, the ping process reports higher latency than the actual network latency.
The following sample graphs (in Panacea) show that the ping latency goes up when the CPU idle% drops. (Graphs not reproduced here; panels: "Ping Latency", "CPU Idle on CVMs", and both combined.)
As a signature, the ping latency usually looks fine, but higher latency appears when CPU contention occurs on CVM (CPU idle% drops, for example).
The above is a known symptom, addressed in ENG-224894 https://jira.nutanix.com/browse/ENG-224894 to improve NCC detection accuracy.
The customer can ignore this alert unless it is a chronic issue.
NCC should be upgraded to NCC-4.6.0 or later for the fix of ENG-224894 https://jira.nutanix.com/browse/ENG-224894. Ask customers to upgrade NCC, as the interval and count used to raise the corresponding alert were changed so that it detects continuous latency rather than temporary latency spikes: the interval was changed from 3600 to 300 seconds, and the count from 1 to 3 (ENG-403775 https://jira.nutanix.com/browse/ENG-403775).
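Conceptually, the changed interval and count mean the alert now requires several consecutive high-latency samples rather than a single spike. Below is a minimal illustrative sketch of such logic (the function and parameter names are hypothetical, not NCC code; the 15 ms threshold is taken from the alert above):

```python
def should_alert(latency_samples_ms, threshold_ms=15.0, count=3):
    """Raise the alert only if the last `count` consecutive checks all
    exceeded the threshold (count=3 with a 300s interval per this KB,
    instead of the old count=1 with a 3600s interval)."""
    if len(latency_samples_ms) < count:
        return False
    return all(s > threshold_ms for s in latency_samples_ms[-count:])

assert should_alert([1.0, 20.0, 1.0]) is False         # one-off spike
assert should_alert([16.0, 18.0, 20.0]) is True        # sustained latency
assert should_alert([20.0, 1.0, 20.0, 20.0]) is False  # not consecutive
```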
If chronic or heavy CPU contention is observed on a CVM, it should be assessed, and either reducing the load or expanding the cluster should be planned. |
KB15121 | Script to Fetch Efficiency Details of the VM's from PC | Script to fetch the efficiency details from IDF for VM's in PC | Script to Fetch Efficiency Details of VMs via IDF in PC | NOTE: This script should only be run once, for debugging purposes only, by STLs in the Prism Central VM.

While working on TH-10191, we created a script to be run from the PC CLI to fetch the efficiency details of the VMs via IDF (Behavioral Learning Tools https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2022_9%3Amul-behavioral-learning-pc-c.html&a=89f18873206bfa8c424215c1167b3eacccfa37642954a12f2d4e0602ab8cc2435755b77051b5de2b). Here are the steps to run the script:

1. SSH to the Prism Central VM as the nutanix user.
2. Go to the ~/cluster/bin directory and download the script using the below link: find_vm_efficiency_details_v1.py https://download.nutanix.com/kbattachments/15121/find_vm_efficiency_details_v1.py
3. Verify that the md5sum of the downloaded file is correct:
nutanix@PCVM: md5sum find_vm_efficiency_details_v1.py
4. Now, execute the script using the below command and save its output in ~/tmp directory: -
nutanix@PCVM: python ~/cluster/bin/find_vm_efficiency_details_v1.py > ~/tmp/vm_efficiency_details.out
5. Here is an example of the above script output, which has been saved in the vm_efficiency_details.out file. You can now grep for Inactive or over-provisioned VMs as required:
(u'3f4468e4-38f9-45ee-a38c-e7279afca2f2', u'Veerappan', u'WIN10_2021_DND', u'Inactive', u'{"inactive_vm": "The VM was powered off for the last 553 days."}', u'off', 'Last state was 554 day/s ago') |
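As an illustration, the saved output can also be filtered programmatically. The sketch below assumes each line follows the tuple format shown above (the helper name and parsing approach are hypothetical, not part of the script):

```python
import ast

def filter_by_state(lines, state="Inactive"):
    """Return (uuid, vm_name) pairs for entries matching the given
    efficiency state, parsing each tuple line of the script output."""
    matches = []
    for line in lines:
        fields = ast.literal_eval(line.strip())
        # fields: (uuid, owner, vm_name, state, details, power, note)
        if fields[3] == state:
            matches.append((fields[0], fields[2]))
    return matches

sample = ["(u'3f4468e4-38f9-45ee-a38c-e7279afca2f2', u'Veerappan', "
          "u'WIN10_2021_DND', u'Inactive', u'{\"inactive_vm\": "
          "\"The VM was powered off for the last 553 days.\"}', "
          "u'off', 'Last state was 554 day/s ago')"]
assert filter_by_state(sample) == [
    ("3f4468e4-38f9-45ee-a38c-e7279afca2f2", "WIN10_2021_DND")]
```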
KB15516 | HPE DX Gen11 - One or more drives may be missing post Host reboot | This article describes an issue on HPE DX Gen 11 platforms where one or more drives may go missing after reboot on AOS 6.5.3.5 and (or) above. | On HPE ProLiant Gen11 platforms, one or more drives may go missing in list_disks and iLO after reboot on AOS 6.5.3.5 and (or) above.This issue impacts ESXi hypervisor and the below platform models:
DX380 Gen11 24SFF
DX360 Gen11 8SFF
On iLO, a drive is missing under System Information > Storage > HPE MR216i-p Gen11.

list_disks output:
nutanix@NTNX-A-CVM:10.x.x.55:~$ list_disks
In /home/nutanix/data/logs/hades.out, drive is listed as failed, removed, and tombstoned:
2023-07-10 21:35:47,166Z INFO Thread-7 disk_manager.py:2233 Handling tombstoned disks [u'S6Mxxxxxxxxx04']
Information on the removed drive is provided in IML (Integrated Management Log) (iLO Page > Information > IML):
In /var/log/dmesg on a CVM, a bad drive will be present under megaraid controller:
[ 69.435005] megaraid_sas 0000:03:00.0: 140139 (742339372s/0x0004/CRIT) - Enclosure Encl 252(Port 1I/Box 1) phy bad for slot 8
| Nutanix and HPE Engineering are working to resolve this issue in an upcoming firmware version.

As a workaround, follow the guide to Reboot the host https://portal.nutanix.com/page/documents/details?targetId=vSphere-Admin6-AOS-v6_7:wc-request-reboot-wc-t.html. After the reboot, the drive will be added to the list_disks output and the iLO page.
"NVMe" troubleshooting command reference:
Disk presence: list_disks | lsscsi | lsblk
Logical configuration: panacea_cli show_disk_list | hades.out | panacea_cli show_zeus_config
Disk health: smartctl
Controller health: lsiutil
Logs: stargate.INFO |
KB10953 | Using Nutanix Objects Self-Signed Certificate with Veritas Enterprise Vault | This article describes how to import the Nutanix Objects self-signed certificate to Veritas Enterprise Vault servers. | This article describes how to import the Nutanix Objects self-signed certificate to Veritas Enterprise Vault (EV) servers. Failure to do so will result in an SSL communication error between the Enterprise Vault Servers and Objects.
When Nutanix Objects uses a self-signed CA certificate, that certificate must be imported to all Enterprise Vault servers (storage servers) for proper TLS communication.
This article applies to Nutanix Objects 3.1 and above. | On the Primary Storage System
Perform the following steps on all Enterprise Vault servers to import the Nutanix Objects self-signed CA certificate.
Ensure you have the Nutanix Objects self-signed CA certificate in PEM format.From the Windows Start menu, open mmc.exe.
Select File -> Add/Remove Snap-in.
Select Certificates, then click Add.
Select Computer account, then click Next.
Select Local computer, click Finish, then click OK.
Go to Console Root -> Certificates -> Trusted Root Certification Authorities -> Certificates.
Right-click Certificates -> All Tasks -> Import.
This will open a Certificate Import Wizard.Select Local Machine, then click Next.
Provide the certificate file using Browse button, then click Next.
Select the option Place all certificates in the following store.Browse and select Trusted Root Certification Authorities, then click Next.
Click Finish to complete the certificate import.
Verify imported CA certificate.
Note: Enterprise Vault does not recommend using self-signed certificates.
On the Secondary Storage System
Perform the following steps on all Enterprise Vault servers to import the Nutanix Objects self-signed CA certificate.
Ensure you have the Nutanix Objects self-signed CA certificate in PEM format.
Copy its contents into a clipboard for pasting on the EV server.
The first line should contain the first "-----BEGIN CERTIFICATE-----".
The last line should contain the final "-----END CERTIFICATE-----".
On the EV server, open the cacert.pem file located in the <EnterpriseVault_Install_Path>\OST\x64 directory.
Example: C:\Program Files (x86)\Enterprise Vault\OST\x64
Append the earlier saved Nutanix Objects self-signed CA certificate to the very bottom of cacert.pem, then save the file.
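The append step above can be sketched in Python as follows. This is an illustration only: the helper name is hypothetical, a temporary file stands in for the real cacert.pem, and in practice you should back up the bundle before modifying it.

```python
import os
import tempfile

def append_ca_to_bundle(ca_pem_text, bundle_path):
    """Append a PEM CA certificate to the bottom of a cacert.pem bundle,
    after a minimal sanity check of the PEM markers described above."""
    text = ca_pem_text.strip()
    if not (text.startswith("-----BEGIN CERTIFICATE-----")
            and text.endswith("-----END CERTIFICATE-----")):
        raise ValueError("Not a PEM-formatted certificate")
    with open(bundle_path, "a") as bundle:
        bundle.write("\n" + text + "\n")

# Demonstrate with a placeholder certificate and a temporary file
# standing in for <EnterpriseVault_Install_Path>\OST\x64\cacert.pem:
fake_cert = ("-----BEGIN CERTIFICATE-----\nMIIB...placeholder...\n"
             "-----END CERTIFICATE-----")
with tempfile.NamedTemporaryFile("w", suffix=".pem", delete=False) as f:
    f.write("existing bundle contents\n")
    path = f.name
append_ca_to_bundle(fake_cert, path)
content = open(path).read()
assert content.rstrip().endswith("-----END CERTIFICATE-----")
os.unlink(path)
```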
Note: AN UPGRADE OF THE ENTERPRISE VAULT SOFTWARE MAY REVERT ANY CHANGES MADE TO THE 'CACERT.PEM' FILE, MAKING IT NECESSARY TO REPEAT THESE STEPS AFTER THE UPGRADE.
Note: Enterprise Vault does not recommend using self-signed certificates. |
KB15896 | AHV crash or VM power operation may fail for vGPU enabled VMs | AHV crash or VM power operation may fail with "Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainMemoryStats)" if a VM with vGPU is running on AHV. | The issue symptoms may materialize as either an AHV crash or a VM power operation failing; both scenarios are explained below:
Scenario 1
An AHV kernel panic is triggered when a CPU lock-up is detected, caused by an inability to free up the ring buffers. The below messages can be seen in the vmcore_dmesg log file:
[3413353.807457] CPU: 24 PID: 3712705 Comm: python Kdump: loaded Tainted: P W O 5.10.177-2.el7.nutanix.20220304.441.x86_64 #1
Scenario 2
VM power operation may fail with "Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainMemoryStats)" if a VM with vGPU is running on AHV.Run the following command to find out which host is having a problem:
nutanix@CVM:~$ hostssh 'grep remoteDispatchDomainMemoryStats /var/log/libvirt/libvirtd.log | tail -1'
On the host identified above, libvirtd.log shows blocked tasks in the NVIDIA driver (nvidia_vgpu_vfio), like below, if the power operation fails.
2023-10-20 06:17:28.923+0000: 5375: warning : qemuDomainObjBeginJobInternal:9906 : Cannot start job (modify, none, none) for domain 8b5dfef6-62e0-4393-a00f-5487e27da269; current job is (query, none, none) owned by (5374 remoteDispatchDomainMemoryStats, 0 <null>, 0 <null> (flags=0x0)) for (2656s, 0s, 0s)
This issue is observed with multiple NVIDIA driver versions like 13.3 (470.129.04) and 15.2 (525.105.14).
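The libvirtd warning above can be parsed to extract which RPC holds the job lock and for how long. The sketch below is illustrative (the regex and function name are hypothetical, based only on the log format shown):

```python
import re

# Pattern based on the libvirtd 'Cannot start job' warning format above.
JOB_RE = re.compile(
    r"Cannot start job .* owned by \(\d+ (?P<rpc>\S+),.*for \((?P<held>\d+)s,")

def parse_blocked_job(line):
    """Return (owning RPC name, seconds held) from a libvirtd
    'Cannot start job' warning, or None if the line does not match."""
    m = JOB_RE.search(line)
    if not m:
        return None
    return m.group("rpc"), int(m.group("held"))

line = ("2023-10-20 06:17:28.923+0000: 5375: warning : "
        "qemuDomainObjBeginJobInternal:9906 : Cannot start job "
        "(modify, none, none) for domain 8b5dfef6-62e0-4393-a00f-5487e27da269; "
        "current job is (query, none, none) owned by "
        "(5374 remoteDispatchDomainMemoryStats, 0 <null>, 0 <null> "
        "(flags=0x0)) for (2656s, 0s, 0s)")
assert parse_blocked_job(line) == ("remoteDispatchDomainMemoryStats", 2656)
```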
How to check NVIDIA driver versions:
Run the following command on AHV host
nvidia-smi
The above output gives you the driver version, but not the corresponding release branch such as 13.3 or 15.2 (a single branch such as 13.3 contains multiple driver versions). To find the release branch, log in to the Nutanix Portal and navigate to Downloads >> AHV, use the dropdown at the top and select NVIDIA, then look for the driver version you found, such as 470.129.04. The branch version is written next to the driver version 470.xxx.xx. See the picture below for an example. | NVIDIA has found the resolution for this issue, and it will be included in the 16.3, 17.0, and later releases.

Workaround: Reboot the AHV host where the affected VM is running.

Note: This will require a full power cycle, and the host will likely be unable to enter maintenance mode, nor will VMs migrate. For the workaround to proceed, the customer needs to be made aware that this will trigger an HA event and VM restarts. Impact to VMs is expected.
KB12659 | Nutanix Disaster Recovery - Cross Cluster Live Migration (CCLM) fails with error "Given input is invalid. Multiple IP addresses requested." | VM migration (planned failover) through a recovery plan fails with the error "Given input is invalid. Multiple IP addresses requested" when more than one IP is configured on a single network interface (sub-interface configuration) | Note: Nutanix Disaster Recovery (DR) was formerly known as Leap.

VM migration through a recovery plan fails with the error "Given input is invalid. Multiple IP addresses requested" when more than one IP is configured on a single network interface (sub-interface configuration).
Planned failover (Live and cold migration) is affected by this issue.
The below issue is observed while trying to perform the Planned Migration activity through Recovery Plan for a VM that has multiple IP addresses assigned to a single virtual NIC.
The Recovery Plan initiated from the DR Prism Central (PC) will have the failed tasks below.
nutanix@pcvm$ ecli task.list status_list=kFailed
Checking the failed epsilon task returns the below error log along with the VM name.
nutanix@pcvm$ ecli task.get f9629de2-0758-4790-b2b7-9d1bbdef96c2
The reported VM has multiple IP addresses assigned to a single vNIC.
nutanix@pcvm$ nuclei vm.get applslsapxhXXXX
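To identify affected VMs ahead of a planned failover, each VM's NICs can be scanned for more than one IP address. The sketch below assumes an already-parsed data structure (the field names and function are hypothetical; in practice, the data would come from output such as nuclei vm.get):

```python
def vms_with_multi_ip_nics(vms):
    """Return names of VMs that have any single NIC carrying more than
    one IP address (the configuration that breaks planned failover)."""
    affected = []
    for vm in vms:
        if any(len(nic.get("ip_addresses", [])) > 1
               for nic in vm.get("nic_list", [])):
            affected.append(vm["name"])
    return affected

vms = [
    {"name": "applslsapxhXXXX",
     "nic_list": [{"ip_addresses": ["10.0.0.5", "10.0.0.6"]}]},
    {"name": "vm-ok",
     "nic_list": [{"ip_addresses": ["10.0.1.7"]}]},
]
assert vms_with_multi_ip_nics(vms) == ["applslsapxhXXXX"]
```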
| ENG-445064 https://jira.nutanix.com/browse/ENG-445064 has been created for this issue, which has been fixed in the pc.2022.4 release.
Workaround:
For live migrating the VMs, the "Migrate Outside cluster" option from the VM tab can be used. This option works on individual VMs from the source PC.
|
KB7347 | LCM upgrade on a two-node cluster fails with zookeeper session failure | null | In AOS 5.10 and earlier an upgrade via LCM may fail for a two-node cluster. Upgrade pre-check test_two_node_cluster_checks fails with "failed to revoke token" ERROR :
2019-03-25 11:31:34 ERROR lcm_prechecks.py:283 precheck test_two_node_cluster_checks failed: Reason(s): ['Cannot upgrade two node cluster when cluster has a leader fixed. Current leader svm id:X. Try again after some time ', "Failed to revoke token from 'IP', taken for reason 'life_cycle_management'"]
Logs point to a zookeeper session problem. ~/data/logs/lcm_ops.out on LCM leader:
2019-03-25 10:17:01 INFO lcm_actions_helper.py:368 action_name: get_shutdown_token action_description: Executing pre-actions: getting shutdown token:
~/data/logs/genesis.out on the CVM being upgraded indicates that zookeeper is unresponsive:
2019-03-25 10:17:12 INFO node_manager.py:3144 Stopping 7th service: KafkaService
~/data/logs/zookeeper.out shows that zeus timed out while waiting for the zookeeper session to be established, further confirming that zookeeper is unresponsive:
2019-03-25 10:17:16,725:30122(0x7fa6a8cd3700):ZOO_INFO@zookeeper_interest@1951: Zookeeper handle state changed to ZOO_CONNECTING_STATE for socket [10.xx.x.55:9876]
Other possible signatures in ~/data/logs/lcm_ops.out, later in the workflow (when the node where the upgrade failed is stuck at the phoenix prompt):
2019-03-09 02:07:31 INFO lcm_ops_by_phoenix:249 Preparing phoenix of [172.xx.x.101]
| This is a known issue that is resolved in AOS 5.10.2. Upgrade to AOS 5.10.2 or later. |
KB14176 | Nutanix Kubernetes Engine - Restoring an etcd database snapshot | The Nutanix Kubernetes Engine (NKE) Guide provides steps to backup an etcd database, but does not include the steps to restore the backup. Users should contact Nutanix Support for assistance restoring an etcd database snapshot. | Nutanix Kubernetes Engine (NKE) is formerly known as Karbon. The NKE Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Nutanix%20Kubernetes%20Engine%20(formerly%20Karbon) provides steps to back up the etcd database under the Backing up Etcd section of the Guide. For example, for NKE 2.7, the procedure is documented in the Nutanix Kubernetes Engine Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_7:top-backing-up-etcd.html. The steps result in a snapshot.db file containing the backup snapshot file for use in restoration; however, there are no restoration steps documented in the NKE Guide. | To restore an etcd snapshot that is created via the documented Backing up Etcd procedure, contact Nutanix Support https://portal.nutanix.com (login required). |
KB12030 | Frequent Lazan service restarts on AHV clusters with vGPUs after upgrading to 5.20.x or 6.0.x and later | Frequent Lazan service restarts are observed on clusters with a high number of VMs and/or nodes with vGPU, due to RPCs to Pithos being dropped because their size is above the maximum default threshold of 16 MB.
Clusters upgraded to 5.20.x, 6.0.x, and later, where FEAT-11930 (ADS for vGPU-enabled VMs) is delivered, are susceptible to this issue. | Frequent Lazan service restarts are observed on clusters with a high number of VMs and/or nodes with vGPU, due to RPCs to Pithos being dropped because their size is above the maximum default threshold of 16 MB. Clusters upgraded to 5.20.x, 6.0.x, and later, where FEAT-11930 https://jira.nutanix.com/browse/FEAT-11930 (ADS for vGPU-enabled VMs) is delivered, are susceptible to this issue.
Symptoms
The following alert is raised in the cluster. Note that the alert auto-resolves, so in some cases it is reported intermittently:
Cluster Service ['lazan'] Restarting Frequently
Lazan logs in /home/nutanix/data/logs/lazan.out report a Traceback when attempting to detect hotspots due to RpcClientTransportError for Received buffer too short:
2021-08-21 17:53:54,293Z CRITICAL decorators.py:47 Traceback (most recent call last):
The Traceback above occurs while Lazan is trying to communicate with Pithos. Pithos leader shows the reason for the RPC error in /home/nutanix/data/logs/pithos.out as the RPC size is above the maximum allowed size of 16MB:
E20210831 00:05:59.480355Z 7192 tcp_connection.cc:326] Message too long on socket 23 Max allowed size 16777216 bytes Read packet size 19188085 bytes
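The rejection in the Pithos log above is a simple size gate: payloads larger than the 16 MiB default cap are dropped. A sketch of that check (illustrative; the constant matches the 16777216-byte limit in the log):

```python
MAX_RPC_BYTES = 16 * 1024 * 1024  # 16777216 bytes, the default 16 MiB cap

def rpc_accepted(packet_size_bytes):
    """Return True if a packet fits under the maximum RPC size."""
    return packet_size_bytes <= MAX_RPC_BYTES

assert MAX_RPC_BYTES == 16777216
# The 19188085-byte packet from the log above exceeds the cap:
assert rpc_accepted(19188085) is False
```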
Additionally, in some cases there are also Lazan restarts due to the service reaching its cgroup memory limit and the kernel stopping the service, as can be seen in the dmesg -T output of the CVMs:
[Tue Sep 21 00:58:07 2021] java invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=100
Placement solver thread may lead to Lazan crashes when reaching the default heap limit of 128MB:
java.lang.OutOfMemoryError: Java heap space | This issue is resolved in:
AOS 5.20.X family (LTS): AOS 5.20.4
AOS 6.0.X family (STS): AOS 6.1.1
Please upgrade AOS to versions specified above or newer. |
KB13669 | Removing a duplicate name from the NKE Private Registry | Removing a duplicate name from the NKE Private Registry | Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. By default, NKE does not add additional container image registries to Kubernetes clusters. To use your own images for container deployment, customers can add a private registry to NKE and configure private registry access for the intended Kubernetes clusters. In some rare conditions, an admin can add two (or more) private registries with the same name, and they then cannot be deleted with the standard instructions provided in the documentation. Here is an example of the issue:
nutanix@PCVM:~$ ./karbon/karbonctl registry list
Note: the same name appears with two different UUIDs.
karbon_core.log:
2022-08-25T16:58:54.001Z cfs.go:354: [INFO] acs_stats_table has been updated |
See Deleting a Private Registry: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:top-registry-delete-t.html
Please note: the instruction is based on the registry name.
|
KB10459 | NCC Health Check: usage_discrepancy_check | The NCC usage_discrepancy_check | The NCC health check usage_discrepancy_check reports a discrepancy between a storage container's total logical transformed usage and its per-disk logical transformed usage.Transformed usage refers to storage capacity after applying space-saving optimization features such as Compression https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:sto-compression-c.html, Deduplication https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:sto-dedup-recommend-c.html, Erasure Coding https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:wc-erasure-coding-overview-wc-c.html. The check logic runs for each disk in a node, and a WARN/FAIL status is reported separately for each disk having a discrepancy above the threshold. The minimum value for the check to report a discrepancy is 10 GiB. The Warning threshold is 15%, and the Critical threshold is 25%.
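As an illustration of these thresholds, the following sketch classifies a sample discrepancy (the numbers are hypothetical samples and the check's exact internal formula may differ):

```shell
# Classify a sample discrepancy using the thresholds described above:
# minimum 10 GiB, WARN at 15%, FAIL at 25%.
container_gib=1000   # sample total logical transformed usage
disks_gib=800        # sample sum of per-disk logical transformed usage
diff=$(( container_gib - disks_gib ))
pct=$(( diff * 100 / container_gib ))
if [ "$diff" -ge 10 ] && [ "$pct" -ge 25 ]; then
  echo "FAIL"
elif [ "$diff" -ge 10 ] && [ "$pct" -ge 15 ]; then
  echo "WARN"
else
  echo "PASS"
fi
```

With these sample values, the 200 GiB (20%) discrepancy clears the 10 GiB minimum and the 15% Warning threshold but not the 25% Critical threshold.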
Running the NCC Check
It can be run as part of the complete NCC check by running
nutanix@cvm$ ncc health_checks run_all
Or individually as:
nutanix@cvm$ ncc health_checks stargate_checks usage_discrepancy_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.This check is scheduled to run every day, by default.This check will not generate an alert.
Sample Output
NCC 4.6.0 and prior:
For Status: PASS
Running : health_checks stargate_checks usage_discrepancy_check
For Status: WARN
/health_checks/stargate_checks/usage_discrepancy_check [ WARN ]
For Status: FAIL
/health_checks/stargate_checks/usage_discrepancy_check [ FAIL ]
NCC 4.6.1 and higher:
For Status: WARN
/health_checks/stargate_checks/usage_discrepancy_check [ FAIL ]
For Status: FAIL
/health_checks/stargate_checks/usage_discrepancy_check [ FAIL ]
Output messaging
Description: Check that the discrepancy between the total container utilization and total transformed usage is below the recommended threshold.
Causes of failure: Discrepancy between usage is over the recommended threshold.
Resolutions: Review KB 10459. Engage support to check the disk usage and perform more aggressive background scans.
Impact: Storage usage increasing faster than normal.
| The recommended solution is upgrading to AOS 5.20.4 and higher (or to AOS 6.5.1 and higher) to get the improvement in the logic of Stargate background scans. In case AOS has already been updated to a version with the improvement but NCC usage_discrepancy_check still reports FAIL, consider engaging Nutanix Support https://portal.nutanix.com. NOTE: In situations where large data removal either on the Guest OS level or on the cluster level (removal of VMs and/or their snapshots) has been performed, this NCC check might throw a warning or a failure about usage discrepancy. This can be because enough Stargate disk scans have not yet been completed. Over a few days, extent store scans should fix the usage discrepancy warning flagged by the NCC check. In these cases, NCC will stop reporting the issue after a few days without any intervention. However, if usage_discrepancy_check reports a warning or failure for an extended period of time, consider engaging Nutanix Support https://portal.nutanix.com.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm:~$ logbay collect --aggregate=true
Attaching Files to the Case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294.
If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. |
KB14515 | Space Accounting - Determining VM Space Usage | This article outlines how to determine the VMs with the highest space usage on a cluster. | For other Space Accounting issues not covered in this article, see Space Accounting | General Troubleshooting https://portal.nutanix.com/kbs/14475.
While managing a Nutanix cluster and investigating storage utilization, it can be helpful to investigate which VMs are using the most space and track their growth. Doing this may help identify which VMs can be deleted to recover space or determine which VMs have grown recently in space usage. Of course, VMs should only be deleted if their data is no longer needed. | There are a handful of options for tracking your space usage. It is worth noting that the usages seen with the methods below do not account for cloned data or snapshot data. Cloned data is shared among all of the clones, and so deleting a VM or vDisk that has been cloned will not release all of the space unless all of the clones have been deleted as well. Similarly, if a VM has snapshots taken, deleting the VM may not reclaim the associated space if it is still being referenced by a snapshot.
You can determine how much space your VMs are taking up through the VM page in Prism Element (PE). You can sort based on the "Storage" column to sort by the most used space.
You can see similar information except at the container level on the Storage page within Prism Element. If you click on the desired container, you can see a breakdown of the vDisks in that container and their usage, which is also sortable. The related VM or volume group for the vDisk is shown on the left, so you can relate the disk to a specific VM/VG.
Once you have determined the VMs with the largest space usage, you may be interested to see which VMs have grown in space recently. You can track this from Prism Central (PC). You can follow these steps to accomplish this:
Log into Prism Central
Go to the Operations > Analysis page
Click 'Add Chart'
Select the VM(s) whose space usage you want to track
Select Disk Usage or Snapshot usage
Add a title and hit save
The above steps will yield a chart like the one below. You can have multiple VMs added, and you can adjust the timeframe at the top of the screen. You can see in the example below that the "Windows 2012 VM" saw an increase in usage at around 10:00 AM. This can help you investigate sudden increases in VM space usage. You can make a similar chart to show the snapshot usage of the VM instead of the live usage.
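To quantify a jump like the one in the example above, the growth between two samples read off the chart can be computed directly (a sketch; the sample values are hypothetical):

```shell
# Percentage growth between two disk-usage samples (GiB) read off the chart.
before_gib=120   # sample usage before the increase
after_gib=180    # sample usage after the increase
growth_gib=$(( after_gib - before_gib ))
growth_pct=$(( growth_gib * 100 / before_gib ))
echo "usage grew by ${growth_gib} GiB (${growth_pct}%)"
```

Comparing growth as a percentage of the starting value helps flag sudden increases regardless of a VM's absolute size.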
|
KB14197 | VM migration task may get stuck on AOS 6.0 or newer if Acropolis service is restarted while both "HA failover" and "VM migrate" tasks were running | VM migration task may get stuck on AOS 6.0 or newer if Acropolis service is restarted while both "HA failover" and "VM migrate" tasks were running | VM migration task may get stuck on AOS 6.0 or newer if the following sequence of events happens:
HA failover has occurred on one of the hosts in the cluster.
While HA failover is processing, the VM migration task is initiated (between any other hosts in the cluster).
While both "HA failover" and "VM migrate" tasks are running, the Acropolis leader is changed.
As a result, "VM migrate" will indefinitely stay in the Running state, even post-completion of the "HA failover". | This issue is resolved in:
AOS 6.8.X family (eSTS): AOS 6.8
Upgrade AOS to the versions specified above or newer.
Generic guidance: check the task history, review "HA failover" and "VM migrate" task creation and duration times, and try correlating them with the acropolis.out log to see if the Acropolis leader was changed at the same time.
Open an ONCALL so DevEx/Engineering can review the cluster state and apply a workaround. |
KB12781 | How to capture a sample Pulse payload | This article describes how to capture a sample Pulse payload. | Pulse payloads cannot be analyzed using network monitoring tools such as Wireshark since, for security reasons, all communications from Nutanix clusters to the Nutanix Insights service are encrypted. Hence, to allow customers to access this Pulse payload and get a clear insight into what is being collected from Nutanix clusters and sent back to the Nutanix Insights service, Nutanix has provided the following mechanism. The steps below enable the Pulse payload to be dumped as a JSON file at a node level and can be executed by accessing the CVM (Controller VM) or PCVM (Prism Central VM) via SSH.
Note: Ensure that there are no /home partition space issues on the CVMs prior to executing the following as, after about 12 hours of being enabled, the JSON files can reach a maximum of 10 MB in size. Verify that there are no /home partition space issues on the CVMs by executing the following command:
nutanix@cvm$ ncc health_checks hardware_checks disk_checks disk_usage_check
Even if Pulse is disabled on a cluster, employing the below workflow results in populated JSON files across any CVM from which the Pulse Payload Dumping is enabled. | Download the nutanix_pulse_packet_helper.py script
For NCC 4.6.1 and later, the script is already installed in the system. Therefore, there is no need to download it.
For NCC versions prior to 4.6.1, upgrade NCC to 4.6.1 or later to have access to the script. If upgrading to NCC 4.6.1 or later is not an option at this time, but your NCC version is at least 4.2.0, you can download the script from this link https://download.nutanix.com/kbattachments/12781/nutanix_pulse_packet_helper.py and save it in the CVM's ~/ncc/bin directory.
Downloading the script is not an option for NCC versions prior to 4.2.0. In this case, the only option is to upgrade NCC.
After downloading the script, make sure it has execute permission by running the following command:
nutanix@cvm$ chmod u+x ~/ncc/bin/nutanix_pulse_packet_helper.py
Enable Pulse Payload Dumping
To enable the Pulse packet dumping process, run the following command:
nutanix@cvm$ ~/ncc/bin/nutanix_pulse_packet_helper.py --pulse_data_preview enable
Verify that Pulse Payload dumping is now enabled by executing the following commands:
It may take several minutes before CFS is started on all nodes.
nutanix@cvm$ tail -F /home/nutanix/data/nusights/pulse_data_preview.json
The file /home/nutanix/data/nusights/pulse_data_preview.json is where the Pulse Payload dumping will occur on each CVM or PCVM on which it is enabled.
Disable Pulse Payload Dumping (after a few hours - 12 hours is recommended)
To disable the Pulse packet dumping process, run the following command:
nutanix@cvm$ ~/ncc/bin/nutanix_pulse_packet_helper.py --pulse_data_preview disable
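Before reviewing or sharing a captured payload, it can help to confirm the dump parses as JSON. A hedged sketch (the sample file below is synthetic; on a CVM, point DUMP at /home/nutanix/data/nusights/pulse_data_preview.json, and note some CVMs may provide `python` rather than `python3`):

```shell
# Validate that a Pulse dump file is well-formed JSON before reviewing it.
DUMP=/tmp/pulse_sample.json
printf '{"cluster_id": "demo", "entities": []}\n' > "$DUMP"   # synthetic sample
if python3 -m json.tool < "$DUMP" > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```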
|
KB7902 | Nutanix Cost Governance - General FAQ | Frequently asked questions about Nutanix Cost Governance. | This FAQ lists common questions in the general category. Nutanix Cost Governance is formerly known as Beam. | If a user wants to give “Read-Only” permissions, will they still be able to see cost and security optimization data in their Beam account?
Yes, the user with “Read-Only” permission can see Cost Governance/Security Compliance data depending on the cloud accounts assigned to them by the admin. For more details on permissions, see the Nutanix Beam User Guide https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Beam-User-Guide:bea-user-add-t.html.
How do I raise a support case for Beam?
When logged in to the Beam console, click ‘Help’ in the top right corner and choose the option ‘raise a support ticket’.
How can I enable SAML/SSO for Beam?
For more information on enabling SAML/SSO support for Beam, refer to the Beam User Guide https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Beam-User-Guide:bea-adfs-integrate-r.html.
How can I enable MFA for my Beam account?
Beam uses Google Authenticator to facilitate MFA codes. Click ‘Profile’ under your user name in Beam, then click ‘My Nutanix Profile’ and click MFA. To set up MFA, you will be asked to scan the QR code on Beam from your Google Authenticator app. To learn more, see the Xi Cloud Services Administration Guide https://portal.nutanix.com/#/page/docs/details?targetId=Xi-Cloud-Services-Administration-Guide:adm-reset-multifactor-authentication-xi-t.html.
Where can I access Beam API docs?
Beam API docs can be accessed at dev.beam.nutanix.com https://dev.beam.nutanix.com. |
KB1171 | NX Hardware [Power Supply] – Connecting Power to the Nutanix Block | This article lists the types of power cords and outlets required for a Nutanix block. | This article lists the types of power cords and outlets required for a Nutanix block. | Nutanix recommends 208/240V power across all platforms to ensure power supply redundancy. On the outlet side, we recommend a dedicated 208V, single-phase 20A circuit (NEMA L6-20). However, NEMA L6-30 can also be used.
The Nutanix equipment end is IEC-C13 except for some configurations in the NX-3060/3060N-G8 platforms. Nutanix ships with (2) C13->C14 cords to plug into a PDU. If you do not have a PDU and need to connect directly to the wall receptacle, you will need (2) C13->L6 cords.
Notes:
Make sure that the 2 sources have the same earth ground. If the ground is at a differential potential between the 2 sources, you can have arcing and the NTNX box will merge the 2 grounds.For NX-3060/3060N-G8 platforms, where the Thermal Design Power (TDP) of the CPU is greater than 135 W, use 3000 W PSU (SKUs 4316, 5315Y, 5317, 5220T, 5318Y and 6338T). Use C20/C21 power cable for the C19 PDU outlet with 3000 W PSUs. DO NOT USE a wall adaptor or any other adaptor cable, for example, a C21 to a C14 power cord. Use C13/C14 cable with the 2200W PSU for all other CPU SKUs on NX-3060/3060N-G8 and other Multinode platforms/generations. |
KB15091 | NDB: Migration_orchestrator.py failed to convert the Volume Group to VMDK disk for PostgreSQL Database | The migration script that converts the Volume Group holding the PostgreSQL database data to a VMDK disk failed with the Traceback error below:
File "/opt/migration/VG_to_VMDK_Migration.py", line 761, in migrate_to_disk Migration.__pre_compute_mandate_informat | Terminology:
A solution was provided for customers using the NDB PostgreSQL Database offering to convert the existing PostgreSQL DB VM's Volume Group to a VMDK disk (database disks only).
As part of the current NDB workflow for PostgreSQL database provisioning, the disk is created as a Volume Group.
The custom "Migration_orchestrator.py" script was shared to convert the existing Volume Group to VMDK for PostgreSQL databases managed through NDB.
Validation
The migration script failed while converting the Volume Group to VMDK, and the sub-task to mount the VMDK disk to the DB VM failed with the below traceback error in the "/opt/migration/<DBServer_IP>/<Timestamp>" directory on the NDB Server.
Traceback (most recent call last):
On the PostgreSQL DB VM, inspecting the filesystem showed that the migration script had failed for two disks, "SDG" and "SDH".
[erauser@localhost ~]$ lsscsi
Below are the filesystem details from the DBVM for the "PVS" command.
[erauser@localhost ~]$ sudo pvs
Below are the filesystem details from the DBVM for the "VGS" command.
[erauser@localhost ~]$ sudo vgs
| To fix the issue with the two filesystems "SDG" and "SDH", follow the steps below:
Validate the PostgreSQL Database is running, by running the below command on the DBVM.
sudo systemctl status patroni
If the PostgreSQL Database is running, run the below command to stop the Patroni Service.
sudo systemctl stop patroni
The mount details of the "data" and "tablespace" filesystems can be found in the "fstab" entry. Unmount the filesystems using the below command.
sudo umount <mount_point>    # repeat for all the data and tablespace mounts
Change the Volume Group state to inactive
sudo vgchange -an <volume_group>    # for the data and tablespace volume groups
Change the Volume Group state to active
sudo vgchange -ay <volume_group>    # for the data and tablespace volume groups
Run the command to mount the filesystem back
sudo mount -a
Start the PostgreSQL Database by running the below command.
sudo systemctl start patroni
Run the migration script "Migration_orchestrator.py" again from the NDB Server; the task should then complete successfully. Validate the filesystem by running the "lsscsi" command on the DB VM; all the disks should be listed as virtual disks.
[erauser@localhost ~]$ lsscsi
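The remediation steps above can be consolidated into a dry-run helper that prints the commands for review before anything is executed (a sketch; the volume-group names and mount points below are placeholders to be replaced with the values from `sudo vgs` and `/etc/fstab`):

```shell
# Print (dry-run) the recovery sequence described in the steps above.
print_recovery_plan() {
  local vgs="$1" mounts="$2"
  echo "sudo systemctl stop patroni"
  for mp in $mounts; do
    echo "sudo umount $mp"
  done
  echo "sudo vgchange -an $vgs"
  echo "sudo vgchange -ay $vgs"
  echo "sudo mount -a"
  echo "sudo systemctl start patroni"
}
# Placeholder volume groups and mount points:
print_recovery_plan "data_vg tblspc_vg" "/pg_data /pg_tblspc"
```

Reviewing the printed plan first reduces the risk of unmounting or deactivating the wrong filesystem on a production DB VM.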
In case of an issue, the NDB Engineering Team must be involved through On-Call for further troubleshooting.
|
KB15853 | Lenovo HX3331 nodes shown as HX3330 in Prism | Some Lenovo nodes with model HX3331 may be shown as HX3330 in Prism if they were foundationed with a Foundation version prior to 5.2.2. This KB provides steps to identify and resolve the problem. | Some Lenovo nodes with model HX3331 may be shown as HX3330 in Prism if they were foundationed with a version prior to 5.2.2. This KB provides the steps on how to identify this problem and resolve it. One or more nodes may show an incorrect model in the Prism UI; however, the dmidecode output or XCC (XClarity console) displays the correct model. To determine which of the two known scenarios causing the wrong node model to display in Prism you are experiencing, refer to the following information.
Scenario A. Confirm the purchased hardware model against the model being displayed. If the XCC (XClarity console) shows the wrong VPD data, this needs to be reported to Lenovo Support to get corrected. Once done, follow the steps described in Scenario B.
Scenario B. If the XCC (XClarity console) information is correct but Prism shows the incorrect model, please proceed with the solution below.
Example:
Hardware Page on Prism where the Incorrect model (HX3330) can be seen:
This can also be confirmed by running zeus_config_printer command and looking for the model in rackable_unit_model_name:
nutanix@NTNXCVM:10.10.10.10:~$ zeus_config_printer | grep model
The XCC (Xclarity Console) Page shows the correct model (HX3331):
And the dmidecode output as well shows the correct hardware model:
[root@hostahv~]# dmidecode
| Scenario A: If VPD data on XCC (XClarity console) is incorrect, please contact Lenovo Support to fix it.
Scenario B: If VPD data on XCC (XClarity console) is correct but the Prism UI shows a different model than XCC, please proceed with the below solution:
Remove the node from the cluster (Please refer to the Prism Element Web Console Guide on Removing a Single Node https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_7:wc-cluster-modify-removing-node-wc-t.html)Once removed, update the factory_config.json with the correct model:
Log on to the Controller VM through SSH. Replace the incorrect model in the "rackable_unit_model" value in the /etc/nutanix/factory_config.json file using a text editor. (Use sudo to edit the file.)
nutanix@cvm$ sudo vi /etc/nutanix/factory_config.json
Restart Genesis:
nutanix@cvm$ genesis restart
Add the node back.
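As a sketch, the edit in step 2 can also be scripted with sed. It is demonstrated here on a scratch copy; after backing up the real file, the same sed expression could be run with sudo against /etc/nutanix/factory_config.json:

```shell
# Demonstrate the model-string fix on a scratch copy of factory_config.json.
# The file content below is a simplified sample, not the full factory config.
printf '{"rackable_unit_model": "HX3330", "rackable_unit_serial": "XYZ"}\n' > /tmp/factory_config.json
sed -i 's/"rackable_unit_model": *"HX3330"/"rackable_unit_model": "HX3331"/' /tmp/factory_config.json
grep -o '"rackable_unit_model": "[^"]*"' /tmp/factory_config.json
```

Always keep a backup copy of the original file before editing it in place.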
If further assistance is needed, please contact Nutanix Support. |
KB16540 | File Analytics : Analytics VM mnt partition 100% utilised due to Elastic Search cores. | We ran into an issue on FA 3.3.0 where /mnt was filled almost entirely by Elastic Search cores. | This indicates that the File Analytics VM is utilizing its disk storage more than the accepted threshold. It is applicable to the /mnt partition. This requires freeing up storage capacity on the affected disk. If the disk exhausts its storage capacity, it might block all operations on the FA VM.
1. SSH to the FA VM.
2. Verify the usage of the /mnt partition using the below command.
[nutanix@NTNX-FAVM ~]$ df -h
3. Review the "/mnt/logs/containers/elasticsearch/elasticsearch.out" file; you will notice the "not enough space" message:
[nutanix@NTNX-FAVM ~]$ less /mnt/logs/containers/elasticsearch/elasticsearch.out
4. Review the filesystem /mnt, which is experiencing utilization issues, to identify files that can be deleted and determine which files are consuming the most space.
[nutanix@NTNX-FAVM ~]$ sudo du -aSxh /mnt/* | sort -h | tail | 1. Connect to the bash shell of the Analytics_ES1 docker container:
[nutanix@NTNX-FAVM ~]$ docker ps
2. Check for Elastic Search core files inside the container; these consume significant space:
[root@7fd900fa5220 /]# ls -lathr /usr/share/elasticsearch/elasticsearch-7.16.3/*core*
3. Delete the core files in the Elastic Search container to reduce space usage on the /mnt partition:
[root@7fd900fa5220 /]# rm -v /usr/share/elasticsearch/elasticsearch-7.16.3/core.43332
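The removal in step 3 can be looped over every core file rather than deleting them one by one (a sketch, demonstrated on a scratch directory; inside the container the directory would be /usr/share/elasticsearch/elasticsearch-7.16.3):

```shell
# Remove every core.* dump under a directory and confirm it is empty.
dir=/tmp/es_cores_demo                 # scratch directory standing in for the ES path
mkdir -p "$dir" && touch "$dir/core.43332" "$dir/core.43333"
for f in "$dir"/core.*; do
  [ -e "$f" ] && rm -v "$f"            # -e guards against an unmatched glob
done
ls "$dir" | wc -l
```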
|
KB11741 | Nutanix Self Service (Formerly CALM): Prism Central Projects and LDAP Integration | Prism Central Projects functionality implements a new construct of "Projects" for integration into Nutanix Self Service (Formerly CALM). Projects are a set of Active Directory users with a common set of requirements or a common structure and function, such as a team of engineers collaborating on an engineering project.
This article expands on the Projects concept and provides guidance on using the Project/LDAP integration. | Projects provide logical groupings of user roles to access and use Nutanix Self Service (Formerly CALM) within your organization. To assign different roles to users or groups in a project, you use configured Active Directory in Prism Central. In Nutanix Self Service (Formerly CALM), a project is a set of Active Directory users with a common set of requirements or a common structure and function, such as a team of engineers collaborating on an engineering project. The project construct defines various environment-specific settings. For example:
Permissions, such as the user accounts and groups that can deploy a marketplace application.
The networks that can be used when deploying an application.
Default VM specifications and deployment options, such as vCPUs, vRAM, storage, base images, Cloud-Init or Sysprep specs.
Credentials
Prism Central (PC) allows LDAP to Project mappings at the individual user, user group, or OU level. These constructs are created within the LDAP provider and then mapped to the PC Project. Once this mapping has occurred in Prism Central, an authentication request will be handled as seen in the following diagram.
Logical Representation
Note that LDAP providers will not replicate changes of the directory structure to Prism Central. User changes such as deletion or removal should be performed in Prism Central prior to modification in LDAP. Failure to do so may result in an orphaned entity in Prism Central Projects. If an individual user is removed from LDAP prior to removal from PC Projects, this user will still be shown in the Projects UI, as LDAP will not send an update of the user's status change. Typically, this can be resolved by simply selecting the individual user and removing the user account from Prism Central Projects. In some instances, additional resources are owned by that individual, and the user cannot be removed from the project until these resources are re-allocated to another user within that project. A common error message for this type of scenario would be:
User cannot be deleted as there are resources associated with this user. Change the ownership on the associated entities and try again.
The steps in the solution section of this KB can be used to identify an individual user's access to projects and the resources associated with them. | Identifying an individual user
1) Check the users in PC with nuclei user.list. NOTE: Add "count=400" to the command:
nutanix@PCVM:~$ nuclei user.list count=400
The following additional arguments can be used for user.list command to ease the search in larger environments.
<nuclei> user.list
2) Confirm the individual user(s) which have been deleted in LDAP by the administrator. Confirmation from the sysadmin managing LDAP is highly recommended.Example Output
<nuclei> user.list count=400
Identifying resources assigned to an individual user
1) Gather the UUID from the associated user.
In this example, we will use [email protected] as the user account.
2) Pass this UUID value into the <nuclei> user.get arguments. Example Output:
<nuclei> user.get b35383ba-5d24-54f9-84b2-43d7c55809e5
Note the following fields for identification of the resources associated with this individual user account.
projects_reference_list: A list of projects that this user is associated with
resource_usage_summary: Virtual resources (vCPU and vRAM, VMs, and vNetworks) associated with this account
If you encounter errors when attempting to remove a user account from a project, it is recommended to first reallocate the virtual resources that this individual has exclusive access to, and then re-attempt removing the user from the project.
KB13109 | pc.2022.1: Prism Self Service: The "Edit CD-ROM image" widget shows a blank Image Name for Catalog items | In Prism Central pc.2022.1 or later releases, the Prism Self Service users cannot see the Catalog items' names when they try to mount the catalog items to a VM's virtual optical drive. The "Edit CD-ROM image" widget of the "Edit VM" popup shows an empty Image Name column for each Catalog item. This does not affect pc.2021.9.0.5 or earlier. | In Prism Central pc.2022.1 or later releases, the Prism Self Service users cannot see the Catalog items' names when they try to mount the catalog items to a VM's virtual optical drive. The "Edit CD-ROM image" widget of the "Edit VM" popup shows an empty Image Name column for each Catalog item. This issue does not affect pc.2021.9.0.5 or earlier. | This issue was fixed in pc.2022.6 and later. Please upgrade Prism Central. |
KB9600 | SAML 2 Compliance restricted to ADFS | Official Prism documentation lacks clarity on SAML support; this KB clarifies the limitation. | Customers may want to set up authentication with a third-party provider using SAML. The Nutanix Bible, for example, lists:
SAML / 2FA
Technical document TN-2042-Prism https://portal.nutanix.com/page/documents/solutions/details?targetId=TN-2042-Prism:TN-2042-Prism also states the following:
Configuring Prism to use IDP supports Microsoft Active Directory, OpenLDAP or SAML as an authentication source, which in turn allows organizations to use existing user accounts and groups to control access to Prism. | For improved SAML2 compliance, please update Prism Central to pc.2021.3 or later. Prior to Prism Central version pc.2021.3, Prism Central was not fully SAML 2 compliant and is limited to ADFS implementation only.This is noted in the Nutanix Security Guide https://portal.nutanix.com/page/documents/details/?targetId=Nutanix-Security-Guide-v5_17%3Amul-security-authentication-pc-t.html, in an info block in the "About this guide" section:
Note: ADFS is the only supported IDP for Single Sign-on. |
KB13074 | Prism Central: Prism Central upgrade vDisk - Compatibility matrix for Non-Nutanix clusters and workaround | This article outlines the compatibility, limitations, and workaround for provisioning the Prism Central upgrade vdisk on Hyper-V and non-Nutanix clusters. | Starting with PC.2022.6, a dedicated new 30 GB disk is created and mounted for PC upgrades. This vdisk will be used for downloading and extracting upgrade binaries for subsequent upgrades. There are two ways of getting this implemented:
Greenfield: A fresh PC deployment is called the Greenfield scenario (will be supported in a future version of Prism Central).
Brownfield: A PC upgraded from version n-1 to n is the Brownfield scenario (supported starting PC.2022.6).
Limitations of this feature:
For non-nutanix deployments, we do not support the upgrade vdisk during PC deployment.
For greenfield scenarios, the disk will not be added as a part of the Prism central deployment.
Updated for PC 2024.1: Starting with Prism Central version 2024.1, for greenfield scenarios, the disk will automatically be added during the deployment process. This enhancement is not applicable to earlier versions of Prism Central.
For brownfield scenarios, we provide an option to add a 30G disk manually before the upgrade, which would be considered as the new upgrade vdisk and would be used for subsequent upgrades.
This would be applicable in the below scenarios:
The underlying Prism Element on which Prism Central is hosted is a Hyper-V cluster.
Prism Central is deployed on a cluster where the hosting Prism Element is not registered to the Prism Central cluster (no trust established).
Compatibility matrix:
Brownfield
Note:
There is no NCC check/alert which will notify about this disk addition in Prism Central. In the case of scenario-2 above, where there are out-of-space issues in the interim period, please follow the steps in the solution section to add a 30 GB unformatted vdisk from the Prism UI that can be manually added to Prism Central. This disk will automatically be formatted by the Prism Central code during the upcoming upgrade and used in the subsequent upgrades.
Greenfield
Heterogeneous Vdisk Scenario:This can occur in the following scenario
PC is upgraded from 2022.4 to 2022.6 and gets the PC upgrade vdisk (it is hosted on a PE of a version lower than 6.6).
PC is scaled out; the two new nodes will not have the upgrade vdisk.
In such a scenario, the upgrade vdisk would be added in the next upgrade (2022.9) to the other two nodes, which will then be used by the consecutive upgrades.
Note: When a 3-node PC is in a heterogeneous scenario, the PC tar bundle is downloaded to the upgrade vdisk if the vdisk is available on the node that is the leader at that point; otherwise, it is downloaded to the /home partition. The untar always happens on the /home partition.
Brownfield scenario-1: Hosting PE is registered to PC. The user gets the new 30 GB disk when they upgrade to PC.2022.6 (the AOS version they are on does not matter: for all AOS versions, the disk will be provisioned, for both regular and Legacy Prism Central deployments).
Brownfield scenario-2: Hosting PE is not registered to PC and the user upgrades to:
PC.2022.9: CMSP is made default and trust is established with the hosting PE; the upgrade disk is still not added.
PC.2022.11: PC gets the upgrade disk.
Consequent upgrades: the upgrade disk gets used.
Note: This is just an example scenario, considering PC.2022.9 has the default CMSP enabled.
Brownfield scenario-3: Non-Nutanix clusters: Not supported (includes Prism Central clusters deployed on Hyper-V clusters and Prism Central clusters deployed on Non-AOS clusters).
Greenfield scenario-1: The new upgrade vdisk will be added when Prism Central is deployed on AOS 6.6 and above.
Greenfield scenario-2: No new upgrade vdisk will be added when Prism Central is deployed on AOS 6.6 and above on Non-Nutanix clusters, as stated in the limitations above. Updated for PC 2024.1: Starting with Prism Central version 2024.1, for greenfield scenarios, the disk will automatically be added during the deployment process.
| For brownfield scenarios, in non-Nutanix deployments (where there is no hosting PE), we provide an option to add a 30 GB disk manually before the upgrade that can be used to provision the new upgrade vdisk during the upgrade, which can then be used for subsequent upgrades.
Steps:
1) Select the Prism Central VM from the list of VMs in Prism.
2) Click on Edit and go to the Disks section.
3) Add a new SATA disk of 30 GB size and attach it to the Prism Central VM.
4) Save the configuration and validate that the new disk is showing up for Prism Central.
Provisioning of disks, formatting the file system to extend the space, and adding fstab entries will be taken care of by the cluster during the upgrade to PC.2022.6 (or later), and this upgrade disk can be used for downloading/untarring upgrade packages in subsequent upgrades. Note: This does not apply to Greenfield scenarios. |
KB9899 | Nutanix Files - Backup for millions of files stalls & won't complete using Commvault Backup | Backing up millions of files using Commvault can cause the backup to stall and won't complete. | Customers may experience issues backing up Files environments using Commvault when the dataset is extremely large. For example:
Backing up more than 100 million files may result in very slow backups that eventually time out and fail.
Backing up millions of files using a 3rd-party backup product (Commvault) stalls and doesn't complete.
The Commvault client is configured using the new method 'Nutanix Files Client' and uses Change File Tracking (CFT). |
Confirm that CFT is enabled from the Commvault Console under the 'Nutanix Files Client' properties.
Confirm that caching is turned off (Nutanix Files - Metadata latency showing high for Nutanix Files: /articles/Knowledge_Base/Metadata-latency-showing-high-for-Nutanix-files)
nutanix@FSVM:~$ afs smb.get_conf 'metadata cache ttl' section=global
Confirm that GRO is off (Nutanix Files - Steps to disable GRO (generic receive offload) settings: /articles/Knowledge_Base/Nutanix-Files-disable-GRO-generic-receive-offload-settings)
nutanix@FSVM:~$ afs fs.get_gro
Create multiple small subclients according to the folder/file change rate, and stagger the backups to start at different scheduled times:

E.g.: If the customer has ~200 million files, identify the folders that are static and the ones that change daily/monthly.

Configure all backups using Synthetic Full and Incremental forever.
Back up the folders that are static or have a low change rate in a separate sub-client.
Create a new sub-client for more frequently changing folders; CFT should be able to manage backing up only changed data.
Adjust scheduling so that these jobs do not start at the same time, and stagger the schedule/start time. |
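As a rough illustration of the staggering advice above (this is not a Commvault command; the helper name, base time, and offset are made up), start times for N sub-clients can be planned with a fixed offset between jobs:

```shell
# Hypothetical helper: print staggered start times for N sub-clients,
# starting at a base time (HH:MM, 24h) with a fixed offset in minutes.
# All names and values here are illustrative only.
stagger_schedule() {
  local base="$1" offset_min="$2" subclients="$3"
  local base_min=$(( 10#${base%%:*} * 60 + 10#${base##*:} ))
  local i t
  for i in $(seq 0 $((subclients - 1))); do
    t=$(( (base_min + i * offset_min) % 1440 ))
    printf 'subclient-%d start: %02d:%02d\n' "$((i + 1))" $((t / 60)) $((t % 60))
  done
}

# Example: four sub-clients, 45 minutes apart, starting at 21:00
stagger_schedule "21:00" 45 4
```

The actual schedule would be configured in the Commvault Console; the sketch only shows how staggered start times avoid all jobs launching at once.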
KB13166 | Nutanix DRaaS - L2 Stretch implementation fails for distributed port-groups | DRaaS L2 Stretch is supported for distributed port-groups and subnets in Prism Central pc.2022.4. However, it may still fail with the following error: "error_detail": "Failed Layer2StretchCreatePrechecks \nSubnet prefix length not provided for unmanaged subnet 600 TEST Dist Port Group" | Nutanix DRaaS is formerly known as Xi Leap. Prism Central pc.2022.4 now supports DRaaS L2 Stretch with ESXi distributed ports. However, due to a known issue, the L2 Stretch create workflow in Prism Central fails with the following error:
Failed Layer2StretchCreatePrechecks \nSubnet prefix length not provided for unmanaged subnet 600 TEST Dist Port Group
The error can be seen in Ergon CLI for the failed L2 Stretch create task.
PCVM:~$ ecli task.list | grep kLayer2StretchCreate | This issue is resolved in Prism Central pc.2022.4 which now supports L2 Stretch implementation with subnets for distributed port-groups.However, due to a known problem, the L2 Stretch create task fails in Prism with the above error.A workaround to establish L2 Stretch exists. Please engage Nutanix Support http://portal.nutanix.com/ to implement it. |
KB12092 | Nutanix Cloud Clusters (NC2) - Network creation on Prism Element hosted on AWS fails with "No matching subnet found for CIDR" | Network creation on Prism Element hosted on AWS fails with "No matching subnet found for CIDR". | Network creation on NC2 on AWS fails with the error "No matching subnet found for CIDR". You can see failed tasks for network creation:
nutanix@cvm:~$ ecli task.list
Task details show the error "No matching subnet found for CIDR":
nutanix@cvm:~$ ecli task.get ed45be7e-2d8c-477b-9d8a-8d2a1292de09
~/data/logs/acropolis.out on Acropolis leader shows messages "Failed to create network":
To find the Acropolis leader see KB-4355 https://portal.nutanix.com/kbs/4355
2021-07-29 14:33:40,353Z WARNING libvirt_connection.py:655 Failed to parse script /usr/local/bin/get_gpu JSON output:
CVM attempts to reach AWS time out (see ~/data/logs/network_service.out on the Acropolis leader):
I0729 14:32:30.783147 14397 aws_manager.go:613] plugnic request: 00000000-0000-0000-0000-000000000000,switch_cluster,00000000-0000-0000-0000-000000000000,X.X.X.X,0,50:6b:8d:6a:9f:ee,[sg-12345]
CVM or AHV host hosted on AWS are unable to reach the Internet:
nutanix@cvm:$ allssh "ping google.com"
Tracepath to google.com is reaching the NAT gateway and returns "no reply" after that:
nutanix@cvm:~/data/logs$ tracepath google.com | Verify that there is a subnet created with CIDR x.x.x.x/x on AWS (AWS Console > VPC > Subnets): Make sure that there is a NAT Gateway associated with the subnet in AWS. NAT Gateway enables instances in a private subnet to connect to services outside the VPC using NAT Gateway's IP address. DNS servers should be configured on all the CVMs:
nutanix@cvm:$ allssh " cat /etc/resolv.conf"
An Internet Gateway should be deployed on AWS. The NAT Gateway route table should have an entry 0.0.0.0/0 pointing to the Internet Gateway, which was missing in this case. After pointing the NAT Gateway to the Internet, the CVMs and AHV hosts can communicate with the Internet and networks can be created on NC2 on AWS.
AWS document links: Troubleshoot NAT gateways - Amazon Virtual Private Cloud https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-troubleshooting.html#nat-gateway-troubleshooting-no-internet-connection NAT gateways - Amazon Virtual Private Cloud https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html |
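The missing-default-route condition above can be spotted programmatically in route-table JSON such as that returned by `aws ec2 describe-route-tables`. The sketch below is illustrative only: the JSON sample, gateway IDs, and function name are fabricated, and the grep pattern assumes the compact key layout of the sample rather than the pretty-printed output of the real CLI.

```shell
# Illustrative check: does a route table have a 0.0.0.0/0 route pointing
# at an Internet Gateway (igw-*)? The JSON below is a made-up sample.
routes_json='{"Routes":[{"DestinationCidrBlock":"10.0.0.0/16","GatewayId":"local"},{"DestinationCidrBlock":"0.0.0.0/0","GatewayId":"igw-0abc123"}]}'

default_route_check() {
  if printf '%s' "$1" | grep -q '"DestinationCidrBlock":"0.0.0.0/0","GatewayId":"igw-'; then
    echo "default route via Internet Gateway: present"
  else
    echo "default route via Internet Gateway: MISSING"
  fi
}
default_route_check "$routes_json"
```

In a real environment, a JSON-aware tool (for example jq) over the actual AWS CLI output would be a more robust way to perform the same check.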
KB12683 | Nutanix Files: How to get the size of all the TLDs in a Distributed share. | Get the size of all the TLDs in a distributed share | The following steps can be used to get the size of all TLDs in a distributed share. | 1) Run the below command on any FSVM to create a working directory on each FSVM.
nutanix@FSVM:~$ allssh "mkdir /home/nutanix/tld_size"
2) Move into the tld_size directory.
nutanix@FSVM:~$ cd /home/nutanix/tld_size
3) Run the below command to create a file named tlds_path, which will have the paths for each TLD. Replace <Distributed sharename> with the actual name of the distributed share.
nutanix@FSVM:~$ afs share.tld_count <Distributed sharename> show_names=true | egrep -ve "VG:|NVM|Total"| grep -v '^$' | while read i; do afs share.owner_fsvm <Distributed sharename> path="$i" | egrep "Absolute"| awk -F": " '{print$2}'>>tlds_path; done
4) SCP the file to the other FSVMs. Replace <FSVM IP> with each FSVM storage/internal IP.
nutanix@FSVM:~$ scp tlds_path nutanix@<FSVM IP>:/home/nutanix/tld_size/
5) Once the /home/nutanix/tld_size/tlds_path file is present on each FSVM, run the below command to get the size of the TLDs.
nutanix@FSVM:~$ allssh 'cat /home/nutanix/tld_size/tlds_path | while read i; do du -sh "$i"; done' | grep -v "No such file or directory"
Example:
nutanix@FSVM:~/tld_size$ allssh 'cat /home/nutanix/tld_size/tlds_path | while read i; do du -sh "$i"; done' | grep -v "No such file or directory"
|
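The per-path sizing loop from step 5 can be tried locally against a throwaway directory tree. This sketch mirrors the `du -sh` loop from the command above; the directory names and file contents are made up for illustration:

```shell
# Illustrative local stand-in for the step-5 loop: read a list of paths
# from a file and print the size of each one. One listed path is missing
# on purpose to show that absent paths are simply skipped.
workdir=$(mktemp -d)
mkdir -p "$workdir/tld1" "$workdir/tld2"
dd if=/dev/zero of="$workdir/tld1/file.bin" bs=1024 count=64 2>/dev/null
printf '%s\n' "$workdir/tld1" "$workdir/tld2" "$workdir/missing" > "$workdir/tlds_path"

cat "$workdir/tlds_path" | while read -r i; do
  du -sh "$i"
done 2>/dev/null | grep -v "No such file or directory"
```

On an FSVM, the same loop runs via allssh so that each FSVM sizes the TLDs it owns; errors for TLDs hosted on other FSVMs are filtered out the same way.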
KB14024 | Calm application deployment fails with "Failed getting banner" | During a Calm application deployment an error might occur: Script execution has failed with error "Failed getting banner". | During a Calm application deployment an error might occur:
Script execution has failed with error "Failed getting banner"
This issue indicates an error while Calm tried to connect to the application via SSH. The following SSH-related errors can trigger the 'Failed getting banner' fault:
Connection refused
Connection timeout
I/O timeout | Ensure that the PCVM can communicate with the subnet that is used for application deployment. Port 22 for SSH must be available within the application being deployed. Ensure the blueprint configures SSH on the application before the script is called to prevent the issue. |
KB10149 | How to find replication bandwidth usage for Metro Availability containers | This article helps identify the bandwidth used for Metro containers. | In current AOS versions, the bandwidth used by synchronous replication traffic for Metro Protection Domain(s) is not displayed on charts in Prism UI on the Analysis page (or on the Metrics tab for a selected Metro Protection Domain). These charts display only the bandwidth used by asynchronous replication (which occurs for a Metro Protection Domain only when it is enabled for the first time or when it is re-enabled).
In normal conditions, when the Metro Protection Domain status is already "in-sync", all replication happens synchronously (at the oplog level) and there is no asynchronous replication traffic for this PD. In this state, the Replication Bandwidth (or Replication Bytes) Transmitted/Received charts for Metro Protection Domain(s) on the Analysis page will show no data.
| When Metro Availability is enabled for a container(s), all disk operations from User VMs on the container(s) will be synchronously replicated to the remote cluster. There are two scenarios, depending on which side of the Metro-enabled container (active or standby) the User VMs are running:

If all User VMs are running on the active side of a Metro-enabled container, then only disk write operations from the User VMs will be replicated (sent over the network) to the remote cluster.
If User VMs are running on the standby side of a Metro-enabled container, then both read and write disk operations from the User VMs will be sent over the network to the remote cluster.
Based on this, we can say that if VMs are running on the active side of a Metro-enabled container, the bandwidth consumed by synchronous replication will be roughly the same as the total write bandwidth generated by the VMs on this container. The easiest way to find the minimum required bandwidth between the clusters in Metro is to check the write bandwidth metrics for the Metro-enabled container(s). This can be done using the Analysis page in Prism UI.

Make sure that all VMs are running on the active side.
Go to the Analysis page in Prism UI and create a New Metric Chart:

Provide a title for the new chart.
Select "Storage Container" as "Entity Type".
Choose the "Storage Controller Bandwidth - Write" metric.
From the "Entity" field, select and add (one by one) all Metro-enabled containers.
As a result, a new chart will be created on the Analysis page, which can be used to see historical data (up to 90 days) of write bandwidth on the active side of Metro-enabled containers.
If all User VMs are running only on Metro containers, the same metric can be checked at the cluster level to get total bandwidth values from all active containers. Otherwise (if some VMs are not on the Metro-enabled containers), this total bandwidth should be calculated manually.
Note #1: Keep in mind that bandwidth requirements based on the write load from VMs apply only to the ideal scenario where all User VMs are running on the active side. It is highly recommended to ensure that throughput between the clusters in the Metro relationship is sufficient to handle the load when the clusters are not in an optimal condition (when VMs are running on the standby side, for example during a planned failover, before the standby side is promoted). During this time, read operations from VMs go over the network (the standby side reads data from the remote active side and sees this traffic as Rx), and the write load needs to traverse the network twice (from standby to active, and then from active back to standby). For User VMs running on the standby side of a Metro-enabled container, bandwidth usage on the standby side will be seen as:
Rx bandwidth = VM write load bandwidth + VM read load bandwidth
Tx bandwidth = VM write load bandwidth.
The total bandwidth required in both directions may be significantly higher when VMs are running on the standby side. Insufficient throughput between the clusters in these conditions will impact VM performance.

Note #2: Bandwidth values obtained from the cluster are averages over a short period of time. Real disk load from VMs can consist of short periods of high load (spikes) with pauses between spikes when the load is relatively low. In such cases, the average bandwidth used will show relatively low values, when in reality, during such spikes, the VMs may already be limited in performance by the network throughput between the clusters.
For example, an average of 200 MBps does not mean that the load was steady the whole time. Within some seconds of the sampling interval (or even a fraction of a second), VMs may have generated a spike of more than 400 MBps followed by periods of very low load. On graphs in Prism UI, this spike may not be visible because it was "masked" by the average value based on a 30-second sampling interval. As another example, even without seeing average values of 125 MBps (= 1 Gbps) for total Rx (or Tx), the cluster may sometimes (for very short periods) be hitting the theoretical throughput limit of a 1 Gbps link between the clusters. |
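The averaging effect described above can be shown with a toy calculation: a bursty per-second series whose overall average looks modest even though individual seconds far exceed a 1 Gbps (~125 MBps) link. The sample numbers below are made up purely for illustration:

```shell
# Toy illustration: 30 one-second bandwidth samples in MBps.
# Three 1-second spikes of 400 MBps; the rest near-idle at 10 MBps.
# The 30-second average "masks" the link-saturating spikes.
avg_vs_peak() {
  printf '%s\n' 400 400 400 10 10 10 10 10 10 10 10 10 10 10 10 \
                10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 |
  awk '{ sum += $1; if ($1 > max) max = $1 }
       END { printf "avg=%.0f MBps peak=%d MBps\n", sum / NR, max }'
}
avg_vs_peak
```

Here the 30-second average (49 MBps) sits far below the 125 MBps line rate even though three individual seconds ran at 400 MBps, which is exactly why per-interval averages in Prism charts can hide link saturation.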
KB14625 | Cluster recovery after fire alarm | This article outlines best effort steps to recover cluster in case of a fire alarm. | IMPORTANT: Cluster recovery in case of fire alarm or fire suppression event is a best effort only. Please notify your manager about the incident.
A fire alarm or activation of a fire suppression system can cause multiple HDDs in the cluster to be marked for removal, causing a cluster outage. The following recovery steps can be followed to recover the cluster.
NOTE: Typically, only HDDs are affected by the fire suppression system. The mechanism, roughly: an acoustic shock wave from the gas release nozzles located on the datacenter server room ceiling reaches the nearby server racks where Nutanix blocks are installed. Enterprise HDDs are usually 7200 rpm, comprising multiple magnetic platters of ever-increasing density that require precise positioning of the disk head above them. During the high-pressure gas discharge, a loud (120 dB+) wide-spectrum (white) noise is generated at the nozzle, which can trigger resonant vibrations of the mechanical moving parts inside the HDD, inflicting genuine read/write disk IO errors.
nutanix@cvm$ sudo smartctl -x /dev/sdX -T permissive
Example:
sudo smartctl /dev/sdb -a
Look for counter G-Sense_Error_Rate. This counter increments when the disk detects external shocks.
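As a sketch of how the G-Sense_Error_Rate attribute line can be pulled out of smartctl output programmatically (the sample attribute line below is fabricated, and the raw value of 27 is invented for illustration):

```shell
# Illustrative parser: extract the raw G-Sense_Error_Rate value from a
# smartctl SMART attribute table, where the attribute name is the second
# column and the raw value is the last column.
gsense_raw() {
  awk '$2 == "G-Sense_Error_Rate" { print $NF }'
}

# Fabricated sample line in the smartctl attribute-table format.
sample_smart_output='191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 27'
raw=$(printf '%s\n' "$sample_smart_output" | gsense_raw)
echo "G-Sense raw value: $raw"
if [ "$raw" -gt 0 ]; then
  echo "Disk has detected external shocks"
fi
```

On a CVM, the same filter could be applied to live output, e.g. `sudo smartctl -x /dev/sdX -T permissive | gsense_raw`, to compare the counter across all drives quickly.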
Check the kernel log for errors on the disk, note the time of the errors, and ensure the errors are not continuing:
nutanix@cvm$ dmesg -T | grep sdX
Example:
[Mon Feb 20 13:47:58 2023] Modules linked in: tcp_diag inet_diag ip6table_mangle iptable_mangle mptctl mptbase ipmi_devintf ipmi_msghandler rdma_ucm(O) nf_log_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_multiport rdma_cm(O) ip6table_filter iw_cm(O) ip6_tables xt_limit nf_log_ipv4 nf_log_common xt_LOG ib_ipoib(O) nf_conntrack_ipv4 ib_cm(O) nf_defrag_ipv4 xt_recent xt_conntrack nf_conntrack libcrc32c iptable_filter ib_umad(O) mlx5_ib(O) mlx5_core(O) auxiliary(O) ib_uverbs(O) ib_core(O) mlx_compat(O) mlxfw(O) psample devlink ptp pps_core nfit libnvdimm iosf_mbi ttm kvm_intel drm_kms_helper kvm syscopyarea sysfillrect irqbypass sysimgblt fb_sys_fops crc32_pclmul ghash_clmulni_intel drm ses enclosure aesni_intel drm_panel_orientation_quirks loop ext4 mbcache jbd2 raid1 sr_mod sd_mod cdrom crc_t10dif sg ata_generic pata_acpi
Disks can be added back following the steps below.
Stop Hades service
nutanix@cvm$ sudo /usr/local/nutanix/bootstrap/bin/hades stop
Unmount the disk
nutanix@cvm$ sudo umount /home/nutanix/data/stargate-storage/disks/<disk serial>
Start Hades
nutanix@cvm$ sudo /usr/local/nutanix/bootstrap/bin/hades start
Use disk_operator to accept the disk. Refer KB 5957 https://portal.nutanix.com/kb/5957 for more details.
WARNING: Improper usage of disk_operator command can incur in data loss. If in doubt, consult with a Senior SRE or a STL.
nutanix@cvm$ disk_operator accept_old_disk <disk_serial> |
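The re-add sequence above can be collected into a dry-run helper that prints the commands for a given disk instead of executing them, which is useful for reviewing the plan before touching a disk. This is only a sketch; the function name and serial are placeholders, and the commands themselves come from the steps above:

```shell
# Dry-run sketch: print (do NOT execute) the disk re-add command sequence
# for a given disk serial. The serial below is a placeholder.
readd_disk_cmds() {
  local serial="$1"
  cat <<EOF
sudo /usr/local/nutanix/bootstrap/bin/hades stop
sudo umount /home/nutanix/data/stargate-storage/disks/${serial}
sudo /usr/local/nutanix/bootstrap/bin/hades start
disk_operator accept_old_disk ${serial}
EOF
}
readd_disk_cmds "ZC123ABC"
```

Each printed command should still be run manually, one at a time, with the caution from KB 5957 in mind (improper disk_operator usage can cause data loss).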
KB7578 | Drive Population Rules | A guideline on drive population rules for different types of platforms. | A guideline on drive population rules for different types of platforms. | General

For all up-to-date guidance, refer to the Acropolis Advanced Administration Guide https://portal.nutanix.com/page/documents/details?targetId=Advanced-Admin-AOS-v6_7:app-nutanix-cloud-infra-minimum-field-requirements-c.html.
All SSDs in a node are required to be of similar capacity (+/- 20% accounting for the capacity variation of different disk models).
AOS 6.0 added support for mixed disk capacity for RMA. Note that while Nutanix supports mixing different-sized drives, the new larger drives will only present usable space equal to the other drives in the node. If the intent is to increase the total storage amount, then all drives in a node in a tier will need to be replaced.For pre-6.0 AOS versions, the workaround is to use the disk_skew_manager.py script described in KB-10683 https://portal.nutanix.com/kb/10683.
For storage mixing restrictions, please refer to this Guide (under Storage Restrictions): Platform ANY - Product Mixing Restrictions https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Guide:har-product-mixing-restrictions-r.html
For guidance on resources requirements for DR solutions with Asynchronous, NearSync, and Synchronous (using metro availability) replication schedules to succeed, refer to Data Protection and Recovery with Prism Element: Resource Requirements Supporting Snapshot Frequency (Asynchronous, NearSync and Metro) https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v6_5:wc-dr-nearsync-resource-requirements-r.html.
Dense Storage Nodes
From AOS 5.11 to AOS 5.20, the per-node HDD capacity cannot be more than 120 TB (136 TB for all-flash nodes). At least 2 SSDs with a minimum of 7.68 TB total capacity are required per node. This rule makes sure that there is enough flash capacity for storing metadata in dense storage nodes.
As of AOS 6.5, this maximum capacity has been increased to 185 TB per node (216 TB for all-flash nodes). Note: Depending on the workload (Nutanix Files, Objects etc...), this maximum supported capacity may be higher. See KB 7196 https://portal.nutanix.com/kb/7196 for NCC health check dense_node_configuration_checks that should alert if total HDD capacity per node is not supported by AOS, or if other requirements are not met relative to the workload.
More information:
Google doc "Drive Population Rules": https://docs.google.com/document/d/1RxasggdZqYsV1vc0uIn_gY5ZIvDPxcCxAavARyOIYIo/edit?usp=sharing https://docs.google.com/document/d/1RxasggdZqYsV1vc0uIn_gY5ZIvDPxcCxAavARyOIYIo/edit?usp=sharing
Confluence page "Drive Population Rules" that replaced Google doc: https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=PM&title=Drive+Population+Rules https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=PM&title=Drive+Population+Rules
Confluence page "Dense Storage Nodes": https://confluence.eng.nutanix.com:8443/display/PM/Dense+Storage+Nodes https://confluence.eng.nutanix.com:8443/display/PM/Dense+Storage+Nodes
FEAT-11430 https://jira.nutanix.com/browse/FEAT-11430 Support mix storage capacity for HW Swap is GA with AOS 6.0.
TOI slides: https://docs.google.com/presentation/d/1ml1N3pxqfG0rvby_IfgksrhrwKnprYWR/view https://docs.google.com/presentation/d/1ml1N3pxqfG0rvby_IfgksrhrwKnprYWR/view
For capacity variation of different disk models, see KB 9993 https://portal.nutanix.com/kb/9993 for more details.
Dell XC: Hybrid nodes with an odd number of SSDs are supported, assuming the 2:1 ratio of HDD to SSD is still maintained, but this is not recommended. Reference this Slack thread https://nutanix.slack.com/archives/C08LJ8K0V/p1655133450378659 for the preceding statement. |