{"text":"### Human: Can you tell about NCC Health Check: disk_online_check. ### Assistant: The NCC health check disk_online_check checks if there are offline disks.. The NCC Health Check disk_online_check checks if there are offline or unmounted disks.\n\nRunning the NCC Check\n\nIt can be run as part of the complete NCC check by running:\n\nnutanix@cvm$ ncc health_checks run_all\n\nor individually as:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: Select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 5 minutes, by default.This check generates a Warning alert A101048 after 6 consecutive failures.\n\nSample Output\n\nFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID101048DescriptionCheck for the offline and unmounted disksCauses of failureA disk may have failed or was manually removed.ResolutionsPlease refer to KB 1536 for more information.ImpactCluster performance may be degraded.Alert IDA101048Alert TitleOffline and unmounted disks in a clusterAlert Smart TitleDisks mounted at mount_paths on CVM cvm_ip are marked offline.\t\t\tDisk serials disk_serial_number are not mounted on CVM cvm_ip.Alert MessageDisks mounted at mount_paths on CVM cvm_ip are marked offline.\t\t\tDisk serials disk_serial_number are not mounted on CVM cvm_ip.\nNOTE: It has been observed that this check displays a WARNING regarding the CVM boot volumes for NC2 clusters on AWS, at the same time you may also observe the same disks shows as not mounted in PE UI. You can safely ignore this warning and no further actions are needed. Engineering is aware of the issue and is working to have it resolved in future versions of NCC. \nnutanix@cvm$ ncc health_checks run_all\n . Confirm the current disk status and check if the disks are mounted.\n\n\n\tIn the case of a single disk, check the output of the following:\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\n\tReplace <diskserialnumber> with the serial number of the currently mounted disk.\n\t\n\tFor multiple disks, check the output of the following:\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\n\tReplace diskserialnumber1 and diskserialnumber2 with the serial numbers of the currently mounted disks.\tFor example, if you enter Z1ZBAHZ0 for disknumber1 and Z1ZBAAEE for disknumber2:\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\t\n\tConfirm that there are no issues with Hades or offline mounting paths. If there are no issues, a blank output is generated:\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\t\n\tNow, confirm that there are no phantom entries in the disk configuration json files for the disk serial number you enter:\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\n\tReplace diskserialnumber with the serial number of the currently mounted disk. For illustration, Z1ZBAAEE and Z1ZBAHZ0 are used here.\tThe correct output must look similar to the following:\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\t\n\tIf these conditions are met, proceed to remove the disk IDs with:\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\n\tReplace <disk ID to be removed> with the ID of the disk you want to remove.\tFor example, for disk ID 135646378 and ID 135646379,\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\t\n\nWait for the operation to complete. 
This might take up to 30 minutes, sometimes longer.The Prism web console should no longer contain those erroneous disk entries. Confirm by running this check\n\nnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\nIf a disk is marked or reported as offline and does not have a phantom entry, then the disk is not being utilized by the cluster and might potentially have failed.\n\n\n\tAs an additional check, you can manually check the failing disk:\n\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\t\n\nReplace sdX with the disk device node you want to check.Run the following command from a CVM (Controller VM) to determine the mapping of drive to disk serial number:\n\nnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\nRun the following command to check based on the above output:\n\nnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\n    Note: For Hyper-V, smartctl is located on the Hyper-V host.\n\n\n\tFrom the CVM, run winsh to get into the PowerShell.\n\t\n\tFrom the PowerShell, go to C:\\\"Program Files\"\\Nutanix\\Utils\\\n\t\n\tRun the following command for SSD:\n\tnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\t\n\t\n\nOr the following for HDD:\nnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n\n\nNote: The -a parameter is used in smartctl to list all the stats for SSD or HDD. If you do not use the -a parameter, only the test result as either PASSED or FAILED is displayed. \n\nYou may be able to check with the DiskSerialNumber in the hades.out logs to understand why the disk was marked for removal and when.\nFor Example, here DiskSerialNumber is WSE02E46  \nnutanix@cvm$ allssh 'df -h | grep <diskserialnumber>'\n \n\nFor assistance in obtaining and installing a replacement drive, engage Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Configuring AOS for VMware Fault-Tolerance on ESXi 6.7. ### Assistant: AOS 5.19.x now supports VMware Fault Tolerance running ESXi 6.7. The Nutanix cluster requires configuring two Stargate timeouts as well as RPC timeouts on ESXi to ensure the timely failover of NFS services.. Starting from AOS 5.19.x, VMware Fault Tolerance is now supported with ESXi 6.7. To enable this feature, two Stargate timeouts, as well as RPC timeouts, must be configured on ESXi hosts to ensure timely failover of NFS services as required to support Fault Tolerant VMs.\n\nIn addition to the two Stargate timeouts, the following two parameters value must be increased to 30 seconds on each ESXi host in a cluster:\n\n/NFS/CreateRPCTimeout/NFS/RenameRPCTimeout. Contact Nutanix Support for assistance to set the Stargate timeouts. This configuration change requires a rolling restart of the Stargate process on each node of the cluster.\n\nTo set the ESXi host parameters /NFS/CreateRPCTimeout and /NFS/RenameRPCTimeout follow instructions below:\n\nTo set the parameters:\n\n[root@ESXi]# esxcfg-advcfg -s 30 /NFS/CreateRPCTimeout\n\nTo check the parameters:\n\n[root@ESXi]# esxcfg-advcfg -s 30 /NFS/CreateRPCTimeout. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix user being locked out from scale-out PC. ### Assistant: Nutanix user is locked on scale out Prism Central. 
Users cannot login as \"nutanix\" via SSH to a scale-out Prism Central deployment.\nAs an admin user via SSH, running the below command, it's possible to notice that the \"nutanix\" user is locked due to several \"su -l\" commands:\n\nadmin@PCVM:~$ sudo faillock --user nutanix| tail -15\n\nSwitch to root with \"sudo su -\" and look into /home/log/secure logs for the below error message:\n\nadmin@PCVM:~$ sudo faillock --user nutanix| tail -15\n\nAnother way to identify the issue is to check /home/log/messages file:\n\nadmin@PCVM:~$ sudo faillock --user nutanix| tail -15\n\nBy tracing one of the failed PIDs in sysstats top.INFO log, it points to the below command:\n\nadmin@PCVM:~$ sudo faillock --user nutanix| tail -15\n\nWhich turns out to be a command run every minute by the vip_cleanup script:\n\nadmin@PCVM:~$ sudo faillock --user nutanix| tail -15\n** NOTE **: The script only blocks 2 PCVMs out of 3 since it does not run on the PCVM with the VIP. This is also why it only affects some PCVMs, as there's always a VIP in scale-out deployments. If needed, it's possible to unlock the \"nutanix\" user with the below command in the affected PCVM: \n\nadmin@PCVM:~$ sudo faillock --user nutanix| tail -15. This issue is resolved in pc.2022.6.0.6. Nutanix recommends that the user upgrade Prism Central to pc.2022.6.0.6 or higher. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Prism Central slowness as it receives a huge amount of incoming v1/users/details API calls from PE Clusters causing availability issues. ### Assistant: Prism Central deployment with a large count (hundreds) of managed PE nodes (CVMs) may experience prism gateway service availability problems due to the constant flow of incoming PrismGateway/services/rest/v1/users/details API calls coming from Prism Elements with very high frequency.. Prism Central deployment with a large count (hundreds) of managed PE nodes (CVMs) may experience prism gateway service availability problems due to the constant flow of incoming PrismGateway/services/rest/v1/users/details API calls coming from Prism Elements with very high frequency.Identification:Verify that Prism Central manages multiple Prism Elements with a large total count of nodes in connected PEs (hundreds); To check the total number of managed PE nodes, use the command below:\nnutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l\nSymptoms may include:\n\nThe constant incoming flow of v1/users/details API calls in prism_proxy_access_log.out from connected Prism Elements. 
The frequency of incoming v1/users/details calls is very high (hundreds per minute)\n\tnutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l\n\tPrism Gateway service may be killed with OOM:\n\tnutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l\n\tYou may see frequent entries of pods getting killed with OOM:\n\tnutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l\n\tSome API calls to Prism Gateway service may return HTTP response code 429 Too Many Requests:\n\tnutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l\n\tIAM-enabled Prism Central may experience iam-proxy pod OOM restarts:\n\tnutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e node -a | grep entity_id | wc -l\n\t\nOn every Prism Element, the CVM's ikat_control_plane service sends an excessive number of PC availability checks, which causes a DDoS-like effect for PCs managing hundreds of nodes and results in this issue. Note: The above symptoms are not always caused by the problem discussed in this article. Technically, any source of API requests incoming to the PCVM with high frequency may cause similar issues. This KB article is relevant only for PC deployments managing hundreds of nodes and experiencing problems due to connected PEs sending large counts of v1/users/details API calls.. This issue is resolved in AOS versions 6.6.1, 6.5.2, and above. Upgrade the connected Prism Element clusters to the fix versions.\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A110026 - Nutanix Cloud Clusters (NC2) - CloudAPIEndpointUnreachable. ### Assistant: This Nutanix article provides the information required for troubleshooting the alert CloudAPIEndpointUnreachable for your NC2 cluster.. Alert Overview\n\nNutanix Clusters deployed in public clouds are required to make cloud API calls for several operations such as VM IP address assignment validations, VM power-on, VM power-off, etc. These operations may face issues if cloud API calls fail due to underlying cloud infrastructure issues like network connectivity misconfiguration(s), temporary outages, permissions misconfiguration, etc. The CloudAPIEndpointUnreachable alert is generated when the backend service detects a failure executing cloud API calls due to cloud API endpoint unreachability.\n\nSample Alert \n\nCheck ID110026DescriptionConnectivity to Cloud API Endpoint Failed.Impact Networking operations may be impacted on the affected node(s) in the cluster.Alert Title Cloud API Endpoint UnreachableAlert IDA110026Alert Smart Title Cloud API Endpoint may be down or unreachable: {message}Cause #1Cloud API endpoints may be unreachable due to intermittent network connectivity issues in the underlying public cloud.Resolution #1Check if the cloud infrastructure is down due to an outage.Cause #2Missing NAT gateway on Cluster Management subnet.Resolution #2Ensure the route table for the Cluster Management subnet in the cloud contains a default route (0.0.0.0/0) to the NAT gateway. 
Add the route if not present.Cause #3Cloud subnets may have incorrect route entries in the route tables.Resolution #3Check if the route tables for all cloud subnets contain the appropriate routes from the cloud management console.Cause #4IAM role, policy misconfiguration.Resolution #4Ensure IAM roles and policies are configured correctly.Cause #5Security group(s) and/or firewall misconfiguration.Resolution #5From the cloud management console, check if the inbound and outbound rules are correctly configured in the cloud security group(s). Ensure firewall rules do not block access..  Check if the cloud infrastructure is down due to an outage. To get additional details, log in to a CVM in the cluster and check the Network Service logs at \"/home/nutanix/data/logs/network_service.out\". Validate Cloud API endpoints are functional:\n\tCheck cloud components (NAT/Internet GW, Subnet, etc.) are not accidentally misconfigured or deleted. Check firewall policies are not misconfigured. Check route tables contain correct route entries. Check network access control list (ACL) and security group are not misconfigured. \n\tAWS deployments\n\tCheck if the correct IAM role has been assigned to the instance by executing 'curl 169.254.169.254/latest/meta-data/iam/security-credentials; echo' in the CVM. The output should display 'Nutanix-Clusters-High-Nc2-Cluster-Role-Prod'. If the role is not attached to the instance, attach it using the steps below:\n\t\tOpen the AWS Mgmt Console and type 'EC2' in the search bar. Select EC2 service. Make sure that the AWS region is the one where the Nutanix Cluster is deployed. In the EC2 search bar, select the filter as VPC-ID. Specify the VPC ID in which the cluster is deployed. The VPC ID can be obtained from the NC2 Console page. Select the required instance from the filtered list of instances. Choose Actions, Security, Modify IAM role. Select the IAM role 'Nutanix-Clusters-High-Nc2-Cluster-Role-Prod' to attach to your instance and choose Save. \n\t\t\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about VM power on failure on Nutanix Cloud Clusters (NC2) on AWS. Error: Failed to PlugNic. ### Assistant: Sharing AWS subnets between multiple NC2 clusters or reusing AWS subnets results in VM power-on task failure.. NC2 does not support sharing the same AWS subnet among multiple clusters or reusing AWS subnets. Other instances or services in the VPC may occupy IP addresses in AHV's pool of IPs. The AWS EC2 API does not provide a method to list used/available IP addresses. If you reuse or share AWS subnets in a cluster or among multiple clusters, powering on or migrating a VM may result in a failure with the message shown in the screenshot. Log messages in /home/nutanix/data/logs/acropolis.out on the CVM report \"Failed to PlugNic\":\nnutanix@NTNX-i-0XXXXXXXXXXX-A-CVM:x.x.x.186:~$ allssh 'grep \"Failed to PlugNic for IP\" /home/nutanix/data/logs/acropolis.out'. Scenario 1. Multiple NC2 clusters sharing the same AWS subnet\nNutanix Cluster A is using Subnet A. VM in the cluster is connected to Subnet A. Nutanix Cluster B is configured with a Network with Subnet A. Failure to power on a VM in Nutanix Cluster B\nSolution for Scenario 1\n\nConnect all VMs in Nutanix Cluster A to a subnet other than Subnet A. 
Alternatively, delete all VMs in Nutanix Cluster A. Delete the Network with Subnet A via the Prism Web Console in Nutanix Cluster A. Run the command from one of the CVMs in Nutanix Cluster A:\n\tnutanix@CVM:~$ hostssh 'cloudnet forcerun-gc br0.uvms'\n\tPower on the VM in Nutanix Cluster B\nScenario 2. Reusing AWS subnets in NC2\n\nNutanix Cluster used to use Subnet A in the past. Nutanix Cluster is configured with a Network with Subnet A. Failed to power on VM in the Nutanix Cluster\nSolution for Scenario 2\n\nConnect all VMs in Nutanix Cluster A to a subnet other than Subnet A. Alternatively, delete all VMs in the Nutanix Cluster. Delete the Network with Subnet A via the Prism Web Console. Run the command from one of the CVMs in the Nutanix Cluster:\n\tnutanix@CVM:~$ hostssh 'cloudnet forcerun-gc br0.uvms'\n\tCreate a Network with Subnet A via the Prism Web Console. Connect the VM in Nutanix Cluster A to Subnet A. Power on the VM in the Nutanix Cluster\nScenario 3. IP address occupied by a non-Nutanix instance or other AWS object/service in the VPC.\n\nNutanix Cluster is configured with a Network with Subnet A. Customer adds a non-Nutanix AWS instance, a transit gateway, or other AWS object or service endpoint in the same VPC. Failed to PlugNic error similar to the following is seen: \n\tnutanix@CVM:~$ hostssh 'cloudnet forcerun-gc br0.uvms'\n\tSolution for Scenario 3\n\nConfirm the IP address is used intentionally (it should be identifiable from the AWS console). The error message above will identify the ENI in the InterfaceID field. Run the command from one of the CVMs in the Nutanix Cluster to blacklist the IP reported in the error.\n\tnutanix@CVM:~$ hostssh 'cloudnet forcerun-gc br0.uvms'\n\tRetry the operation.\nNote: The command cloudnet forcerun-gc br0.uvms is non-disruptive to the running UVMs. It cleans up ENIs that are not in use by UVMs. It is a per-host command, which is why it is run with hostssh. If you only have issues with one host, you can also run the command on a single AHV host. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about How to create long living tokens to integrate CI/CD pipeline in Nutanix Kubernetes Engine. ### Assistant: How to create long-living tokens to integrate a CI/CD pipeline.. Nutanix Kubernetes Engine is formerly known as Karbon or Karbon Platform Services. The Nutanix Kubernetes Engine default token is valid for only 24 hours, which makes integration difficult with external components like a CI/CD pipeline and a Kubernetes cluster deployed by Nutanix Kubernetes Engine. . Nutanix recommends using the Nutanix Kubernetes Engine API to regenerate the kubeconfig before the 24-hour expiry and integrating that process into your CI/CD workflow. See Nutanix Dev for reference. Alternatively, you can use a Nutanix Kubernetes Engine Service Account. 
This is not recommended as it requires an experienced administrator.In the procedure below, we are going to create a service account for Jenkins integration.\n\nFor the sake of simplicity, admin privilege is assigned via ClusterRole.More restricted access can be assigned using RBAC.\n\nCreate a service account\n\t$ kubectl create serviceaccount jenkins\n\tCreate a role binding based on the permission needed by the application\n\t$ kubectl create serviceaccount jenkins\n\tExtract Service account token.\n\t$ kubectl create serviceaccount jenkins\n\t\tClusters running Kubernetes > 1.24:\tStarting Kubernetes 1.24, Kubernetes won't generate Secrets automatically any longer for ServiceAccounts. This means the command from step 3 will return an empty output on k8s > 1.24\tIf the cluster is running  Kubernetes 1.24 or above, use the alternative step 3a:\t\t3a. Create a service account token secret or create the token manually for the ServiceAccount\tOption 1: Create a secret of type service-account-token\n\t$ kubectl create serviceaccount jenkins\n\t\tThen extract the token\n\t$ kubectl create serviceaccount jenkins\n\t\tOption 2: Create a token manually for the Service Account:\n\t$ kubectl create serviceaccount jenkins\n\tDownload a new kubeconfig file from Karbon.Update the token in the kubeconfig file with the token we generated in the steps above.Use the modified kubeconfig in Jenkins pipe to integrate the k8s cluster.\n\n$ kubectl create serviceaccount jenkins\nImportant Notes:\n\nNutanix recommends using kubeconfig from Rest API or UI for User logins.If you have to use Service accounts limit restrict the use to service integration like CI/CD pipeline. Avoid using service accounts for general-purpose tasks.Distributing long-lived tokens to users may introduce secret sprawl, thus, administrators must ensure that the tokens are not used for unintended purposes.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Adding or Removing a PCI device causes LSI controller to disconnect.. ### Assistant: Currently if you add a PCI device, physical or virtual, to an ESXi host the PCI bus number enumeration will change. This causes the LSI Adapter to go into a disconnected state on the Nutanix CVM.. Currently if you add a PCI device, physical or virtual to an ESXi host the PCI bus number enumeration will change. This causes the LSI Adapter to go into a disconnected state and the CVM will not power on.\n\nUCS is inherently susceptible to this as you can add and remove as many vNIC's and vHBA's to a service profile as you need, but this has also been observed on the NX-8150 when adding a 10GB NIC card. It may also occur if a customer adds a new 10GB NIC if they initially only purchased and set up the cluster with 1GB NIC's.\n\nTo identify this, check on the individual hosts configuration under Configuration > Advanced Settings you will see the message \"No devices currently enabled for passthrough\"\n\n\n\nThe CVM itself will not power on. If you look at the VM settings you will see \"Unavailable Device\" for the PCI attached device 0.. To resolve this, in the respective host vSphere instance, go to the Host Configuration > Advanced Settings page and click Configure Passthrough on the right-hand side. From here, find the LSI Logic (LSIModel) device(s) / ID(s), check the box and click Ok to enable it. 
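If it helps to confirm which PCI devices are the LSI HBAs before toggling passthrough, the two commands below are a generic sketch that can be run from an SSH session on the ESXi host (device names, IDs and output layout are illustrative and will differ per platform and ESXi build; they are not taken from this article):

[root@ESXi]# lspci | grep -i lsi
[root@ESXi]# esxcli hardware pci list

The first command lists the PCI address and model string of each LSI controller, and the second prints the full PCI device details, which can be matched against the IDs shown on the Configure Passthrough page.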
Some nodes, like the NX-8150-G5, can have one to three LSI HBA cards.\n\nNext, reboot the host for the configuration to take effect. After reboot, the controller should be listed as below. Once the host is back up, edit the CVM configuration in vCenter. Remove the old PCI device(s), and add the new one(s). Select the LSI adapter(s) where PCI Passthrough was just made active, and save. After hitting OK, power on the CVM.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Prism Central - pc.2021.9.0.5, pc.2022.1, or later: Unable to access PE via Cluster Quick Access after upgrading the AOS of the PE. ### Assistant: After upgrading AOS in a Prism Element which is managed by a Prism Central running version pc.2021.9.0.5, pc.2022.1 or later, you may not be able to access the Prism Element by using the Cluster Quick Access feature of Prism Central.. After upgrading an AOS cluster that is managed by a Prism Central running version pc.2021.9.0.5, pc.2022.1, or later, the user may not be able to access the Prism Element by using the Cluster Quick Access feature of Prism Central. Behind the scenes, the Prism service in Prism Central downloads the new Prism UI data from the upgraded Prism Element. This Prism service may have downloaded the new Prism UI data from PE but may not extract it for Cluster Quick Access use.\nIn the /home/nutanix/data/logs/prism_gateway.log in the Prism Central VM, you may find the ProtobufZNodeManagementException$BadVersionException exception happening with the following stack trace almost every minute.\n\nnutanix@PCVM:~$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'\n\nIf the Prism service has been restarted, you may see this ProtobufZNodeManagementException$BadVersionException once after the restart, and you may also see NullPointerException happening almost every minute, as follows.\n\nnutanix@PCVM:~$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'\n \n\nnutanix@PCVM:~$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'\n\nUsers may also notice that the Prism service downloaded the Prism UI data from the upgraded PE, but no \"console\" directory is created. For example, if you have upgraded your PE to AOS 5.20.2, Prism Central will download the \"PrismUI.tar.gz\" file from the PE, and then this file will be put in the \"/home/apache/www/console/el7.3-release-euphrates-5.20.2-stable-9fe17e864a79fb1f4e58eda7a12075f9427ca364\" directory. Then Prism Central will extract this file into the \"console\" directory in the same directory as the \"PrismUI.tar.gz\" file. But this \"console\" directory is not created in this case.\n\nnutanix@PCVM:~$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'\n\nYou may find in the \"/home/nutanix/data/logs/delete_old_prismui_files.log\" file that the delete_old_prismui_files.py script had run and deleted old unnecessary Prism UI data files before the AOS upgrade.\n\nnutanix@PCVM:~$ cat ~/data/logs/prism_gateway.log | grep 'ProtobufZNodeManagementException$BadVersionException'. This issue has been fixed in these Prism Central releases.\npc.2021.9.0.6, pc.2022.6.0.6, or later\nPlease upgrade Prism Central before carrying out an AOS upgrade on PE clusters. 
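As an optional quick check of whether a Prism Central VM is affected, the command below is a generic sketch that reuses the AOS 5.20.2 example path from above (substitute the release string matching the upgraded PE; the path is illustrative, not an exhaustive diagnostic):

nutanix@PCVM:~$ ls -l /home/apache/www/console/el7.3-release-euphrates-5.20.2-stable-9fe17e864a79fb1f4e58eda7a12075f9427ca364/

If the listing shows the downloaded \"PrismUI.tar.gz\" but no extracted \"console\" directory, the behaviour matches the symptom described in this article.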
If you have already been experiencing this issue, please get in touch with Nutanix Support for recovery.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: vmknics_subnet_check. ### Assistant: The NCC health check vmknics_subnet_check verifies if vmknics on ESXi host have IP addresses in the same IP subnet.. The NCC vmknics_subnet_check test verifies if vmknics IP addresses are configured on the same subnet on any given host.\n\nNote: vmknics in the same subnet on the same ESXi host is an unsupported configuration.\n\nRunning the NCC Check\n\nRun this check as a part of the complete NCC checks.\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr run the vmknics_subnet_check check individually.\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is not scheduled to run on an interval.\n\nThis check does not generate an alert.\n\nSample output\n\nIf multiple vmknics are found with IP addresses in the same subnet, the check reports a warning, and an output similar to the following is displayed.\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID106442DescriptionCheck if vmknics have different subnetsCauses of failurevmknics have ip address configured in the same subnetResolutionsCorrect the IP addressing in the network.\t\t\tReview KB 2722 for more details.Impactvmknics in the same ip subnet on the same esxi host is unsupported.. If vmknics_subnet_check reports a WARN status, correct the IP address on the ESXi host that is reported by the NCC.\n\nNote: Nutanix does not recommend using two vmknics in the same subnet on ESXi.\n\nWhen the two vmknics reside on the same subnet, ESXi is unable to split vMotion traffic and other management traffic apart according to the GUI configuration. vMotion is a burst-type workload that uses no bandwidth until DRS or a vSphere administrator starts a vMotion (or puts a host into maintenance mode). But when a vMotion starts, the network interface gets saturated as the vMotion is also used for the Nutanix Cluster. Nutanix recommends using two different subnets.\n\nVerify the vmknics configuration and rectify the configuration so that the vmknics belong to different subnets.\n\nAfter the problem is resolved and the vmknics are in different subnets or one of the vmknics is removed, restart the CVM (Controller VM) to read the network configuration changes.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix Files: Unable to access TLDs after rename. ### Assistant: Nutanix Files 4.3 has a regression causing TLDs to be unaccessible after renaming.. Nutanix Files 4.3 has a regression that causes TLDs to be inaccessible after renaming them.  This only applies to TLDs created on Nutanix Files versions pre-3.5.Below is the error the client will get when accessing the TLD.\n\n\\\\<File Server Name>\\<Share Name>\\<TLD Name> is unavailable. \nVerification:Run the below on any FSVM to verify that the FSVMs cannot locate the metadata for the TLD.  Replace <Name of TLD> with the actual name of the TLD\n\\\\<File Server Name>\\<Share Name>\\<TLD Name> is unavailable. . This issue has been resolved in Nutanix Files 4.3.0.1.  
Please upgrade to that version or later. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix Files - Recall request has failed. ### Assistant: This article describes an issue where recall request fails on a Nutanix Files cluster with tiering enabled after being upgraded to version 4.3. This results in files not being accessed from the s3 store.. The issue occurs on clusters with Tiering enabled after upgrading the Files cluster from ≥ 4.0 and < 4.3 to version 4.3. Recall request fails, resulting in files not being accessible from the s3 store.\n\nIdentifying the problem\n\nIn the \"Data Lens Dashboard\", under the Tasks, you see a failed \"Recall Request\".When you manually recall tiered data, the task either fails after a prolonged time or takes longer than usual to succeed.If the recall request is successful, the recalled file is partially full or cannot be opened due to an error.The base URL value of the tiering profile is prefixed with the bucket name as \"bucket_name.endpoint\":\n\tnutanix@FSVM$ afs tiering.list_profiles\n\t. Nutanix is aware of the issue and is working on a resolution. Contact Nutanix Support for immediate assistance.\n\nCollecting additional information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871.Run a complete NCC health_check on the cluster. See KB-2871.Collect Files related logs. For more information on Logbay, see KB-3094.\n\tCVM logs stored in ~/data/logs/minerva_cvm*NVM logs stored within the NVM at ~/data/logs/minerva*To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader as there were issues seen otherwise. To get the Minerva leader IP on the AOS cluster, run:\n\t\tnutanix@CVM$ afs info.get_leader\n\n\t\tOnce you are on the Minerva leader CVM, run the following:\n\n\t\tnutanix@CVM$ afs info.get_leader\n\n\t\tFor example:\n\n\t\tnutanix@CVM$ afs info.get_leader\n\t\t\n\t\n\nAttaching files to the caseTo attach files to the case, follow KB-1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Deploying Witness VM on an ESXi cluster version 6.5 or earlier fails with the error \"Selected OVA file is not a valid template\". ### Assistant: This article describes investigates an issue with deploying a Witness VM on ESXi 6.5 and earlier.. Deploying Witness VM on an ESXi cluster version 6.5 or earlier will fail with the below message: \nSelected OVA file is not a valid template.\nThe issue is hardware incompatibility, as the Witness VM ESXi deployment OVA file is built from the ESXi 6.7 image.The issue applies only to new Witness VM deployment on ESXi 6.5 or earlier using the OVA file and does not apply to upgrading the Witness VM. The below Witness VM releases are not compatible with deployment on ESXi 6.5 and earlier.5.20, 5.20.1, 5.15.7, 5.15.6, 6.0, 6.0.1, 5.19.2 . Witness VM is forward and backwards compatible with AOS STS and LTS releases.Deploy the 5.15.5 Witness VM build located under the \"Other Versions\" tab on the Witness VM downloads page. . 
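If it is unclear why a particular Witness VM OVA is rejected, one way to inspect the virtual hardware version it requires is sketched below (the file name is a placeholder; an OVA is a plain tar archive, so extracting it exposes the OVF descriptor, along with the larger disk image, which can then be searched for the \"vmx-\" hardware version string):

$ tar -xf witness_vm.ova
$ grep -i 'vmx-' *.ovf

An OVA built from an ESXi 6.7 image will typically reference a hardware version (for example, vmx-14) that an ESXi 6.5 or earlier host cannot deploy, whereas the 5.15.5 build referenced above is the one documented as compatible with those hosts.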
### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: inode_usage_check. ### Assistant: The NCC check inode_usage_check verifies whether the number of free inodes on CVMs is getting low.. The NCC check inode_usage_check verifies whether the number of free inodes on CVMs is getting low.\n\nAn inode contains information (metadata) about a file in a file system, including where the data is stored, file name, file permissions and so on. Each directory and file has a corresponding inode, in case of a big file size – more than one inode. A maximum number of inodes for each filesystem is defined at FS creation time and cannot be increased.\n\nThe inode_usage_check returns the following statuses:\n\nPASS if inode usage is at 75 percent or belowWARN if inode usage is between 75 and 90 percentFAIL if inode usage is above 90 percent\n\nRunning the NCC Check\n\nIt can be run as part of the complete NCC check by running:\n\nnutanix@cvm$ ncc health_checks run_all\n\nor individually as:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 5 minutes, by default.This check will generate an alert A1027 after 1 failure across scheduled intervals.\n\nSample output\n\nFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: WARN\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID1004DescriptionChecks if current inode usage is high.Causes of failureInode usage is high.ResolutionsReduce disk usage or replace disk.ImpactCluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert IDA1027Alert Smart TitleDisk Inode Usage High on Controller VM svm_ip_addressAlert TitleDisk Inode Usage HighAlert MessageInode usage for one or more disks on Controller VM svm_ip_address has exceeded inode_usage_threshold%.\n\n . The NCC health check inode_usage_check fails when one or more filesystems on disks is running out of free inodes or the overall cluster storage is running out of free inodes.\n\nEnsure that the inodes are free on the CVM (Controller VM) which is reported in the failure section of the NCC check:\n\nnutanix@cvm:~$ df -i\n\nExample output (note the IUse% column):\n\nnutanix@cvm:~$ df -i\n\nIf you observe that /home is indicating a high inode usage percentage, then determine which directory or directories have high inode usage using the following command:\n\nnutanix@cvm:~$ df -i\n\nNote: If you see any other directory (like /home/nutanix/data/stargate-storage/disks/<serial>) with high inode usage percentage, use the following command:\n\nnutanix@cvm:~$ df -i\n\nOnce the above outputs are collected, engage Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Space Accounting - Prism show cluster/container space usage different than total usage from all VMs on it.. ### Assistant: This article explains why space usage on the cluster/container level can be different from total disk usage from all User VMs.. 
For other Space Accounting issues/questions that are not covered on this page, please take a look at the Space Accounting | Nutanix Storage Usage Guide and FAQ article. When comparing space usage on the cluster/container level with space usage reported in total for all User VMs from this cluster/container, you will often find that these numbers do not match.\n\nScenario A: comparing cluster-level usage with usage from all User VMs.\n\nThe Prism UI VM view table shows all User VMs on the cluster with storage used/allocated for each VM.\n\nIt is possible to take the current usage numbers from all User VMs on the cluster and find the total usage number. But total usage from all User VMs almost always will not match the current usage on the cluster (on the storage pool level).\n\nHere is an example from a lab cluster with only 6 User VMs. One VM has only 1.61 GiB of usage (only guest OS installed), and the remaining 5 VMs have 196.69 GiB of storage usage each (usage was created by independently generating some data on the Guest OS level):\n\n\n\nIn total, all 6 VMs have storage usage of ~986.66 GiB of logical usage, which for an RF2 container should translate into 1.93 TiB of physical usage on the cluster/pool level.\n\nAt the same time, on the storage pool level (cluster-wide), usage is at ~1.16 TiB of physical usage.\n\nnutanix@cvm$ ncli sp ls | tail -3\n\nThis is significantly less than expected for the usage on the VMs/VGs level.\n\nScenario B: comparing container-level space usage with usage from all User VMs.\n\nIt is possible to compare usage on the container level with total usage from all User VMs/VGs on this container.\n\nHere is an example from the same cluster as in scenario A, where we have a set of Data VMs with an identical amount of data (~196 GiB) written on the Guest OS level. Here, usage for the highlighted container named “ctr1_compression” is 25 GiB. But at the same time, on the “Breakdown” view for this container, we can see there is only one VM with two disks and total usage of ~196 GiB.\n\n\n\nIf a container has space-saving (data reduction) features enabled, then actual usage on the container level will be noticeably lower than the logical usage on the VM level. \n\nDetails on why total logical usage from all User VMs/VGs does not always match cluster/container usage numbers. The comparison scenarios described above will often show differences in space usage on the cluster/container level compared to the total usage reported on the VMs/VGs level. This difference can be explained by the fact that logical usage on the VM/VG level does not take into account several factors: Space savings (or Data Reduction), Snapshots, Clones, and system garbage. The table below has some more details that explain how each factor can affect space usage differences reported on different levels (cluster/container vs. 
VM/VG level.Space savings (or data reduction) features are enabled on containers.\n\t\t\tSavings from Compression, Deduplication, and Erasure Coding are only considered on the cluster/container level but not on VM/VG logical level.\t\t\tContainer logical usage reported in Prism is obtained from physical usage related to this container, divided by the replication factor of that container.\t\t\tThe logical usage on a VM/VG level show usage (amount of written data) on virtual disks without considering replication factor or space-saving techniques, or the real amount of data stored on the physical disks in nodes.\n\n\t\t\tSo, due to savings, you can see less usage reported on the cluster/container level compared with VM/VG level.\n\t\t\tSnapshots\t\t\t(any snapshots on the Nutanix storage layer).\n\t\t\tThe usage form snapshots are not reported/reflected in the main VM/VG view or on the container breakdown view in the Prism Element UI. \t\t\tBut snapshot usage is part of cluster/container usage.\t\t\t(any snapshots on the Nutanix storage layer).\n\n\t\t\tDue to snapshots-related usage, you can see cluster/container usage numbers are higher than total VM/VG usage numbers (multiplied by RF factor for case of comparison with storage pool usage).\n\t\t\tClones\n\t\t\tThe Cloned VM/VG will report the same logical usage as the original VM, but the shared amount of data is stored only once and not multiplied on the physical layer as a result of each cloning operation.\n\n\t\t\tSo, due to shared usage from cloning – usage numbers on Cluster/Container level will be less than you can get in total on the VM/VG logical layer.\n\n\t\t\tExample:\t\t\tOne VM with 1 TiB usage was cloned 5 times. So, in Prism on VM view (or in container breakdown view), you will see 6 VMs, each with 1 TiB logical usage. But on the container level, you will see logical usage of 1 TiB (initial usage of the original VM). And only if/when each cloned VM completely overwrites shared data (so nothing is shared) – container logical usage will match with total VM logical usage (if no other factors are in play).\n\t\t\tSystem garbage/overhead.    \t\t\tThere are different types of garbage can exist in a cluster.\t\t\t\t\t\tWhen some data is deleted from the cluster/container (it can be a whole VM, VM disk, snapshot, or it is just some data/files deleted by a user inside Guest OS) – space is not immediately returned as free. Nutanix cluster usually needs some time to reclaim deleted data and return it as free space. The time required for the reclamation depends on the amount of data to be reclaimed and which data reduction techniques are in use.\t\t\tAnd all deleted but not reclaimed yet data is considered garbage.\t\t\t\t\t\tAlso, even if there were no manual actions taken by the user to remove something from the cluster (like VMs/Disks/Snapshots removed, or some data deleted from the Guest OS layer) – the Nutanix cluster is still expected to have some amount of garbage generated due to data overwrites which happens normally during UVMs/Applications work (it can be generated also due to old DR snapshots automatically expiring).\t\t\t\t\t\tThe amount of garbage is heavily dependent on workload patterns in User VMs and on storage container configuration. 
On heavily loaded clusters the amount of garbage often can reach 5-10% of total usage, and it usually fluctuates in accordance with workload changes (depending on the time of the day for example) and with automatic cleanup/reclamation that happens on a cluster periodically (driven by Curator service).\t\t\t\t\t\tGarbage usage is only accounted for on a Cluster/Container usage layer, and it is not considered on the VM/VG logical usage layer.\t\t\t\t\t\tSo, due to some amount of garbage present in any Cluster, it is expected to see usage on Cluster/Container level to be higher than expected VM/VG logical usage (if there are no other factors in play here).\n\nNote: Remember that all these factors are typically in play simultaneously (if all of them are in use/enabled), and often they can compensate for each other. For example, additional usage from snapshots can be partially/fully compensated by savings from Erasure Coding. Ultimately, logical usage on the container will match total logical usage from VMs/VGs.. Main recommendation: total logical usage from VM/VG level should not be directly compared with usage on cluster/container level without considering all factors mentioned in the description section of this KB.If you believe that cluster/container usage is not correct for the amount of data in User VMs/VGs - then each mentioned factor should be deeply analyzed to understand where the difference in usage comes from.It is recommended to start the analysis process by checking:\n\nThe Storage Overview view in Prism Element to see the data reduction ratio, overall efficiency ratio, amount of savings, and other details.Prism Central - your cluster in question - Capacity Tab: Storage View - this can give a quick overview of current and historical live/snapshot/garbage usage numbers on a cluster or container level.KB Space Accounting | Are the space-saving settings on my containers configured correctly - to see best practices and how to check your current configuration and the amount of savings.KB Space Accounting | Identifying Snapshots on a Nutanix Cluster to learn how to identify snapshots in a cluster and find their reclaimable space.More details about the implementation of different data reduction and data avoidance techniques can be found in the Data Efficiency tech note.\n\nFor other Space Accounting issues/questions that are not covered on this page, take a look at the Space Accounting | Nutanix Storage Usage Guide and FAQ article.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: esx_product_locker_setting_check. ### Assistant: The NCC health check esx_product_locker_setting_check checks for ESXi ProductLocker settings.. The NCC health check esx_product_locker_setting_check checks for ESXi ProductLocker settings. This check will generate a FAIL if the host contains a ProductLocker symlinked to any container.This check was introduced in NCC 4.1.0.\n\nRunning NCC Check\n\nRun this check as part of the complete NCC Health Checks\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr run this check separately:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. 
Select All checks and click Run.\n\nThis check is scheduled to run every day\nSample outputFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\nFor Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\nFor Status: INFO\n\nnutanix@cvm$ ncc health_checks run_all\n \n\nOutput messaging\n\nCheck ID106420DescriptionCheck ESXi ProductLocker settingsCauses of failureThe host contains a ProductLocker location that is not properly configuredResolutionsManually configure the ProductLocker location. Review KB 10204.ImpactCVMs appear to be stunned or frozen periodically on an ESXi cluster, potentially causing instability of cluster services and VMs. This issue can affect multiple CVMs at once, causing a complete cluster outage.. Open SSH connection to one of the CVMs as \"nutanix\" user and execute the below command to identify the /productLocker symlink:\n\n[nutanix@CVM] hostssh 'ls -latr / | grep ^l'\nExample output:\n\n[nutanix@CVM] hostssh 'ls -latr / | grep ^l'\n\nMap the container:\n\n[nutanix@CVM] hostssh 'ls -latr / | grep ^l'\n\nIn the above example, we can see the /productLocker symlink is pointing to a non-local datastore. You may see the following ESXi setting changed to the path seen below in the String Value line:\n\n[nutanix@CVM] hostssh 'ls -latr / | grep ^l'\nNote: If the productLocker String Value points to a Metro Protected datastore, engage Nutanix Support for remediation. If Metro Availability is disabled, do not re-enable and engage Nutanix support immediately. Re-enabling Metro in this state can cause CVMs to lockup and restart, leading to a potential outage.In order to resolve this issue, reconfigured ESXi hosts to point productLocker back to local SATADOM (default config). For more information on this, you can refer to the below VMWare article. Ensure the change is effective.\n\nIn case the above-mentioned steps do not resolve the issue, engage Nutanix Support, gather the output of \"ncc health_checks run_all\" and attach it to the support case. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about ESXi - Rename an ESXi host in a Nutanix Environment. ### Assistant: This article provides a proper way to rename an ESXi host, especially after running a bare-metal Foundation install. This also includes non-configurable components which needs to be checked before changing any aspect on Esxi hosts.. For changing the hostname of an ESXi host, as far as Nutanix is concerned, renaming a host is an instant and secure procedure, and can be done by following VMware instructions, from the below VMware Article: Changing the name of an ESX or ESXi host (1010821).\nThe following ESXi settings should NOT be manually changed as some of them are non-configurable config and others have dependency on Esxi functionality (Please involve VMware technical Support in case there are any concerns on these components):\n\nNFS  datastore settingsVM swapfile locationVM startup/shutdown orderCVM NameCVM virtual hardware configuration file (.vmx file)Each AOS version and upgrade includes a specific CVM virtual hardware configuration. 
Therefore, do not edit or otherwise modify the CVM virtual hardware configuration file.iSCSI software adapter settingsvSwitchNutanix standard virtual switchvmk0 interface in port group \"Management Network\"vSwitchNutanix standard virtual switchSSH enabledHost firewall ports that are open\n\nFor a complete information please refer to the Non-configurable Esxi Components outlined in Non-Configurable ESXi Components.. For reference, you can check the below steps to change the hostname:\n\nLog in to the ESXi host to rename through SSH or DCUI. Display current hostname:\n\troot@esxi# hostname\n\tRun the following command to change the hostname:\n\troot@esxi# hostname\n\tRun the following command to set the fully qualified domain name. Note that --host and --fqdn are mutually exclusive.\n\troot@esxi# hostname\n\tDisplay the new hostname and fqdn:\n\troot@esxi# hostname\n\tTo clear the domain name and leave only the hostname:\n\troot@esxi# hostname\n\n\troot@esxi# hostname\n\tDepending on the ESXi version, changes might be reflected automatically or imply a service restart. \n\t​​​ESXi 5.1 / ESXi 5.5: After changing the hostname, execute the following command in the ESXi host. Prism will reflect the changes after about 30 seconds. Be aware that restarting hostd is a mildly disruptive operation.\n\t\troot@esxi# hostname\n\t\t​​​ESXi 6.x: Prism will reflect the changes automatically without restarting any service. \n\tThere is a known limitation with the NFSv3 implementation in VMware. The NFS lock files (.lck) in the VM directory will have the name of the previously named host no matter if existing, moved via VMotion or newly created VMs. This limitation only gets fixed by rebooting the ESXi host. We recommend rebooting the host after a host rename due to the fact that troubleshooting as well as log analysis can be more difficult. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: cfs_fatal_check. ### Assistant: NCC health check cfs_fatal_check is a check to determine if the CFS process is in a stable state.. The NCC health check cfs_fatal_check  is a check to determine if the CFS (Collector Framework Service) process is stable. The CFS process resides under the cluster_health service and sends Pulse data to Insights. The check will trigger if the CFS process has restarted at least 4 times in the past 2 hours. Until the CFS process has stabilized, sending remote support/pulse Insights data and proactive support may be delayed.Before running this check, upgrade NCC to the latest version. This check was introduced in NCC 4.6.1.Running the NCC CheckYou can run this check as part of the complete NCC Health Checks.\nnutanix@cvm$ ncc health_checks run_all\nOr you can run this check separately.\n\nnutanix@cvm$ ncc health_checks run_all\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 7200 seconds.This check will generate the CFS process that is not in a stable state alert.\n\nSample Outputs\n\nFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: Warning\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID140005DescriptionThis is a check to determine if the CFS process is stable. The CFS process resides under the cluster_health service and sends Pulse data to Insights. 
The check will trigger if the CFS process has restarted at least 4 times in the past 2 hours.Causes of failureThe CFS process could repeatedly restart if it hits an unrecoverable error, or the CFS process fails to perform a task dependent on the filesystem or zookeeper process, or if it is killed by Out Of Memory (OOM) killer.ResolutionsThe CFS process sits under the cluster_health service on the Controller VM. Verify that the cluster_health service on the affected node is running. Check the cfs.out and cluster_health service logs for more details about the cause of the crash.ImpactInsights data and proactive support can be delayed until the CFS process has stabilized.. NOTE: cfs_fatal_check may show WARN when Pulse was disabled, this is a false positive and has been fixed with NCC-4.6.3. Upgrade to the latest NCC version for a permanent fix of this issue. The CFS process could repeatedly restart if it hits an unrecoverable error, or CFS process fails to perform a task dependent on the filesystem or zookeeper process, or if it is killed by out of memory (OOM) killer.\nVerify that the CFS service is running on the cluster:\n\nnutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == \"/home/nutanix/ncc/bin/nusights/cfs\" { print $0 }'\n\nCheck if the CFS service has recently crashed. In the below command, the CFS process has been running for 2 days, 6 hours, 2 minutes and 33 seconds.\n\nnutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == \"/home/nutanix/ncc/bin/nusights/cfs\" { print $0 }'\n\nVerify the cluster_health service is running and check for recent fatals in the health_server.log. Note that a large number of PIDS next to the cluster_health service can indicate that the CFS process has been restarting:\n\nnutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == \"/home/nutanix/ncc/bin/nusights/cfs\" { print $0 }'\n\nRestart the cluster_health service to attempt to stabilize the CFS process:\n\nnutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == \"/home/nutanix/ncc/bin/nusights/cfs\" { print $0 }'\n\nMonitor the stability of the CFS process by re-running this NCC check or via steps 1 and 2.  \nIf the above steps do not resolve the issue, consider engaging Nutanix Support. Additionally, gather the following command output and attach it to the support case:  \n\nnutanix@CVM:~$ ps aux | grep /home/nutanix/ncc/bin/nusights/cfs | grep -v grep | awk '$11 == \"/home/nutanix/ncc/bin/nusights/cfs\" { print $0 }'. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: check_license_compliance, weekly_pre_expiry, daily_pre_expiry. ### Assistant: The NCC checks check_license_compliance, weekly_pre_expiry, daily_pre_expiry verifies whether the cluster license is valid or is about to expire.. 
The NCC Health Check check_license_compliance reads the license-related information and verifies if the license on all the nodes is valid.\n\nThis check performs the following:\n\nChecks if the license is about to expire.Checks license feature compliance.Checks license standby mode.Checks if all nodes have valid licenses.Checks for license capacity.Checks for Nutanix Calm license violation.Checks if Nutanix Calm license is about to expire.Checks if Nutanix Objects license is about to expire.\n\nNCC Health Checks weekly_pre_expiry and daily_pre_expiry are added in NCC 4.6.5 and verify if installed licenses are about to expire.\n\nThe check behavior depends on the number of days before license expiration:\n\n12 to 6 weeks to license expiration, an Info alert is generated weekly by the weekly_pre_expiry check.6 to 2 weeks to license expiration, a Warning alert is generated weekly by the weekly_pre_expiry check.2 weeks before license expiration, a Critical alert is generated daily by the daily_pre_expiry check.\n\nRunning the NCC Check\n\nYou can run this check as part of the complete NCC Health Checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr run the checks separately:\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nSample output \n\nFor status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: INFO\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: ERR\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\nFor status: WARN (From NCC-4.1.0 onwards)\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID14000DescriptionCheck if the license is about to expire.Causes of failureThe license is set to expire or has expired.ResolutionsContact Nutanix to extend the expiration of the license or apply a new license.ImpactThe cluster will be out of licensing compliance.Alert IDA1075Alert TitleLicense ExpiryAlert MessageDetailed license expiry info: license_expiry_msg.ScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 1 failure.\n\nCheck ID14001DescriptionCheck license feature compliance.Causes of failureThe cluster was configured to use features not allowed by the license.ResolutionsApply a license with the necessary entitlement or stop using the unlicensed features.ImpactThe cluster is out of licensing compliance.Alert IDA1077Alert TitleLicense Feature ViolationAlert MessageFeatures not permitted by license(s) applied are being used. 
Non-compliant features: feature_violation_listScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 1 failure.\n\nCheck ID14002DescriptionCheck license standby mode.Causes of failureThe license file has not been applied after cluster summary file generation.ResolutionsComplete the licensing workflow.ImpactThe cluster is out of licensing compliance.Alert IDA1085Alert TitleLicensing workflow is incompleteAlert MessageLicensing process for this cluster is incomplete.ScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 4 consecutive failures across scheduled intervals.\n\nCheck ID14003DescriptionCheck if all nodes have valid licenses.Causes of failureNodes in the cluster have different types of licenses.ResolutionsContact Nutanix to extend the expiration of the license or apply a new license.ImpactThe cluster is out of licensing compliance.Alert IDA1086Alert TitleLicense Node/Core InvalidAlert MessageCluster has nodes/cores which don't have sufficient valid licenses: license_insufficiency_listScheduleThis check is scheduled to run every day.Number of failures to alertThis check will generate an alert after 1 failure.\n\nCheck ID14004DescriptionCheck for license capacityCauses of failureThe license capacity for the cluster is non-compliant.ResolutionsApply a new license.ImpactThe cluster is out of licensing compliance.Alert IDA14004Alert TitleLicense InvalidAlert MessageLicensed capacity is less than cluster capacity. license_capacity_msgScheduleThis check is scheduled to run every day.Number of failures to alertThis check will generate an alert after 1 failure.\n\nCheck ID14005DescriptionNutanix Calm License ViolationCauses of failureThe total number of VMs and containers managed by Calm exceeds the licenses applied.ResolutionsPlease upgrade your Calm license with additional license packs. If you cannot resolve the issue, please contact Nutanix support.ImpactThe cluster is out of licensing compliance.Alert IDA14005Alert TitleNutanix Calm License ViolationAlert MessageNutanix Calm usage is above the licensed count by 'more_required_packs' license pack(s).ScheduleBefore NCC 4.6.5: this check is scheduled to run every 30 minutes.\t\t\tNCC 4.6.5 and newer: this check is scheduled to run every 60 minutes.Number of failures to alertThis check will generate an alert after 2 consecutive failures across scheduled intervals.\n\nCheck ID14006DescriptionCheck if Calm license is about to expire.Causes of failureThe Calm license is set to expire or has expired.ResolutionsContact Nutanix to extend the expiration of the Calm license, or apply a new Calm license.ImpactThe Calm instance will be out of licensing compliance.Alert IDA14006Alert TitleCalm License ExpiryAlert MessageThis cluster_type has license_type license with expiry date of license_expiry_date. The calm license expiry_str.ScheduleBefore NCC 4.6.5: this check is scheduled to run every 30 minutes.\t\t\tNCC 4.6.5 and newer: this check is scheduled to run every 60 minutes.Number of failures to alertThis check will generate an alert after 1 failure.\n\nCheck ID14007DescriptionNutanix Calm Trial License ExpiredCauses of failureNutanix Calm Trial License of 60 days is set to expire or has expired.ResolutionsPlease upgrade your Calm license with additional license packs. 
If you cannot resolve the issue, please contact Nutanix support.ImpactThe cluster is out of licensing compliance.Alert IDA14007Alert TitleCalm Trial License ExpiryAlert MessageNutanix Calm 60 days Trial License expiry_remaining_daysScheduleBefore NCC 4.6.5: this check is scheduled to run every 30 minutes.\t\t\tNCC 4.6.5 and newer: this check is scheduled to run every 60 minutes.Number of failures to alertThis check will generate an alert after 1 failure.\n\nCheck ID14008DescriptionCheck if licenses are about to expire.Causes of failureThe license is set to expire or has expired.ResolutionsContact Nutanix to extend the expiration of the license, or to apply a new license.ImpactThe Prism Central instance will be out of licensing compliance.Alert IDA14008Alert TitlePC License ExpiryAlert MessageDetailed license expiry info: license_expiry_msgScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 1 failure.\n \n\nCheck ID14009DescriptionCheck if Storage licenses are in capacity compliance.Causes of failureTotal storage usage exceeds the license applied.ResolutionsPlease upgrade your license with additional capacity. If you cannot resolve the issue, please contact Nutanix support.ImpactThe Prism Central instance will be out of licensing compliance.Alert IDA14009Alert TitleStorage License OvershootAlert MessageStorage usage is out of compliance. Detailed info: license_insufficiency_listScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 1 failure.\n\nCheck ID14010DescriptionCheck if User-VM-based licenses are in capacity compliance.Causes of failureTotal number of user VMs exceeds the licensed quantity.ResolutionsUpgrade the license with additional capacity or reduce the number of User-VMs in use. If this does not resolve the issue, contact Nutanix support.ImpactThe Prism Element cluster is out of licensing compliance.Alert IDA14010Alert TitleUsers-based License OvershootAlert MessageTotal number of users is out of capacity compliance. Detailed info: license_insufficiency_listScheduleThis check is scheduled to run every hour.Number of failures to alertThis check will generate an alert after 1 failure.. Apply a valid cluster license to resolve this alert. Refer to License Manager Guide for more information about license management.If a newly purchased node was recently added to the cluster and the NCC check reports an INFO status with a message similar to the following, a new license file needs to be re-generated.\n\nINFO: Cluster has nodes which don't have Pro license. List of nodes with invalid license [u'No license for 1 nodes of model NX-3060-G4']\n\nTo re-generate a new license file, see the Nutanix Licensing Guide.\n\nYou can also disable the Prism Pro features mentioned in the Licensing Guide.\n\nTo show the feature list for your license, see Displaying License Features and Details.\n\nIf the check reports an ERR message, or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support.Known issues:For Check ID 14010, you may see instances where the check reports on the 'number of users' instead of the 'number of User-VMs.' The formula used to calculate VM license utilization is as follows:\n\nINFO: Cluster has nodes which don't have Pro license. List of nodes with invalid license [u'No license for 1 nodes of model NX-3060-G4']. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Genesis crashing \"No vmknic exists in same subnet as eth0/eth2\". ### Assistant: This article describes an issue where genesis is crashing with the message \"No vmknic exists in same subnet as eth0/eth2.\". While adding a new node to a cluster, the node is not being discovered because genesis is crashing and Foundation is not starting.\n\ngenesis.out log shows:\n\nnutanix@CVM~$ less ~/data/logs/genesis.out. There are two possible reasons for the issue:\n\nThe CVM eth0 has an APIPA address. Modify /etc/sysconfig/network-scripts/ifcfg-eth0 and assign a static IP address, gateway and network mask. Change BOOTPROTO=\"dhcp\" to BOOTPROTO=\"none\".\n\n\tHere is an example of an ifcfg-eth0 output:\n\n\tnutanix@CVM~$ vi /etc/sysconfig/network-scripts/ifcfg-eth0\n\tHost vmk port has APIPA address assigned. The Host vmknic IP configuration can be checked with esxcfg-vmknic -l. Assign a static IP either through IPMI/DCUI or if there is an ssh connection through the CVM using the command below:\n\tnutanix@CVM~$ vi /etc/sysconfig/network-scripts/ifcfg-eth0\n\tAfter making either of the changes above, reboot the CVM:\n\tnutanix@CVM~$ vi /etc/sysconfig/network-scripts/ifcfg-eth0\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Expand Cluster workflow fails on \"Imaging of node\" task. ### Assistant: Expanding an ESXi cluster with a node that requires uploading an ESXi iso for imaging may fail if the version of ESXi requested is not compatible with the AOS version running in the cluster.. Expand cluster workflow fails for an ESXi cluster when uploading an ESXi iso to reimage the new node.Clicking through Details hyperlinks of the \"Node preparation for expand cluster\" task, the task fails with \"Imaging of node <new_node_ip> failed. Status: fatal: Running validations\".Using allssh 'ls -larth ~/data/logs/foundation/2023*/' (replacing \"2023\" with the current year), we can locate node_<new_node_ip>.log.\nnutanix@NTNX-CVM:~$ allssh 'ls -larth ~/data/logs/foundation/2023*/'\nnode_<new_node_ip>.log on the previously identified CVM shows the following StandardError. (NOTE: md5sum and versions mentioned can differ)\n\nnutanix@NTNX-CVM:~$ allssh 'ls -larth ~/data/logs/foundation/2023*/'\n . Upgrade the cluster to the minimum AOS version mentioned in the error message, or use a lower version of ESXi and then upgrade the ESXi version outside of Prism Element UI to match the other hosts.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: ofpfmfc_table_full_check. ### Assistant: NCC 3.9.0. The NCC health check ofpfmfc_table_full_check checks the number of rules configured in Open vSwitch (OVS) tables.. 
The NCC health check ofpfmfc_table_full_check​ checks the number of rules configured in Open vSwitch (OVS) tables and alerts if this number is approaching the limit.\n\nFor every NIC on every user VM, the acropolis service configures multiple rules inside Open vSwitch (OVS) tables.\n\nIf the number of configured rules is more than 80% of the limit, then a warning alert is generated.If the number of configured rules is more than 90% of the limit, then a critical alert is generated.\n\nIf the maximum number of rules is reached, then user VMs will not be able to communicate over the network.\n\nThis check only applies to Prism Element hosts.\n\nRunning the NCC Check\n\nYou can run this check as part of the complete NCC Health Checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr you can run this check individually:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: Select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every hour, by default.\n\nThis check generates an alert after 1 failure and will self-resolve if the number of rules decreases below the threshold.\n\nSample output\n\nFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: ERR\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID200607DescriptionChecks that number of OVS rules is not reaching maximum number of rulesCauses of failureOpenFlow table is getting fullResolutionsConsider turning off unnecessary user VMs or remove unused vNICs from running user VMs on AHV host. For higher number of allowed OVS rules, ensure AOS version at least 5.10.5 and above on cluster.ImpactWhen number of OVS rules will reach maximum, some VMs may lose network connectivity.Alert IDA200607Alert TitleOpenflow Table Getting Full.Alert MessageNumber of OVS rules is num_flow_table_rules for bridge bridge_name on AHV host host_ip. This is more than usage_msg% of maximum allowed number of rules.. The \"flow_table_limit\" has been increased from 16k to 32k in AOS 5.20 which is bundled with AHV 20201105.2030. If you are running a lower version then consider upgrading AOS and AHV accordingly to avoid hitting the 16k limit in earlier versions.If you are already running AOS 5.20 and AHV 20201105.2030 or newer, and this check returns WARN or FAIL, consider powering off some user VMs or moving them to a different AHV host.\n\nScenarios that may trigger the flow table to fill up. There are currently 3 known Scenarios that may trigger the OpenFlow Table to have more entries than expected:\n\nBroadcast storm or switching loop in the environment.Trunk port that is allowing all VLANs instead of only VLANs needed in the Nutanix environment.A Cisco ACI environment running in Application Centric Mode using \"Flood in Bridge Domain\" instead of \"Flood in Encapsulation\"\nTroubleshootingRun the following commands on the AHV host with alerts. The counts from these commands will not be the same but they should be within a few hundred of each other.  First, collect a list of bridges configured in OVS:\n\nroot@AHV# ovs-vsctl list-br\nSample output:\n\nroot@AHV# ovs-vsctl list-br\n For each switch run the command:\n\nroot@AHV# ovs-vsctl list-br\nBased on this output:\n\nDoes the number of flows from br.dmx come close to the total number of flows on br0.local + any other vswitch you may have? If \"yes\" you are most likely hitting a broadcast storm in your environment. 
Please work with your networking team to rule out a broadcast storm or loop within the NetworkIf br.dmx reports a number MUCH higher than br0.local + other vSwitches you are most likely hitting scenarios 2 or 3.\n If you are hitting Scenario 2 or 3 please do the following\n\nVerify that VLANs are pruned to the host by adding a \"switchport trunk allowed vlan\" statement to the switch ports that only allows VLANs to the AHV host that are needed.If running ACI please engage Cisco to verify if running Flood in Encapsulation will work in your environment and use this instead of Flood in the Bridge domain.If neither of these resolves your issues please engage Nutanix Support to assist with increasing Openflow flow limits.\n  \n\nKnown issues\n\nThe check may fail with the following error if bridge chaining is disabled:\n\nroot@AHV# ovs-vsctl list-br\n\nThis issue is resolved in:\n\nAOS 5.15.X family (LTS): AOS 5.15.2AOS 5.18.X family (STS): AOS 5.18\n\nPlease upgrade AOS to versions specified above or newer.\n\nIt is not recommended to disable bridge chaining. Disabling bridge chaining should only be used for troubleshooting purposes and once troubleshooting is completed, bridge chaining should be re-enabled.\n\nRun the following command on affected AHV hosts to re-enable bridge chaining:\n\nroot@AHV# ovs-vsctl list-br\n\nAlso restart acropolis on the same node to make sure the change is made aware to other nodes.\n\nIn case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Ubuntu loads to a blank screen instead of login prompt from VM console.. ### Assistant: VM console shows black screen for Ubuntu VM, despite being able to SSH to the VM.. Ubuntu 16.04 with Kernel version 4.4.0-116 has a known issue that loads OS into a black screen instead of a login prompt (i.e., no display in Prism VM console), while SSH or OS operations are working fine.\n\nroot@vm-0:~# uname -a\n\nroot@vm-0:~# uname -a. This is a known issue in that kernel release. Upgrade the OS/Kernel to 16.04.5 and above to resolve the issue.As a workaround, use (Alt+F1) to change to tty1 to get to the login prompt. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about VirtIO 1.1.6 or newer installation on Windows 7, Windows Server 2008 R2. ### Assistant: Windows 7 and Windows 2008R2 require SHA2 support for Nutanix VirtIO 1.1.6 to be installed. Installation of VirtIO 1.1.6 or newer may fail on Windows 7 and/or Windows Server 2008R2 with the below error message: \n[Window Title]\nOr if you install Windows 7 or Windows Server 2008R2 from Nutanix VirtIO 1.1.6 ISO: \n[Window Title]. Microsoft deprecated signing of kernel-mode drivers with SHA1 certificates in late 2019, hence as of VirtIO 1.1.6, all drivers are signed with SHA2 certificate only. Windows 7 and Windows Server 2008R2 do not have SHA2 support out of the box.To install Nutanix VirtIO 1.1.6, please make sure that Microsoft KB 3033929 is installed or the operating system is fully updated with Windows Update. For fresh deployments of Windows 7 or Windows Server 2008 R2 from Windows installation ISO on AHV clusters use Nutanix VirtIO 1.1.5. 
Once OS is installed, please install a fix from Microsoft KB 3033929 and then update VirtIO to 1.1.6.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about USB passthrough support on Nutanix platforms. ### Assistant: Does Nutanix support USB passthrough?. Does Nutanix support USB passthrough?. Nutanix does not support USB passthrough on all platforms/hypervisors. There are no plans to support USB passthrough on Nutanix platforms due to security concerns and due to the fact that it affects features such as live migration, HA, 1-Click upgrades and more.\n\nAlternative\n\nIf you need a guest VM to access USB devices, using AnywhereUSB is a valid solution. This solution uses a USB hub accessible over TCP/IP. A driver is installed on the VM and it emulates a USB device connected to the hub as if it was directly attached to the VM. This method is well-known to work and does not affect the features mentioned above. Please see the Digi site for further details: www.digi.com/products/usb-and-serial-connectivity/usb-over-ip-hubs/anywhereusb. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about \"Host restore VM locality” task disappears in Prism if the VM migrations fail. ### Assistant: “Host restore VM locality” task may disappear in Prism if the VMs migrations fail. This may be observed if HA is enabled on the cluster and the cluster's memory usage is high.. When the AHV host is restarted, the AOS automatically creates a \"Host restore VM locality” task to migrate VMs back to the host where they were initially running. In some cases, you may notice that such a task disappears from the Tasks page in the Prism UI.. Restore VM locality task is run on the best-effort basis as we cannot guarantee that all VMs can be migrated to the original host once it is recovered.It is by design behavior that the \"Host restore VM locality” task is hidden on the Tasks page in the Prism UI if we cannot restore locality for all VMs. If locality was restored for all VMs then \"Host restore VM locality” tasks will continue to be shown on the Tasks page.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check. ### Assistant: The NCC health checks aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check check the cluster for dangling third-party backup snapshots that may be present for a protection domain longer than intended/specified.. The NCC health checks aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check check the cluster for dangling third-party backup snapshots that may be present for a protection domain. If a third-party backup snapshot has existed in the system for longer than the specified period, this check will raise an alert. 
These backup snapshots are snapshots that are generated from third-party backups (Veeam, Commvault, Rubrik, HYCU, etc.).\n\nRunning the NCC Check\n\nRun this check as part of the complete NCC health checks.\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr run the check separately.\n\nThird-Party backup snapshot check:\n\n\tnutanix@cvm$ ncc health_checks run_all\n\tEntity Centric Specific check:\n\tnutanix@cvm$ ncc health_checks run_all\n\t\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every day, by default.\n\nSample Output\n\nFor status: PASS\n\nThird-Party backup snapshot check:\n\nnutanix@cvm$ ncc health_checks run_all\n\nEntity Centric Check:\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: FAIL\n\nThird-Party backup snapshot check:\n\nnutanix@cvm$ ncc health_checks run_all\n\nEntity Centric Specific check:\n\nnutanix@cvm$ ncc health_checks run_all\n\nStarting AOS 5.15.3 and 5.18.1 and above, backup snapshots created without an expiration time are assigned a default 60 days expiration.From NCC 3.10.1 and higher on AOS version 5.15.3 and 5.18.1 and above, the check will only fail if the snapshots have an infinite expiration (ex 2086) and have exceeded the threshold set for the alert (default 7 days). Output messaging\n\nCheck ID110250DescriptionCheck for aged third-party backup snapshots.Causes of failureThird-party backup snapshots are present in the cluster longer than the configured threshold.ResolutionsContact Nutanix support.ImpactAged snapshots may unnecessarily consume storage space in the cluster.Alert IDA110250Alert TitleAged third-party backup snapshots presentAlert Messagepd_name has num_snapshot aged third-party backup snapshot(s) and they may unnecessarily consume storage space in the cluster.\n\nCheck ID110263DescriptionCheck for the old entity-centric third-party backup snapshots.Causes of failureEntity-centric third-party backup snapshots are present in the cluster longer than the configured threshold.ResolutionsThe Recovery Point for the below VM needs to be deleted from Recoverable Entities in Prism Central. If the issue persists, contact Nutanix support.ImpactOld snapshots may unnecessarily consume storage space in the cluster.Alert IDA110263Alert TitleOld Entity-Centric Third-Party Backup Snapshots PresentAlert MessageVM vm_name has num_snapshot old third-party backup snapshot(s) and may unnecessarily consume storage space in the cluster.. The check passes if third-party backup snapshots are not present in the cluster longer than the configured threshold. The default threshold is 7 days, but this threshold value can be modified. The threshold can range from 1 day to 365 days.\n\nAdministrators may specify how long a third-party snapshot is expected to be present in the cluster through Prism UI. This check is based on the backup schedule settings. A snapshot is created every time the third-party backup job runs - and the previous snapshot is deleted once the backup job completes. Essentially, there should be 1 snapshot per VM at all times unless otherwise is specified within the backup software. The only other time there will be multiple snapshots for a VM is if that VM is configured within multiple backup jobs.\n\nIf the check fails, verify that the threshold set for the check matches the backup schedule and/or retention for the snapshots that are linked to the protection domain. 
If there are multiple backup jobs linked to the protection domain, then the threshold for this check should be equal to the longest schedule.\n\nFor example, if there are 4 backup jobs linked to the protection domain:\n\nBackup Job 1 runs dailyBackup Job 2 runs once a weekBackup Job 3 runs once every 2 weeksBackup Job 4 runs only once a month\n\nThe threshold for the check should be set to 32 days. If the threshold is set correctly but the check is still failing, it is recommended to reach out to the third-party backup vendor to ensure the DELETE API calls are being sent. If the snapshots are still present after consulting with the backup vendor, consider engaging Nutanix Support at https://portal.nutanix.com.\n\nAged snapshots from third-party backups can cause unnecessary capacity utilization on the cluster. Nutanix recommends verifying the threshold is consistent with the expected snapshot time frame within the backup software.\n\nTo change the threshold for \"Aged third-party backup snapshots present\":\n\nLog in to Prism ElementFrom the Home drop-down menu, select \"Alerts\"In Alerts, click on \"Configure\" and \"Alert Policy\"Search for alert ID \"A110250\" or \"Aged third-party backup snapshots present\" and click the update policy actionConfigure the alert threshold to be consistent with the retention policy of backup software you are using then click Save\n\nEntity-Centric Snapshots:\n\nThe workflow for these snapshots is through Prism Central NOT Prism ElementWorkflow to find entity-centric snapshots. Prism Central UI -> Virtual Infrastructure -> VM Recovery Points -> Select the VM\n\nTo change the threshold for \"Old Entity-Centric Third-Party Backup Snapshots Present\":\n\nLog in to Prism Element where the alert is observedFrom the Home drop-down menu select \"Settings\"In Settings click \"Alert Policy\"Search for alert ID \"A110263\" or \"Old Entity-Centric Third-Party Backup Snapshots Present\" and click the update policy actionConfigure the alert threshold to be consistent with the retention policy of backup software you are using then click \"Save\"\n\nNOTE: These checks could also be failing if a VM that was once being backed up via the backup software has been removed from the backup configuration and/or removed from the cluster. It is up to the backup vendor to send the DELETE API call for the snapshot(s) once a VM has been removed from the backup configuration. If you suspect that snapshots are being left on the system due to a VM being removed from the configuration, engage the backup vendor for further assistance.\n\nDefault expiration in AOS 5.15.3, 5.18.1 and later\n\nStarting AOS 5.15.3 and 5.18.1, backup snapshots created without an expiration time are assigned a default 60 days expiration. Upgrade AOS to 5.15.3, 5.18.1 or later to prevent this issue in the future. However, upgrading a cluster running a previous version of AOS and having backup snapshots without an expiration date will not change those backup snapshots. They will continue to have no expiration date after the upgrade. In this case, engage the third-party backup vendor. If the snapshots are still present after consulting with the backup vendor, consider engaging Nutanix Support at https://portal.nutanix.com.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about How to troubleshoot Network Issues on ESXi in a Nutanix Block. 
### Assistant: This article reviews troubleshooting techniques on the physical network layer of ESXi.. If the vSphere when a cluster is configured with Distributed Virtual Switches, health checks are enabled that identify misconfiguration of MTU, VLAN, Teaming and Failover: https://kb.vmware.com/s/article/2032878\n\nBelow are some of the things that can be checked when troubleshooting Network issues on an ESXi cluster configured with Standard Virtual Switches:\n\nCheck for the link and negotiation status:\n\n~# esxcfg-nics -l\n\n~# esxcfg-nics -l\n\nNote: In some ESXi hosts, ethtool may fail with error \"Can not get control fd: No such file or directory.\" Use esxcli network nic get -n  <vmnic#> instead to display the Network Device information. Example:\n\n~# esxcfg-nics -l\n\n  Check for link up/down/flapping. Log into the respective host and check the log file /var/log/vmkernel.log:\n\n~# esxcfg-nics -l\n\nForce re-negotiation by shutting down and bringing up a port:\n\n~# esxcfg-nics -l\n\nCheck VLAN and MTU settings. Use CDP to find the VLAN and network interface it is connected to:\n\n~# esxcfg-nics -l\n\nDisplay the VLANs each port group is connected to:\n\n~# esxcfg-nics -l\n\nCheck for MTU/duplex issues.\n\n~# esxcfg-nics -l\n\n~# esxcfg-nics -l\n\nCheck the interface for network errors. (The NIC interface is vmnic2 in this example. To see the list of NICs use the following command: esxcfg-nics -l):\n\n~# esxcfg-nics -l\n\nNote: For Mellanox, use:\n\n~# esxcfg-nics -l\n\nCheck if the vmkernel network interface (vmk1) for vSwitchNutanix is enabled:\n\n~# esxcfg-nics -l\n\nUse the following command to enable the vmk1 interface:\n\n~# esxcfg-nics -l. Resolve physical layer issues:\n\tMove it to a different switch portChange the cableReplace the NIC\n\tResolve VLAN issues. Check the VLAN tagging in the management port group. It should match the appropriate VLAN, or if it is a native VLAN, remove the tagging.Resolve packet drop and network latency issues:\n\tCheck the CVM /home/nutanix/data/logs/sysstats/ping_*.INFO for \"unreachable\" errors by running the following command:\n\t\tnutanix@cvm$ cat ~/data/logs/sysstats/ping_*.INFO | egrep -i 'TIMESTAMP|unreach'\n\t\tCheck the CVM for high network latency issues:\n\t\tnutanix@cvm$ cat ~/data/logs/sysstats/ping_*.INFO | egrep -i 'TIMESTAMP|unreach'\n\t\tPrint any latency beyond 30ms:\n\t\tnutanix@cvm$ cat ~/data/logs/sysstats/ping_*.INFO | egrep -i 'TIMESTAMP|unreach'\n\t\t\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about How to Enable Share Clipboard on Zoom. ### Assistant: How to Enable Share Clipboard on Zoom during remote session. How to Enable Share Clipboard on Zoom during the remote session.. 1. Find the hidden toolbar at the top of the shared screen2. Click on \"More\" (three dots) -> \"Remote Control\"4. Make sure \"Share Clipboard\" is ticked. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix Kubernetes Engine - Kubernetes cluster is unhealthy after reducing the number of worker nodes to fewer than three. 
### Assistant: In Kubernetes clusters deployed with NKE 2.7, reducing the number of worker nodes in a Production cluster to fewer than three causes the cluster to be reported as unhealthy due to an insufficient number of calico-typha replicas.. Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.\n\nCalico-typha is a component of the Calico Container Network Interface (CNI) used by NKE to increase scale by decreasing the impact Kubernetes nodes have on the datastore used by Calico. Starting with NKE 2.7, the number of replicas of calico-typha was increased from one to three in Kubernetes Production clusters. As a result of this change, decreasing the number of worker nodes in an NKE Production Kubernetes cluster from three or more nodes to fewer than three nodes will result in the Kubernetes cluster being reported as unhealthy. This is due to the cluster expecting to have three replicas of calico-typha instead of the one replica present after reducing the number of worker nodes to one or two.\n\nWhen the number of nodes is reduced to fewer than three, the node removal task will hang for several minutes at the \"Verifying health after removal of worker node\" step until the task eventually fails and reports the following error in the NKE UI:\n\nRemove worker failed to delete worker node: 1 nodes failed to be removed out of 1: failed to get the cluster health: Operation timed out: Calico: Calico unhealthy Max retries done: Failed to verify calico: Operation timed out: expecting 3 available replicas of calico-typha deployment in kube-system namespace. Currently running: 2\n\nIn the NKE UI, the status of the cluster is reported as Critical.\n\nThe Alerts page in the NKE UI will report the following two types of alerts:\n\nRemove worker failed to delete worker node: 1 nodes failed to be removed out of 1: failed to get the cluster health: Operation timed out: Calico: Calico unhealthy Max retries done: Failed to verify calico: Operation timed out: expecting 3 available replicas of calico-typha deployment in kube-system namespace. Currently running: 2\n\nIn the Prism Central UI, the \"Remove worker <worker node name> from node pool <node pool name>\" task will show as Failed and report the following error:\n\nRemove worker failed to delete worker node: 1 nodes failed to be removed out of 1: failed to get the cluster health: Operation timed out: Calico: Calico unhealthy Max retries done: Failed to verify calico: Operation timed out: expecting 3 available replicas of calico-typha deployment in kube-system namespace. Currently running: 2\n\nThe /home/nutanix/data/logs/karbon_core.out log on the Prism Central VM(s) will contain the following type of errors:\n\nRemove worker failed to delete worker node: 1 nodes failed to be removed out of 1: failed to get the cluster health: Operation timed out: Calico: Calico unhealthy Max retries done: Failed to verify calico: Operation timed out: expecting 3 available replicas of calico-typha deployment in kube-system namespace. Currently running: 2\n\nOn the Prism Element cluster, the VM removal task for the worker node is successful, and the worker VM is deleted.\n\nNote that this issue does not impact Kubernetes clusters deployed with NKE versions prior to 2.7, as those clusters would have been deployed with a single replica of calico-typha. This issue also does not impact Development clusters deployed with NKE 2.7, as Development clusters are deployed with a single calico-typha replica.. 
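The replica shortfall described above can be confirmed directly against the Kubernetes cluster with kubectl. The following is a hedged illustration only: it assumes a kubeconfig for the affected cluster has already been downloaded from the NKE UI (saved here under the hypothetical name nke-cluster.cfg) and that the deployment is named calico-typha in the kube-system namespace, as the error messages indicate.

# Show how many calico-typha replicas are desired vs. currently available
$ kubectl --kubeconfig nke-cluster.cfg -n kube-system get deployment calico-typha

On an affected Production cluster the READY column is expected to show 1/3 or 2/3, matching the "Currently running" value reported in karbon_core.out, while a healthy Production cluster shows 3/3.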
To resolve this issue, increase the number of worker nodes to three or more. For more information on adding worker nodes, see Creating a Node Pool in the Nutanix Kubernetes Engine 2.7 Guide.\n\nDecreasing the number of worker nodes in an NKE Production Kubernetes cluster to fewer than three is not recommended and should be avoided. The NKE UI does not prevent deploying a Production cluster with one or two worker nodes, but deploying a new Production cluster with fewer than three nodes is also not recommended and should be avoided.\n\nIf the number of worker nodes cannot be increased to three or more, contact Nutanix Support (login required) for assistance.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: ec_enablement_check. ### Assistant: The NCC health check ec_enablement_check checks if erasure coding should be enabled or disabled on a storage container.. The NCC health check plugin ec_enablement_check reports whether Erasure Coding data savings technique would be efficient for each of the storage containers in a cluster.This check recommends enabling erasure coding in cases where estimated EC savings are significant.\n\nRunning the NCC Check\n\nIt can be run as part of the complete NCC check by running\n\nncc health_checks run_all\n\nOr individually as:\n\nncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 1 day, by default.This check will generate a severity INFO alert A101082 after 1 failure.\n\nSample output\n\nFor status: PASS\n\nncc health_checks run_all\n\nFor status: INFO\n\nncc health_checks run_all\n\nOutput messaging\n\nCheck ID101082DescriptionChecks if erasure coding should be enabled on a storage container.Causes of failureErasure coding is recommended to be enabled on this storage container.ResolutionsEnable erasure coding to get space savings.ImpactWorkload is suitable for erasure coding space savings, but EC is not enabled.Alert IDA101082Alert TitleEnable erasure coding on containerAlert Smart TitleEnable erasure coding on container container_nameAlert MessageEnable erasure coding on container container_name to get at least savings_pct% space savings.. Troubleshooting\n\nThe NCC ec_enablement_check uses the 18% estimated savings threshold for recommendation to enable EC on specific storage containers in a Nutanix cluster.Not every workload is suitable for erasure coding, as detailed in the Prism Web Console guide.\n\nNote: In NCC version 4.6.5, this check might recommend enabling EC on the NutanixManagementShare, which is a storage container for Nutanix-only features and cannot be updated or deleted in Prism UI.\n\nNutanix Engineering is aware of the issue and is working on a solution.\n\nResolving the issue\n\nEnable erasure coding only on the storage containers identified by this NCC check.\n\nRealizing the EC space savings will take at least two Curator full scans to do the encoding and remove the excessive replicas. Curator full scans are scheduled to run every 6 hours by default. For more information on Curator scan types and schedules, see KB 2101.\n\nCollecting additional information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Collect the NCC output file ncc-output-latest.log. 
For information on collecting the output file, see KB 2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691. \n\tnutanix@cvm$ logbay collect --aggregate=true\n\t\n\nAttaching files to the case\n\nAttach the files at the bottom of the support case on the support portal. If the size of the NCC log bundle being uploaded exceeds 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: ahv_file_integrity_check. ### Assistant: Introduces in NCC 4.1.0, the AHV file integrity check detects any unexpected changes in the base configuration of AHV host.. The NCC health check ahv_file_integrity_check (AHV file integrity check) detects any unexpected changes in the base configuration of AHV host.This check runs on only AHV, but also only if the cluster has enabled 'AIDE' via NCLI and additional STIG DoDIN APL enablement has been performed on supported AOS/AHV versions. By default, and unless specific steps have been taken to enable this functionality on-demand, this check will return a PASS result.Using \"aide\" as the file integrity tool, it tracks any changes in critical files on an AHV host filesystem. A list of files and their tracked properties are periodically stored and any changes in the tracked properties of these files will raise a warning alert which in turn triggers the alert workflows configured on the cluster such as email, syslog, SNMP trap, Prism Alert notifications.It can be run as part of the complete NCC check by running:\nnutanix@cvm$ ncc health_checks run_all\nor individually as:\n\nnutanix@cvm$ ncc health_checks run_all\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.This check runs on each CVM to assess its AHV host. It does not run on Prism Central.This check is scheduled to run once per week.This check produces a WARN alert after a single failure.Sample outputCheck Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\nCheck Status: WARN\n\nnutanix@cvm$ ncc health_checks run_all\nCheck Status: ERR\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID111086DescriptionWeekly file integrity check to detect any modifications in the baseline configuration of the host.Causes of failureDetected unexpected modifications in the baseline configuration of the host.ResolutionsRefer KB10414 to review the detected changes.ImpactBaseline configurations of the host might have been changed in an unauthorized manner.Alert IDA111086Alert TitleFile integrity check found modifications.Alert MessageFile integrity tool has found modifications on host {host_ip}.. Investigating a WARNOn detection of any unexpected changes, a file will be created under /var/log/aide/ directory of the affected AHV host.The name of the file will be in the format of file-integrity-tool-YYYY-MM-DD_HH-MM-SS.log. The timestamp in the file name is the time of execution of file integrity tool.For assistance with assessing the log output, engage Nutanix Support.To speed up the resolution time and minimize possible impact, avoid performing any activity on the cluster that would involve a host filesystem changes or reboots (including upgrades). 
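While waiting for that review, the reports written by the file integrity tool can be located from any CVM. This is a hedged illustration only, assuming the default /var/log/aide/ directory and the file-name format described above, plus the hostssh helper available on CVMs; it is read-only and does not alter the aide database.

# List the most recent file integrity reports on every AHV host (newest last)
nutanix@cvm$ hostssh 'ls -ltr /var/log/aide/file-integrity-tool-*.log 2>/dev/null | tail -5'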
Also, collect and attach the following information to the support case:\nA complete NCC report:\n\nnutanix@CVM$ ncc health_checks run_all\n\nA log bundle generated from the cluster. This can be collected through Prism Web Console's health page. Select Actions > Collect Logs. Logs can also be collected through the command line using logbay (KB 6691 - NCC - Logbay Quickstart Guide):\n\nnutanix@CVM$ ncc health_checks run_all\n\nInvestigating an ERRIf the output \"ERR : Caught error while extracting information using host agent API call\" is seen for any node, it may indicate that the periodic file state tracking has not yet run for the first time to initialize the aide database on the affected AHV host. Please wait for the next scheduled run of this check. In case the above-mentioned step does not resolve the issue, consider engaging Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: nvme_raid_check. ### Assistant: The NCC health check nvme_raid_check verifies if the NVMe RAID is functioning correctly.. The NCC health check nvme_raid_check verifies if the NVMe RAID is functioning correctly. The NVMe RAID controller, having two NVMe drives connected in RAID1 mode, serves as the host boot device on G8 platforms. In case of a single drive failure, the hypervisor will still be available and functional.This check contains the following parts:\n\nChecking the RAID status. Checks the RAID status for high availability. FAIL is returned if RAID is in degraded mode.Checking the NVMe disk for uncorrectable ECC errors. If a disk encounters any uncorrectable errors, this check returns FAIL.Checking the wear-out status of the NVMe boot disk. If the NVMe disk is worn out and needs to be replaced, this check returns WARN.Checking the NVMe disk temperature. If the NVMe disk temperature is not in the range given by the vendor, WARN is returned.Checking the RAID rebuild status. Checks whether RAID rebuild is currently in progress. WARN is returned if rebuild is in progress.Checking the firmware of the NVMe boot disk. If the NVMe disk needs an upgrade, this check returns WARN.Checking the firmware of the NVMe RAID Controller. Checks if the RAID controller needs a firmware update.\n \n\nRunning the NCC check\n\nYou can run this check as part of the complete NCC Health Checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr you can also run this check separately:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. 
Select All checks and click Run\n\nSample output\n\nFor status:  PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: FAIL (on RAID status)\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: FAIL (on cluster formation)\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN (on M.2 UECC)\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN (on M.2 Temperature)\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN (on NVME wear-out)\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN (on RAID rebuild)\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN (on NVME firmware)\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN (on RAID firmware)\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: ERR (on could not fetch host hardware raid info)\n\nnutanix@cvm$ ncc health_checks run_all\n\n \n\nOutput messaging\n\nCheck ID106071DescriptionCheck the status of NVMe RAID if degraded.Causes of failureOne of the M.2s connected to the RAID card is offline or has issues. Check alert message for the drive and port details.ResolutionsReplace the failing M.2. Check for the port and drive details in the alert message. Or Contact Nutanix Support.ImpactThe node will be up and running with one M.2. But it has a risk of going down in the absence of redundancy.ScheduleEvery 30 minutes\n \n\nCheck ID106072DescriptionChecks the temperature of each drive and alert if it is not within 0C and 75CCauses of failureThe ambient temperature in the affected host environment may be too high for effective cooling, or airflow through host may be suboptimal.ResolutionsCheck for any airflow obstructions physically near the node and the vents. Make sure air conditioning systems are functional.ImpactContinued operation above temperature thresholds may damage the host boot drives and lead to unplanned downtimeAlert IDA106072Alert Smart TitleTemperature of M2 disk of node host_ip is out of rangeAlert TitleTemperature of M2 disk is out of rangeAlert Messagealert_msgScheduleEvery 30 minsNumber of failures to alert1\n \n\nCheck ID106073DescriptionChecks if the disk has returned UECC errors.Causes of failureBoth M.2s connected to the RAID card have UECC errors.ResolutionsIt is recommended to replace one of the drives having UECC.ImpactContinued operation with UECC errors may lead to unplanned downtime.Alert IDA106073Alert Smart TitleM2 disk of node host_ip, returned UECC errorsAlert TitleM2 disk returned UECC errorsAlert Messagealert_msgScheduleEvery 30 minutesNumber of failures to alert1\n \n\nCheck ID106074DescriptionChecks wear-out level of the M.2.Causes of failureM.2 has worn out and reaching its specification limit.ResolutionsThe disk needs to be replaced.ImpactSlow Performance.Alert IDA106074Alert Smart TitleM2 disk of node host_ip, worn outAlert TitleM2 disk has worn outAlert Messagealert_msgScheduleEvery 30 minutesNumber of failures to alert1G8 False Positive IssuePlease upgrade NCC to Version 4.6.2.1 or higher. There is a false positive condition that is addressed in newer NCC versions. No drive replacement is needed. 
To verify resolution after upgrading, run the check again: ncc health_checks hardware_checks disk_checks nvme_raid_check.\n \n\nCheck ID106075DescriptionChecks if M2 disk firmware needs to be upgraded.Causes of failureM.2 drive is running outdated firmware and requires an update.ResolutionsReview KB10355 for instructions on upgrading the host boot drive firmware.ImpactM.2 drive may not have up-to-date performance and stability features of more recent firmware.Alert IDA106075Alert Smart TitleFirmware of the RAID M2 disk of node host_ip, needs to be upgraded. Perform inventory from Life Cycle Manager to check and update to the latest firmware version.Alert TitleFirmware of the RAID M2 disk needs to be upgraded.Alert Messagealert_msgScheduleEvery 24 hoursNumber of failures to alert1\n \n\nCheck ID106076DescriptionChecks if the RAID card BIOS, Firmware needs to be updated.Causes of failureThe hardware RAID controller is running outdated BIOS or Firmware version. Please refer to the alert message.ResolutionsReview KB10355 for instructions on upgrading the BIOS, Firmware, or Bootloader of the hardware RAID controller.ImpactHardware RAID controller may not have up-to-date performance and stability features of more recent firmware.Alert IDA106076Alert Smart TitleRAID card BIOS or Firmware or boot loader of node host_ip, needs to be updated.Alert TitleRAID card BIOS or Firmware or boot loader needs to be updated.Alert Messagealert_msgScheduleEvery 24 hoursNumber of failures to alert1\n \n\nCheck ID106077DescriptionChecks the temperature of the RAID card and alert if it is more than 95CCauses of failureThe ambient temperature in the affected host environment may be too high for effective cooling, or airflow through the host may be suboptimal.ResolutionsReview the physical environment of the affected host and ensure air-conditioning systems are functioning and correctly calibrated and confirm the absence of any airflow obstructions around vents and the host itself.ImpactContinued operation above temperature thresholds may damage the host boot drives and lead to unplanned downtime.Alert IDA106077Alert Smart TitleTemperature of RAID card on node {host_ip} is above thresholdAlert TitleTemperature of RAID card is above thresholdAlert Messagealert_msgScheduleEvery 30 minutesNumber of failures to alert1\n\nNote: This hardware-related check executes on the Nutanix NX hardware.. 1. If the check status suggests a firmware update is required:\n\nFirmware for the concerned hardware component (M.2 / Raid Controller) should be upgraded. Steps to be performed:\n\nNVMe M.2 FirmwareNVMe RAID Controller\n\nThe M.2 and Controller firmware are end-user upgradable.Upgrade the firmware using LCM (Life Cycle Manager) by running an inventory, ensuring the LCM framework is up to date, and following the prompts to update the required components. For more details, see the Life Cycle Manager Guide.If there is no Internet connectivity, LCM \"dark site\" process can be used. Refer to the following document: Using the Life Cycle Manager Without Web AccessIf LCM is not an option, refer to KB 6937 for a manual update\n\n2. If the check returns a temperature warning:\n\nThe component temperature may be high due to a physical environment condition affecting node cooling, such as blocked or failed chassis cooling fans, ambient node temperature too high in the data center, AC failure, or insufficient hot air exhaust. 
Steps to be performed:\n\nNVMe M.2 card temperatureNVMe RAID controller temperature\n\nCross-reference NCC output for fan failures or other high component temperatures.Closely review the physical environmental cooling and heat extraction conditions for the physical node(s) and chassis.Resolve any airflow or cooling issues and re-evaluate.If environmental conditions are optimal, and there are no other signs of node/chassis temperate control issues, contact for further assistance.\n\n3. If the check status returns FAIL or WARN for the below checks, contact Nutanix Support at https://portal.nutanix.com for further assistance.\n\nRAID statusM.2 UECCM.2 wear-out\tKnown issue: In rare conditions, a false positive alert may happen when NCC fails to read the NVMe M.2 data. Nutanix recommends that customers upgrade to NCC 4.6.2.1 or later, which contains the fix for this issue.\n\n4. If the check status returns ERR:\nConfirm hypervisor is supported on the hardware platform. Contact Nutanix Support for further assistance.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about How to Configure Network Access on Node Booted into Phoenix or Rescue Shells. ### Assistant: This article describes how to configure network access on a node booted info Phoenix or Rescue shells.. Phoenix is an in-memory Gentoo Linux OS that Nutanix uses for various workflows. If you find that a host in your cluster is unreachable over the network but IPMI console shows a prompt similar to the one below, then you are in the Phoenix Shell.\n\nphoenix / #\n\nYou may find your node booted into Phoenix due to any of the following circumstances:\n\nFoundation imagingPhoenix imagingCluster expansionPrism Host Boot Device (satadom) Repair - Proactive or FailedOne-Click firmware upgradeLCM firmware upgrade\n\nThere is a similar shell that can appear when booting a CVM with svmrescue.iso. You may find yourself here in a CVM's console view if your Single SSD Repair task has failed.\n\nphoenix / #\n\nThis article explains how you can configure non-persistent network access on either of the above shells for the purposes of debugging or log collection.\n\n . Steps to configure network access in a Rescue shell. For the purposes of simplicity I am using Phoenix, but the steps are the same for Rescue.\n\nWARNING: If a VLAN tag is required, DO NOT perform the steps below. Proceed to the next procedure called \"If you need to tag a VLAN for access\" further below.\n1) List your available physical interfaces (ethX).\n\nphoenix / # ifconfig -a\n2) Cycle through the interfaces so that they can detect any active links. Do this for all the interfaces listed in the above output.\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n3) Cycle through the interfaces to find the one which can detect a link. Usually this will be a 10000baseT/Full interface. 
Look for \"Link detected: yes\".\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n4) Once you have found the interface with a link detected, configure the local host's CVM IP address on the interface.\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n5) Configure the correct netmask.\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n6) Configure the default gateway for the same interface.\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n7) At this point, you should be able to ping the default gateway. \n\nphoenix / # ifconfig -a\nExample:\n\nphoenix / # ifconfig -a\n8) The next step is to enable SSH access so that you can log in using PuTTY.Foundation/Phoenix < 4.3\n\nphoenix / # ifconfig -a\n\nFoundation/Phoenix 4.3-5.1\n\nphoenix / # ifconfig -a\n\nFoundation/Phoenix >= 5.2\n\nphoenix / # ifconfig -a\n9) Now try to SSH into the CVM IP address you just assigned to Phoenix using \"root\" as the username and \"nutanix/4u\" as the password.\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n\nNote: The password will always be \"nutanix/4u\".Note: If you see a message like \"Permission denied (publickey,password,keyboard-interactive)\", you'll have to remove the entry for this IP address out of the /home/nutanix/.ssh/known_hosts file and then try again. \n\nIf you need to tag a VLAN for access:\n\nDo not perform the above steps for IP configuration if a VLAN tag is required. Start from step 1 below.\n1) Add a VLAN tag to the eth0 interface. Replace vlan_tag below with the VLAN used for this subnet:\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n2) Assign an IP and subnet mask to the interface. Replace vlan_tag with the VLAN used for this subnet:\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n3) Assign the default gateway\n\nphoenix / # ifconfig -a\n\nExample:\n\nphoenix / # ifconfig -a\n4) To enable shell access, go to step 7 of regular IP configuration above.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about How To Set CPU Power Management States (C-State) in AHV. ### Assistant: This article describes how to disable processor C-States for latency sensitive applications on AHV. Modern CPUs utilized a technology called \"C-States\" to manage the amount of power that individual processor cores are utilizing.  When a core is idle, the server's BIOS will reduce its clock rate, power draw, or both in an effort to make the system more energy efficient.  In most cases, this is the desired condition as it can significantly reduce power consumption.  The unused power may be used by other CPU cores to increase their frequency (GHz), allowing instructions executing on active CPU cores to complete faster.The number of C-States available depends on the processor make and model and the capabilities of the server BIOS.  
C-States can be broken down into two general buckets:\n\nActive/Idle C-States - Normal CPU activity, no power savings\n\tC0 - The CPU is actively executing instructionsC1 - The CPU is idle, but fully online (voltage + clock-speed)C1E - The CPU is idle, its clock-speed has been reduced, but is at full voltage\n\tSleep States - CPU's clock is stopped and voltage reduced (most common states listed)\n\tC2 - CPU clock is stoppedC3 - CPU clock is stopped and voltage reducedC6 - CPU internal/external clocks are stopped, voltage reduced or powered off\n\t\n\nHowever, some application CPU usage patterns are bursty - requiring CPUs to transition from a low speed/voltage to full speed/voltage.  If the application requires highly performant, low latency CPU operation, it may be sensitive to CPU cores transitioning from a low-power sleep state to full power/clock-speed.  For applications where low latency is more important than power savings, the hypervisor can be instructed to disable processor C-States, preventing the system from powering down CPUs.\n\nThese settings are recommended or may be required for applications including Epic Operational Database (ODB), Epic Hyperspace, Intersystem Caché and MEDITECH File Server, Oracle and MS SQL databases, as well as SAP HANA, RDMA (Remote Direct Memory Access) enabled systems and other environments where C-States could add latency.. Before modifying the CPU power states from the operating system, the BIOS has to allow it.  Consult the BIOS documentation for the specific platform running the workload and follow the recommendations from the hardware manufacturer for specific applications.  Ensure that the BIOS allows managing CPU power states from the OS (probably only an issue on old BIOS levels).  AHV can override UEFI/BIOS C-State settings, so you need to use this process in addition to the BIOS settings.\n\nNote that directly editing C-States in the BIOS is not supported for the Nutanix hardware platform as outlined in Release Notes | BMC and BIOS.\n\nEnable the AHV C-State service\n\nThere is a service available on AHV that will disable the use of Sleep C-States if the BIOS allows it. To use this service, run the following commands on the CVM to enable it on each AHV host and start it.\n\nOn AHV 20190916.x (bundled with AOS 5.15.4 and part of AOS 5.16 and higher):\n\tnutanix@cvm$ hostssh systemctl enable cstate\n\t\n\nThe cstate service disables C-States C3 and C6. Testing has shown that disabling these two states has the greatest impact on performance. C-States C0, C1, C1E and C2 are not affected.\n\nTo verify that the state service is running, execute the following command from a CVM:\n\nnutanix@cvm$ hostssh systemctl enable cstate\nNote: The setting will also need to be applied when additional hosts are added to the cluster or if a host is re-imaged.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A400120 - CalmIDFError. ### Assistant: Investigating CalmIDFError alerts on a Nutanix cluster. Note: Nutanix Self-Service was formerly known as Calm.\n\nThis Nutanix article provides the information required for troubleshooting the alert CalmIDFError for your Nutanix cluster.\n\nAlert Overview\n\nThe CalmIDFError alert is generated when there are IDF errors in Calm.\n\nCase: Registration of an entity type has failed\n\n\tThis also results in the subsequent failure of migration of this entity. 
This usually happens because there are a large number of entities present for the given entity type. The connection between Calm and IDF times out (default 5-minute timeout) between entity registration, which causes this issue.\n\t\n\nSample Alert\n\nBlock Serial Number: 16SMXXXXXXXX\n\nOutput messaging\n\nCheck ID400120DescriptionError in Calm related to IDFCause of failureIDF related error in CALMResolutionPlease refer to KB-14577ImpactDB Seed or Calm Entity migration might fail. Calm Service will get impactedAlert IDA400120Alert TitleIDF error in CalmAlert MessageIDF error: message. Troubleshooting and Resolving the Issue\n\nCase: Registration of an entity type has failed\n\n\tRestarting the nucalm docker container might fix the issue. On restarting, Calm tries to re-register the entities required and if there were intermittent issues related to IDF, they might get resolved.\n\n\tSteps to restart nucalm container:\n\n\tLog in to Prism Central (PC).Execute:\n\t\tnutanix@pcvm$ svmips\n\n\t\tThis shall output the Prism Central IPs where you need to restart nucalm container.\n\t\tLog in to each Prism Central IP and execute:\n\t\tnutanix@pcvm$ svmips\n\n\t\tAfter successfully executing the above command, you should see a list of services running on Prism Central with nucalm: [] indicating the nucalm docker container has stopped.\n\t\tNow execute:\n\t\tnutanix@pcvm$ svmips\n\n\t\tAfter successfully starting the container, you should again see a list of services on Prism Central. This time, nucalm should have some values associated with it.\n\t\tNow, to verify if all the entities were seeded successfully, you can check for the nucalm container logs present at ~/data/logs/nucalm.out. After restarting the container, if you see log traces like the one below, then the registration is still failing.\n\t\tnutanix@pcvm$ svmips\n\n\t\tIf you do not see the above error trace, the CALM entities have been registered successfully in the IDF.\n\t\t\n\t\n\nIf you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. Collect additional information and attach them to the support case.\n\nCollecting Additional Information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691.\n\tnutanix@pcvm$ svmips\n\t\n\nAttaching Files to the Case\n\nTo attach files to the case, follow KB 1294.\n\nIf the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A160167 - File Server HA state. ### Assistant: Alert to notify when a Files Cluster is in an HA state. This Nutanix article provides the information required for troubleshooting the alert File-Server-HA-state for your Nutanix Files cluster.\nAlert Overview\n\nThe File-Server-HA-state alert is generated when Files is in a High Availability state.\n\nSample Alert\n\nBlock Serial Number: 16SMXXXXXXXX\nOutput messaging\n\nCheck ID160167DescriptionFile Server HA stateCause of failureFile Server is in High Availability StateResolutionsRefer to KB article 14282. 
Contact Nutanix support if issue still persists or assistance needed.ImpactFile Server Performance might get affected.Alert IDA160170 Alert TitleFile Server HA stateAlert Message \"File Server Performance might get affected.\"\n . Resolving the IssueNote: This alert can trigger as an \"INFO\" event during Nutanix Files upgrades.  This is expected behavior. \n\nLogin to one of the FSVMs and confirm that this is not a false positive.\n\nnutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state\nConfirm if FSVMs are up and contactable. The emboldened text below points out where an FSVM is inaccessible. \n\nnutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state\nConfirm via the Prism GUI that the FSVMs are powered on. The name of the FSVM will start NTNX then contain the name of the Files Server. For example: NTNX-Files1-1.If the FSVM is powered on, please open the Console of the FSVM to see if there is any output displayed. Please gather logs using the below guide and attempt restarting the FSVM before providing the logs to Nutanix Support. \nCollecting Additional Information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094.\n\tCVM logs stored in ~/data/logs/minerva_cvm*\n\t\tNVM logs stored within the NVM at ~/data/logs/minerva*\n\t\t\n\t\tTo collect the file server logs, run the following command from the CVM, ideally run this on the Minerva leader as there were issues seen otherwise, to get the Minerva leader IP, on the AOS cluster, run\n\n\t\tnutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state\n\n\t\tOnce you are on the Minerva leader CVM, run:\n\t\t\tnutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state\n\n\t\t\tFor example:\n\n\t\t\tnutanix@NTNX-A-FSVM:~$ afs ha.minerva_check_ha_state\n\t\t\t\n\t\t\n\t\n\nAttaching Files to the Case\n\nTo attach files to the case, follow KB 1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.\n\nRequesting Assistance\n\nIf you need assistance from Nutanix Support, add a comment in the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers. You can also press the Escalate button in the case and explain the urgency in the comment, and then Nutanix Support will be in contact.\n\nClosing the Case\n\nIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about AHV ISO shows \"PRE-RELEASE/TESTING\" during installation. ### Assistant: Manual installation of AHV 20190916.x ISO downloaded from portal may show as \"Pre-Release/Testing\" on the installation screen.. Manual installation of AHV 20190916.x ISO downloaded from the portal may show as \"Pre-Release/Testing\" on the installation screen. This can be seen on UI in red.A similar message is observed during the boot in the text mode.. 
Installation of AHV directly from AHV ISO is not a supported workflow.In order to reinstall AHV, consider using the Host Boot Disk Repair workflow (the section about hardware replacement is not needed in this case and should be skipped).Or follow the steps in KB 3523 to make a bootable Phoenix ISO with bundled AHV and AOS bits.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A111092 - VM Deployment from Template check. ### Assistant: NCC 4.6: This Nutanix article provides the information required for troubleshooting the alert \"VM Deployment from Template check\" for your Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert \"VM Deployment from Template check\" for your Nutanix cluster.Alert OverviewThe \"VM Deployment from Template check\" alert is generated when VM deployment from a template can not be successfully completed.Sample Alert\nWarning: Report on deploy VM to cluster status\nPotential ImpactVM is not deployed.Output Messaging \nCheck ID111091DescriptionReport on deploy VM to cluster statusCause of FailureA PC service or Destination Cluster may be unreachable or undergoing maintenance.ResolutionsCheck connectivity between cluster, Prism Central and the destination Prism Element. Ensure PE-PC connection is healthy on the destination PE cluster and reinitiate the operation to make up for the partial failure.ImpactSome VM(s) will not be present on the Cluster.Alert Smart TitleFailed to deploy VM {vm_name} from Template {template_name} to {cluster_name}    Alert TitleVM deployment from Template to Prism Element Cluster failedAlert MessageVM from Template could not be fully deployed to the Cluster\n  . TroubleshootingEnsure Prism Element - Prism Central connection is healthy and retry the operation. Contact Nutanix Support at https://portal.nutanix.com if additional assistance is required. Collect additional information and attach them to the support case.\n\nCollecting Additional Information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB-2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB-6691.\n\nnutanix@cvm$ logbay collect --aggregate=true\n\nIf the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command:\n\nnutanix@cvm$ logbay collect --aggregate=true\nAttaching Files to the CaseAttach the files at the bottom of the support case on the support portal.If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB-1294.Requesting AssistanceIf you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers. You can also click the Escalate button in the case and explain the urgency in the comment, and Nutanix Support will be in contact.Closing the CaseIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. 
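To complement the Logbay collection step above, a hedged sketch of an invocation that also uploads the bundle to a support case is shown below. It assumes the --dst (FTP/SFTP destination) and -c (case number) options described elsewhere in this document (see KB 6691) are available in your NCC version; the case number and destination are placeholders.

nutanix@cvm$ logbay collect --aggregate=true
nutanix@cvm$ logbay collect --aggregate=true --dst=ftp://nutanix -c 00123456   # placeholder case number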
This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Memory hot-add to a UVM running on AHV is rounded up to the next GB. ### Assistant: Hot adding memory is always rounded up to the next GB (e.g. setting VM memory to 3.1G will be rounded up to 4G). Setting UVM memory while powered off will always be set as it is without rounding up.. Hot adding memory to a user VM running on AHV will round up the added memory to the next Gb. That can lead to one of the following situations:\n\nUVM's memory is currently set to a whole X GB, so setting it to any value with a fraction will be always rounded up to the next Gb value.\n\tExample: The VM currently has 8Gb of memory:\n\n\tnutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory\n\n\tAttempting to increase the VM memory to 8392Mb:\n\n\tnutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory\n\n\tThe process completes successfully, however, verifying the actual amount of memory shows that the memory is now at 9Gb:\n\n\tnutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory\n\tUVM's memory is currently set with a fraction of GB. Increasing the memory with any value will be rounded up to a Gb and then added to the old value.\n\tExample: The VM currently has 8.5Gb of memory (8704Mb):\n\n\tnutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory\n\n\tAttempting to add 200Mb to that VM (end value 8904Mb):\n\n\tnutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory\n\n\tThe operation completes successfully, but instead of 200Mb it adds 1Gb to the original value. So, it rounds up the added amount to the next Gb value:\n\n\tnutanix@CVM:~$ acli vm.get <UVM_name> | grep -i memory\n\tCold adding/setting memory to a VM will always be set as entered and will not be rounded up.. To set the memory for a VM with a fraction of Gb, it is required to power off the VM and cold-add the memory.If the VM memory is already configured with a fraction of Gb, in order to set the whole Gb, the VM would need to be powered off.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about AHV VMs may be restarted unexpectedly due to memory corruption. ### Assistant: In some cases, unexpected user VM restart can be caused by memory corruption. If Frodo process, which is responsible for servicing I/O generated by user VM, tries to access corrupted memory page, it will stop working and user VM will be rebooted.. In some cases, unexpected guest VM restart is caused by memory corruption. 
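For the memory hot-add rounding behaviour described in the preceding article, a hedged sketch of how it can be observed with acli follows, assuming a powered-on VM named TestVM (a hypothetical name) and that acli vm.update accepts a memory= size on your AOS version; verify the exact syntax before using it.

nutanix@cvm$ acli vm.get TestVM | grep -i memory     # note the current value
nutanix@cvm$ acli vm.update TestVM memory=8904M      # request a non-whole-GB size while powered on
nutanix@cvm$ acli vm.get TestVM | grep -i memory     # value is rounded up to the next whole GB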
It may happen if the Frodo process, which is responsible for servicing I/O generated by guest VM, tries to access a corrupted memory page or during VM live migration.VM was rebooted during normal operations/home/nutanix/data/logs/acropolis.out log on CVM (Controller VM) that hosts Acropolis leader will contain the following events:\n\n2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff\n\n/var/log/libvirt/qemu/<vm uuid>.log on the AHV host will contain the following events:\n\n2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff\n\n/var/log/messages on the AHV host will contain the following events:\n\n2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff\n\nVM was rebooted during live migrationThe migration task may fail with the following error:\n\n2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff\n\n/var/log/libvirt/qemu/<vm uuid>.log on the AHV host will contain the following events:\n\n2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff\n\n/var/log/messages on the AHV host will contain the following events:\n\n2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff\n\nIn addition, the following logs can confirm memory corruption:The following error can be seen in the /var/log/mcelog on the AHV host:\n\n2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff\n\nYou may notice on CVM that the value of the HardwareCorrupted counter is not equal to zero:\n\n2019-08-28 23:02:32 INFO state_change_event_op.py:219 VM 77ea1286-5aed-4a80-9f31-8aa7d709f680 changed state: kOn -> kOff. The hardware issue is suspected to cause this problem. Make sure that the latest recommended versions of BIOS and BMC firmware are running and engage hardware vendors to perform hardware diagnostics. If the hardware is NX hardware, consider engaging Nutanix Support at https://portal.nutanix.com/.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix Files - Steps to convert Non Distributed Share to a Distributed Share. ### Assistant: The following article provides the steps necessary to convert a share from a standard share to a distributed share.. This article applies to Nutanix Files 4.0.2 and later.\n\nBeginning in Nutanix Files 4.0.2, a feature was introduced to support converting a file share from one type of share to another. In this initial release, the workflow supported is converting from a standard share to a distributed share. \n\nNote: this article is intended to be walked through by Support with a customer to validate whether the share to be converted is a good candidate and the steps to kick off the share conversion.\n\nWhat is supported? \n\nFrom Standard to Distributed Shares (One-way conversion). CLI-based processes (start, stop, pause, resume, abort, cut-over).Max 50M files and 2000 TLDs in the share to be converted.Only a single conversion can be run at once. 
\n\nNote: Converting a standard (general) share to a distributed (home) share has a known limitation where the Share level attributes are not applied on the target (distributed) share. The share attributes will need to be re-applied again at the share level i.e. at the root of the target share.\n\nWhat is not supported?\n\nConversion via UI.Files at the root of the source standard share. Windows hidden or system files at the root of the standard share may prevent the share conversion. To see these files from the Windows client, please make sure to select \"Show hidden files, folders, or drives\" and uncheck \"Hide protected operating system files (Recommended)\" under the View tab of Windows Explorer Folder Options.Distributed shares to Standard shares (future roadmap).Shares with Smart Tiering or Smart DR policies.Multiple active share conversions (1 active conversion at a time).DoDIN APL ConfigurationsChanging the amount of storage allocated to Files is prohibited whilst a conversion process is running. \n\nWhat guardrails are in place?\n\nShare Attributes\n\n\tMust manually set the following post-conversion: SSR, File Blocking, and Quotas.Antivirus applied post-conversion cutover.\n\tFile Server updates such as re-IP, Node Add, and Node delete are allowed.VG Move tasks cannot be initiated during conversion.TLDs created during conversion are limited to 2,000.Smart DR or Tiering. If enabled, the shares cannot be converted.There is, as of yet, no guardrail for storage space used by Files. Make sure there's enough free space to replicate the share you're converting. . Note:\n\nAll commands will be run on the File Server VM (FSVM) command line unless stated otherwiseIn the following examples, the non-distributed share name is test_share. \nDoDIN enabled clusters only\n\nUse this command to disable DODIN temporarily to allow share conversion to complete:\n\nnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\nEnable DoDIN again with this command:\n\nnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\nIf you forget to do the above on a DoDIN-enabled cluster, the conversion will never complete.The FSVM command 'afs sync.make_distributed status conversion_id=<UUID>' will continue to cycle through all of the files for conversion returning \"BYTES TRANSFERRED - NONE\".Steps to convert a share:\n\nRun a pre-check on the standard share to be converted (note this can be skipped because a pre-check is done at the start of a share conversion):\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\t\tNote: The conversion process doesn't support files at the root of the source SMB share shares since post-conversion files at the root are not supported. This guardrail is part of the precheck process. You may see the following error:\n\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tExample:\n\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tStart the share conversion on the standard share to be converted:\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tNote: Once the share conversion process is initiated, a Share conversion operation will be initiated with the conversion id.\tExample:\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tTo check the status of the running conversion. 
Use the conversion ID obtained in step 2.\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tExample:\n\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tBefore proceeding with the cutover, you will want to make sure that the status shows Status: Active [L0 Completed]:\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tInitiate the cutover to the distributed share. Once this command is initiated, share connections to the standard share will be flushed. In addition, both the standard share and the new distributed share will be in No Access Mode. A final replication will take place.\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tExample:\n\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tNote: The default cutover time is 30 mins to complete the final replication. If that time limit is exceeded, the original share will be made available again, or you can modify the time by changing the “cutover_timeout” option.Check the status of the cutover and look for “status: completed”:\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tExample:\n\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tNote: The old standard share will be renamed to <standard_share_name>_ntnx_old, and a new distributed share will be created with the original share name.\n\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tTo apply the settings to the new distributed share that was set to the original standard share, run the following command:\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tExample:\n\n\tnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\tIf the clients are having file access issues post-conversion, have them remount the NFS export or the SMB share to avoid file access issues from stale file handles. In addition, they may have to clear the DFS cache on Windows clients.\n\nIf, for any reason, you need to roll back this operation, such as running out of storage space allocated to Files, you can abort the procedure using the following:\n\nnutanix@FSVM:~$ afs infra.set_sudo_auth state=off\n\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about AOS Only - What to do when /home partition or /home/nutanix directory on a Controller VM (CVM) is full. ### Assistant: This article describes ways to safely free up space if /home partition or /home/nutanix directories of a CVM become full or do not contain sufficient free space to facilitate an AOS upgrade.. Important Notes:\n\nDo not use this KB or the cleanup script if /home partition is exceeding the limit on a Prism Central VM (PCVM). For the PCVM issue, refer to KB 5228.Login to the CVMs as nutanix userDo not treat the Nutanix CVM (Controller VM) as a normal Linux machine.Do not use rm -rf under any circumstances unless stated. 
It will lead to data loss scenarios.If you are running LCM-2.6 or LCM-2.6.0.1, LCM log collection fills up /home directory please refer KB 14671 for workaround.If you receive /home partition usage high alert on a cluster running NCC 4.0.0, also check KB 10530.You can review the specific clusters affected by this alert via the discoveries on the Support Portal powered by Nutanix Insights hereContact Nutanix Support if you have any doubts.\n\nCVM /home partition or /home/nutanix directory being full can be reported in two scenarios:\n\nThe NCC health check disk_usage_check reports that the /home partition usage is above the threshold (by default, 75%).The pre-upgrade check test_nutanix_partition_space checks if all nodes have a minimum of 5.6 GB space in the /home/nutanix directory.\n\nThe following error messages are generated in Prism by the test_nutanix_partition_space pre-upgrade check:\n\nNot enough space on /home/nutanix directory on Controller VM [ip]. Available = x GB : Expected = x GB\n\nNot enough space on /home/nutanix directory on Controller VM [ip]. Available = x GB : Expected = x GB\n\nNot enough space on /home/nutanix directory on Controller VM [ip]. Available = x GB : Expected = x GB\n\nNot enough space on /home/nutanix directory on Controller VM [ip]. Available = x GB : Expected = x GB\n\nNutanix reserves space on the SSD tier of each CVM for its files and directories. These files and directories are in the /home folder you see when you log in to a CVM. The size of the /home folder is capped at 40 GB so that most of the space on SSD is available for user data.\n\nDue to the limited size of the /home partition, running low on free space and triggering Prism Alerts, NCC Health Check failures or warnings, or Pre-Upgrade Check failures is possible. These guardrails exist to prevent /home from becoming full, as this causes data processing services like Stargate to become unresponsive. Clusters where multiple CVMs' /home partitions are 100% full often result in downtime of user VMs.\n\nThe Scavenger service running on each CVM is responsible for the automated clean-up of old logs in /home. Improvements to the Scavenger service scope were made in AOS 5.5.9, 5.10.1, and later releases. For customers running earlier AOS releases, or in special circumstances, it may be necessary to manually clean up files in some directories to bring the space usage of /home down to a level that will permit the AOS upgrade.\n\nWhen cleaning up unused binaries and old logs on a CVM, it is important to note that all the user data partitions on each drive associated with a given node are also mounted within /home. Nutanix advises strongly against using undocumented commands like rm -rf /home since this will also wipe the user data directories mounted within this path. This article aims to guide you through identifying the files causing the CVM to have low free space and removing only those that can be deleted safely. . General Guidance\n\nChecking the space usage in /home.\n\n\tTo accommodate a potential AOS upgrade, usage should be below 70%. Use the df -h command to verify the amount of free space in /home. 
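Alongside the df -h check above, a hedged, read-only sketch for spotting which of the approved directories are consuming space is shown below. It assumes standard GNU du on the CVM, it does not delete anything, and any removal should still follow only the approved steps in this article.

nutanix@cvm$ allssh "df -h /home"
nutanix@cvm$ du -sh ~/data/cores ~/data/binary_logs ~/data/log_collector ~/software_downloads ~/software_uncompressed 2>/dev/null | sort -h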
In the example below, CVM x.x.x.12 /home usage is 81%.\n\n\tnutanix@cvm$ allssh \"df -h /home\"\n\n\tCVM /home partition information can be collected using the logbay command (NCC 4.0.0 and above, KB 6691).\n\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tCleaning unnecessary files under /home directory.\n\tIf you have any open cases with pending Root Cause Analysis, check with the case owner whether these log files are still required or can be discarded.\n\n\tWarnings:\n\n\tDO NOT treat the Nutanix CVM (Controller VM) as a normal Linux machine.DO NOT use rm -rf under any circumstances unless stated. It will lead to data loss scenarios.Contact Nutanix Support if you have any doubts.\n\t\n\nMethod 1: Using approved script\n\nDownload and run KB-1540_clean_v10.sh to clean files from approved directories. Note: This script is NOT qualified to be used on Prism Central VM.\n\nFrom any CVM, run the following commands to download KB-1540_clean_v10.sh:\n\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tDeploy the script to a local CVM or all CVMs of the cluster:\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tExecute the script to clear files from approved directories.\n\tHelp\n\t\tnutanix@cvm$ allssh \"df -h /home\"\n\t\tInteractive mode\n\t\tnutanix@cvm$ allssh \"df -h /home\"\n\t\tNon-interactive mode\n\t\tnutanix@cvm$ allssh \"df -h /home\"\n\t\t\n\t\tNote: If the output of the script or its coloring looked incorrect, try to set the environment variable before running the script, or use \"--no_color\" option:\n\n\tnutanix@cvm$ allssh \"df -h /home\"\n\t\tInteractive mode\n\n\tMain menu\n\t\t\t\tAll PlansSwitch displaying plans (all / concerned)RescanRescan usages and update tableList AllList all targetted files/directoriesRun AllRun or Dry-run all plansExport AllExport all targetted files/directories to CSV##: Plan #Go to the plan item menu\n\t\t\t\tPlan item menu\n\t\t\t\tRescanRescan and update tableOperationChange operation (remove/shelter/etc.)ListList targetted files/directoriesDryrunDry-run this planRunRun this planExportExport targetted files/directories to CSV\n\t\t\t\t\n\tNon-interactive mode\n\n\tCommands\n\t\t\t\t$ ./nutanix_home_clean.py --scanScan and show the usages.$ ./nutanix_home_clean.py --list\t\t\t\t\t\t\t$ ./nutanix_home_clean.py --list=<##>List up the target files for all plans or specific plan-##.$ ./nutanix_home_clean.py --dryrun\t\t\t\t\t\t\t$ ./nutanix_home_clean.py --dryrun=<##>Dry-run all plans or specific plan-##.$ ./nutanix_home_clean.py --run\t\t\t\t\t\t\t$ ./nutanix_home_clean.py --run=<##>Run all plans or specific plan-##.\n\t\t\t\tAdditional Options\n\n\t\t\t\t--operation=<op>Choose operation (remove, shelter)--sheletedir= <path>Set the sheltering location.--yesFor skipping disclaimer and confirmation.\n\t\t\t\t\n\t Cleaning up after the troubleshooting\tThe downloaded script files, logs and exported files are expected to be removed manually after every troubleshooting. The total size of these files should be small and will not affect CVM's filesystem. 
You can remove the following files once the script becomes unnecessary.\t<yymmdd-hhmmss> is the creation date and time.\n\tIn the CVM where the KB script is deployed (/home/nutanix/tmp/):\t\tKB-1540_clean.sh - downloaded file from the KB\t\tdeploytool_yyyymmdd-hhmmss.log - deployment script's log (unnecessary after deployment)\t\tnutanix_home_clean.py - main KB script\t\tnutanix_home_clean_config.py - config file for the main scriptIn the rest of the CVMs in the cluster - if deployed to all CVM in step 2:\t\tnutanix_home_clean.py - main KB script\t\tnutanix_home_clean_config.py - config file for the main scriptEvery CVM where nutanix_home_clean.py is run:\t\tKB-1540_v10_yyyymmdd_hhmmss_nutanix_home_clean.log - KB script's log\t\tKB-1540_v10_yyyymmdd_hhmmss_export_*.csv - exported files (if exported)\n\tThe following command can remove all of the above:\n\n\tnutanix@cvm$ allssh \"df -h /home\"\n\t\n\nMethod 2: Manual method\n\nPLEASE READ: Only the files under the directories stated below are safe to delete. Take note of the specific guidance for removing files from each directory. Do not use any other commands or scripts to remove files. Do not use rm -rf under any circumstances.\n\nRemoving old logs and core files. Only delete the files inside the following directories and not the directories themselves.\n\n\t/home/nutanix/data/cores//home/nutanix/data/binary_logs//home/nutanix/data/ncc/installer//home/nutanix/data/log_collector//home/nutanix/prism/webapps/console/downloads/NCC-logs-*\n\n\tUse the following syntax for deleting files within each of these directories:\n\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tRemoving old ISOs and software binaries. Only delete the files inside the following directories and not the directories themselves.\n\tCheck the current running AOS version under \"Cluster Version\":\n\n\tnutanix@cvm$ allssh \"df -h /home\"\n\n\t/home/nutanix/software_uncompressed/ - The software_uncompressed folder is only in use when the pre-upgrade is running and should be removed after a successful upgrade. If you see a running cluster that is currently not upgrading, it is safe to remove everything within the software_uncompressed directory. Delete any old versions other than the version to which you are upgrading./home/nutanix/foundation/isos/ - Old ISOs of hypervisors or Phoenix./home/nutanix/foundation/tmp/ - Temporary files that can be deleted.\n\t\tUse the following syntax for deleting files within each of these directories:\n\n\t\tnutanix@cvm$ allssh \"df -h /home\"\n\t\t/home/nutanix/software_downloads/\n\t\tIf the files under the software_downloads directory are not required for any planned upgrades, remove them from Prism Web Console > Settings> Upgrade Software. Also check File Server, Hypervisor, NCC, and Foundation tabs to locate the downloads you may not require. The example below illustrates two versions of AOS available for upgrade, each consumes around 5 GB. Click on the 'X' to delete the files.\n\n\t\t\n\n\t\tUncheck the “Enable Automatic Download” option if it is checked. 
Left unmonitored, the cluster will download multiple versions, unnecessarily consuming space in the home directory.\n\t\t\n\tRe-check space usage in /home using df -h (see General Guidance of this article) to confirm that it is now below 70%.\n\nNote: If you cannot delete the files with the below error and space not claimed, contact Nutanix Support for assistance.\n\nnutanix@cvm$ allssh \"df -h /home\"\n\nImportant Notes for NC2 Clusters:It has been observed in some instances of NC2 clusters that /tmp gets close to full. You can follow the below steps to clean ~/tmp directory.\n\nSSH to the affected CVM and check the disk usage by running \"df -h\" command:\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tIn the above output, we can see /tmp is showing 100%. Change the directory to ~/tmp & sort them using 'sudo du -aSxh /tmp/* | sort -h'.\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tFrom the output you receive above, manually delete files that are bigger than 12K. For example, see below files deleted from the above output.\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tAfter deleting, you can check available free space using 'df -h':\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tAs you can see, available free space now shows 6%. You can further recheck with:\n\tnutanix@cvm$ allssh \"df -h /home\"\n\tor\n\n\tnutanix@cvm$ allssh \"df -h /home\"\n\t\n\nEven after cleaning up files from the approved directories, /home usage is still above the threshold\n\nContact Nutanix Support for assistance. Under no circumstances should you remove files from any other directories aside from those recommended by this article, as these may be critical to the CVM performance or may contain user data.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about AHV node removal stuck after successfully entering maintenance mode. ### Assistant: Node removal task for AHV host can get stuck if the the host has already entered maintenance mode before the node removal task was triggered.. Node removal task for the AHV host can be stuck if the host had already entered maintenance mode before the node removal task was kicked.Below are the symptoms and the methods to verify if you are hitting the same issue:\nAHV host has successfully entered maintenance mode. Connect to any CVM in the cluster and run the following command to check AHV host state:\n\nnutanix@cvm:~$ acli host.list\n\nExample output:\n\nnutanix@cvm:~$ acli host.list\n\nIf the host is schedulable, then True value is shown in the Schedulable column. Otherwise, False value is shown.Run the following command to check AHV host state:\n\nnutanix@cvm:~$ acli host.list\n\nExample output:\n\nnutanix@cvm:~$ acli host.list\n\nHere we can see that host is in maintenance mode.\n\nNode remove status for the host is MARKED_FOR_REMOVAL_BUT_NOT_DETACHABLE. Connect to any CVM in the cluster and run the following command to check the node removal status:\n\nnutanix@cvm:~$ acli host.list\n\nExample output:\n\nnutanix@cvm:~$ acli host.list\n\nNode is removed from the Cassandra ring. Connect to any CVM in the cluster and run the following command to check that the node has been removed from the Cassandra ring:\n\nnutanix@cvm:~$ acli host.list\n\nHere we can see that node has been removed from the Cassandra ring.\n\nNode is present in the Prism Hardware list with the status of \"Being removed\".. This is a known issue that is resolved in AOS 5.15.2 and above versions. 
Upgrade AOS to the latest supported version to avoid this issue. If you are currently affected by this issue, gather the information above and engage Nutanix Support to resolve it. When logging a support case with Nutanix Support on this issue, collect the log bundle using Logbay and upload it to the case directly via FTP/SFTP, or manually via the Support Portal. For more information on how to use Logbay, see KB 6691. To automatically upload the log bundle using Logbay, use the --dst (FTP/SFTP destination) and -c (support case number) options.\n\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NKE cluster scrubbing failed. ### Assistant: NKE clusters PreUpgradeChecks fail with message \"NKE cluster scrubbing failed\".. Trying to upgrade Kubernetes or a node OS-image might fail during the PreUpgradeChecks phase with the error message:\nPreUpgradeChecks: preupgrade checks failed: unable to accept the request. NKE cluster scrubbing failed, check error logs and rectify.\nThe failing precheck prevents the upgrade from starting.\n\n \n. Prism Central hosts the Karbon_core service. This service manages NKE clusters and performs a scrub operation on all registered NKE clusters when the service starts and/or is restarted. The scrub operations ensure all NKE cluster nodes are configured properly; for instance, if there are any proxy-related changes on the PE cluster hosting the NKE nodes, the Karbon_core service pushes the new proxy configuration to the NKE nodes and services. Scrub operations for a specific NKE cluster might not complete properly or might fail for various reasons. For example, if the NKE cluster is not healthy, the error message is logged to the karbon_core.out file on the PCVM. This log file should be inspected to understand why scrubbing has failed:\nnutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out\nNote: On a scale-out PC, the \"karbon_core.out\" log file should be inspected on the karbon_core service leader. Execute the following command to find the NKE/karbon_core service leader on a scaled-out PC deployment:\nnutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out\nFor example, karbon_core service on PCVM \"x.x.x.103\" is the service leader:\n\nnutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out\nCheck the karbon_core.out log file to see why the scrub operation has failed for the affected NKE cluster. Most of the time, this issue is not noticed until a K8s or OS-image upgrade is done.

This could be a long time since the karbon_core service was last restarted, and the logs would have been rotated already.To resolve this issue and after inspecting karbon_core.out log file for possible causes:\n\nEnsure the NKE cluster is reported as healthy, either in PC WebUI or using karbonctl command from PCVM:\thttps://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-clusters-tab-r.html\tOr via karbonctl:\thttps://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-logon-karbonctl-t.html\n\n\tnutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out\n\tRestart karbon_core service on all PCVMs; wait for 10 minutes till a new scrubbing operation is completed, then retry the upgrade task:\thttps://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-service-restart-t.html\nIf the above does not resolve the issue, contact Nutanix Support, collect Karbon_core logs from Prism Central and attach them to the case for further analysis. See KB-12133 for instructions on how to collect karbon/NKE logs. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Which Alerts Automatically Generate a Support Case with Nutanix Support. ### Assistant: This article provides details about who Nutanix contacts when a case is generated, how to change the contact, and which alerts generate cases.. When enabled, a Nutanix cluster automatically sends alerts to Nutanix Support.Alert cases are raised with Nutanix Support for a subset of alerts that have been identified as being important, so action can be taken on them.Most of the alert cases are created with a P4 priority level, which has a target response time of 24 hours. NodeDegraded and CassandraSSTableHealth are created with a P2 priority level, which has a target response time of two hours.Nutanix has also enabled an automatic first response email for most P4 alerts. If a primary Support Contact is listed for a node, the email is sent to that contact, with Nutanix Support following up on the email within the next two business days.Nutanix sends parts to the Parts Shipment Address listed for the node in the Nutanix Support Portal. It is important to keep this address up-to-date for timely dispatch of parts and prompt resolution of hardware issues.. Who is Contacted?Alert cases are handled by Nutanix Support, who reach out to the primary Support Contact (also referred to as the break/fix contact) associated with the node or cluster in question. For customers with a third-party providing front-line support, it is normal to have a representative from that company listed as the Primary Support Contact.\n\nTo Whom Are Parts Sent?By default, parts are sent to the Parts Shipment Address listed on the node's asset details in the Nutanix Support Portal. It is important to keep this address up-to-date for timely dispatch of parts and prompt resolution of hardware issues. 
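For the karbon_core service restart mentioned in the NKE resolution steps above, here is a hedged sketch of the usual PCVM commands, assuming the standard genesis-managed service names karbon_core and karbon_ui; confirm against the linked service-restart documentation before running them.

nutanix@PCVM:~$ genesis stop karbon_core karbon_ui
nutanix@PCVM:~$ cluster start
nutanix@PCVM:~$ genesis status | grep -i karbon    # verify the services report PIDs again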
Out-of-date addresses may result in parts being sent to the wrong address/person, which results in delays in receiving the parts and getting your system back to a healthy state.\n\nChanging the Contact and Parts Shipment AddressPerform the following to modify the break/fix contact in the Support Portal:\n\nGo to Assets > BlocksSelect the asset for which you want to modify the contact and/or Parts Shipment AddressClick Actions -> Update and specify the new contact/address details\n\nWhich Alerts Generate Cases?The following alerts generate alert cases.\n\nNotes:\n\nThe table is updated for the latest AOS and NCC releases; earlier releases do not send all of these alerts. Nutanix recommends keeping the NCC version update to the latest release. NCC can be updated independently from the AOS.Alert cases are generated only when the cluster is under an active support contract.You may need to log in to the Support Portal to view some of the KB articles below.As per FA-83, on March 1st, 2021, Nutanix has deprecated the automatic support case creation functionality via nos-alert/nos-asup email (SMTP) notifications in favour of automatic case creation via Insights Discoveries using Pulse.For performing Discovery-based case creation on Nutanix clusters, the minimum NCC requirement is 3.10.x.A1031 HomePartitionUsageHigh and A3038 FsInconsistencyCheck are temporarily disabled and do not generate support cases.\n\nCase Generating Alert Table\n\n \n\nNameAlert IDDescriptionTypeSFDC Case PriorityKBAlert SeverityAlert TypeMetadataDiskMountedCheckA101055Check that all metadata disks are mountedSWP4KB 4541CriticalNormalCassandraSSTableHealthA21021Health warnings detected in metadata serviceSWP2KB 7077CriticalCluster wideCuratorScanFailureA1081Curator scan has failedSWP4KB 3786CriticalCluster wideDiskSpaceUsageHighA1005Disk capacity is above 90%SWP4KB 3787CriticalCluster wideNodeDegradedA130087One or more critical processes on a node are not responding to the rest of the clusterSWP2KB 3827CriticalCluster wideStargateTemporarilyDownA1030The Stargate process on a node has been down for more than 3 hoursSWP4KB 3784CriticalCluster wideSATADomPECyclesErrorA1180The SATADOM has exceeded a wear thresholdHWP4KB 4137CriticalNormalECCErrorsLast1Day*A1187ECC errors over the last day have exceeded the one day thresholdHWP4KB 4116CriticalCluster wideECCErrorsLast10Days*A1188ECC errors over the last 10 days have exceeded the 10 day thresholdHWP4KB 4116CriticalCluster wideHardwareClockFailure*A1059Hardware Clock FailureHWP4KB 4120CriticalNormalPhysicalDiskBad*A1104A physical drive in the node has been reported as badHWP4KB 4158CriticalNormalDiskDiagnosticFailure*A1139Drive diagnostic has failed.HWP4KB 6287WarningNormalPowerSupplyDown*A1046One of the power supplies in the chassis has been reported downHWP4KB 4141CriticalCluster wideSystemTemperatureHigh*A1012A node in the cluster has an abnormally high temperatureHWP4KB 4138CriticalNormalSATADOMUnreachableA1177SATA DOM in the node cannot be reachedHWP4KB 7813CriticalNormalSATADOMFailedA1165SATA DOM in the node has failedHWP4KB 1850CriticalNormalShellvDisksAboveThresholdA1182Number of Shell vDisks in the cluster is above the thresholdSWP4KB 8559CriticalNormalIPMISELUECCCheckA15039Check for number of UECC errors for last one day in the IPMI SELHWP4KB 8885CriticalNormalLcmFamilyMismatchD3-0035Incorrect LCM family can cause upgrade failureSWP4KB 9898CriticalNormalM2BootDiskFailureA106042M.2 Boot Disk change checkHWP4KB 4874WarningNormalHwRaidCheckFailureA106053Host Hardware RAID 
status CheckHWP4KB 6627CriticalNormalDiskStatusCheckFailureA106061Disk status check for SATA disksHWP4KB 8094CriticalNormalDiskStatusCheckFailureA106062Disk status check for SAS disksHWP4KB 8094CriticalNormalDiskStatusCheckFailureA106063Disk status check for host boot disksHWP4KB 8094CriticalNormalDiskInodeUsageHighA1027High inode usage on this node may cause cluster performance to be significantly degraded.HWP2KB 1532CriticalCluster uuidCvmBootRaidDegradedA1107The node is operating wih a degraded boot RAID volume on the Controller VM.HWP2KB 2463CriticalCluster uuidStoragePoolSpaceUsageCheckA1128Space usage in the storage pool has exceeded the critical threshold (90% of usable space by default)SWP2KB 2475CriticalCluster uuidNvmeCheckFailureA1196The node is operating with a problematic NVMe driveHWP4KB 3751CriticalCluster uuidStargateMergedvBlockCrashD3-0075Stargate crashed due to merged vblock pointing to non-existing eidsSWP2KB 12431CriticalAlert group and cluster uuidMetadataDiskUnmountedFatalD6-004A metadata disk that is expected to be in-use is no longer mountedSWP4KB 4494CriticalAlert group and cluster uuidDiskRemovalStuckA130156External alert generated by Curator for when Disk removal process is stuck for a long timeSWP4KB 2159CriticalCluster uuidCassandraRingSkewThresholdD3-0114-ACluster performance at risk due to detected Cassandra metadata imbalance on 3 node clusterSWP4KB 1342CriticalCluster uuidCuratorExtentRegionInfoCrashD4-0015Curator has recently crashed due to an extent region having multiple vblocks pointing to the same egroupIDSWP2KB 13306CriticalCluster uuidCuratorMetadataInconsistencyFatalD4-0016Curator scans may crash due to metadata inconsistencySWP2KB 13779CriticalCluster uuidCuratorVblockMetadataInconsistencyFatalD4-0017Curator service has crashed due to inconsistent vblock reference in metadataSWP2KB 13860CriticalCluster uuidDiskOfflineD0-1044Node storage capacity potentially affected due to a drive marked offlineHWP4KB 8453CritcalCluster uuidStargateErasureCodingFTCrashD4-0018Stargate has recently crashed due to two EC strip members found on the same nodeSWP2KB 14286CriticalCluster uuid\n\nECC Alert Case Creation Logic \n\n\n\t\t\tECC Discovery Name\n\t\t\t\n\t\t\tPlatform\n\t\t\t\n\t\t\tAlert ID\n\t\t\t\n\t\t\tAdditional Logic\n\t\t\t\n\t\t\tCase Creation\n\t\t\t\n\t\t\tDiscovery Recommendation\n\t\t\t\n\t\t\tKB\n\t\t\t\n\t\t\tD0-1187-UECC\n\t\t\t\n\t\t\tNX platforms\n\t\t\t\n\t\t\tA15039\n\t\t\t\n\t\t\tn/a\n\t\t\t\n\t\t\tEnabled\n\t\t\t\n\t\t\tReplace DIMM\n\t\t\t\n\t\t\tKB 8885\n\t\t\t\n\t\t\tD0-1187-Generic\n\t\t\t\n\t\t\tNX platforms\n\t\t\t\n\t\t\tA1187 / A1188\n\t\t\t\n\t\t\tAlert msg could not be retrieved\n\t\t\t\n\t\t\tEnabled\n\t\t\t\n\t\t\tTroubleshoot DIMM alert\n\t\t\t\n\t\t\tKB 4116\n\t\t\t\n\t\t\tD0-1187-G4G5-BIOS\n\t\t\t\n\t\t\tG4/G5\n\t\t\t\n\t\t\tA1187 / A1188\n\t\t\t\n\t\t\tBIOS < G4G5\n\t\t\t\n\t\t\tDisabled\n\t\t\t\n\t\t\tUpgrade BIOS and BMC to latest version. Do not replace DIMM.\n\t\t\t\n\t\t\tKB 3357\n\t\t\t\n\t\t\tD0-1187-G4G5-DIMM\n\t\t\t\n\t\t\tG4/G5\n\t\t\t\n\t\t\tA1187 / A1188\n\t\t\t\n\t\t\tDIMM CECC error message and BIOS version >= G4G5\n\t\t\t\n\t\t\tEnabled\n\t\t\t\n\t\t\tReplace DIMM\n\t\t\t\n\t\t\tKB 3357\n\t\t\t\n\t\t\tD0-1187-G6G7-DIMM-BIOS\n\t\t\t\n\t\t\tG6/G7\n\t\t\t\n\t\t\tA1187 / A1188\n\t\t\t\n\t\t\tDIMM CECC error message and BIOS version < 41.x\n\t\t\t\n\t\t\tEnabled\n\t\t\t\n\t\t\tReplace DIMM. 
Upgrade BIOS to latest version\n\t\t\t\n\t\t\tKB 9137\n\t\t\t\n\t\t\tD0-1187-G6G7-BIOS\n\t\t\t\n\t\t\tG6/G7\n\t\t\t\n\t\t\tA1187 / A1188\n\t\t\t\n\t\t\tDIMM CECC error message and BIOS = 41.x\n\t\t\t\n\t\t\tDisabled\n\t\t\t\n\t\t\tUpgrade BIOS and BMC to latest version. Do not replace DIMM.\n\t\t\t\n\t\t\tKB 9137\n\t\t\t\n\t\t\tD0-1187-G6G7-DIMM\n\t\t\t\n\t\t\tG6/G7\n\t\t\t\n\t\t\tA1187 / A1188\n\t\t\t\n\t\t\tDIMM CECC error message and BIOS >= 42.x\n\t\t\t\n\t\t\tEnabled\n\t\t\t\n\t\t\tReplace DIMM\n\t\t\t\n\t\t\tKB 7503\n\t\t\t\n\t\t\tD0-1187-hPPR-Fail\n\t\t\t\n\t\t\tG6/G7\n\t\t\t\n\t\t\tA15040\n\t\t\t\n\t\t\tn/a\n\t\t\t\n\t\t\tEnabled\n\t\t\t\n\t\t\tReplace DIMM\n\t\t\t\n\t\t\tKB 7503\n\t\t\t\n\t\t\tD0-1187-hPPR\n\t\t\t\n\t\t\tG6/G7\n\t\t\t\n\t\t\tA1187 / A1188\n\t\t\t\n\t\t\tDIMM RAS event error message and BIOS >= 42.x\n\t\t\t\n\t\t\tDisabled\n\t\t\t\n\t\t\tReboot host for hPPR (Hardware Post Package Repair) feature\n\t\t\t\n\t\t\tKB 9137\n\t\t\t\n\n \n\nPSU Alert Case Creation Logic \n\n\n\t\t\tECC Discovery Name\n\t\t\t\n\t\t\tPlatform\n\t\t\t\n\t\t\tAlert ID\n\t\t\t\n\t\t\tAdditional Logic\n\t\t\t\n\t\t\tCase Creation\n\t\t\t\n\t\t\tDiscovery Recommendation\n\t\t\t\n\t\t\tKB\n\t\t\t\n\t\t\tD0-1046\n\t\t\t\n\t\t\tAll\n\t\t\t\n\t\t\tA1046\n\t\t\t\n\t\t\tn/a\n\t\t\t\n\t\t\tNX Platforms Only\n\t\t\t\n\t\t\tReplace PSU\n\t\t\t\n\t\t\tKB 7386\n\t\t\t\n\t\t\tD0-1046-ENV\n\t\t\t\n\t\t\tAll\n\t\t\t\n\t\t\tA1046\n\t\t\t\n\t\t\tPSU State value = 0x0 or 0xb\n\t\t\t\n\t\t\tDisabled\n\t\t\t\n\t\t\tConfirm the power supply is connected to its power source\n\t\t\t\n\t\t\tKB 7386\n\t\t\t\n\n* This is an auto-first response eligible alert.  Where a primary Support Contact is configured, an initial email will be sent.\n\nYou can find more information about automatic alerts and the Pulse functionality in Alert and Event Monitoring of the Prism Web Console Guide for your AOS release.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: cvm_startup_dependency_check. ### Assistant: Controller VM (CVM) may fail to start after the host is rebooted. The NCC health check cvm_startup_dependency_check determines whether any problems regarding CVM bootup are likely to occur upon the host reboot.. If an ESXi host is restarted, the Controller VM (CVM) may fail to start. The NCC health check cvm_startup_dependency_check determines whether any problems regarding CVM bootup are likely to occur when the host is restarted.\n\nThis check runs on ESXi clusters only and verifies the presence of any startup dependencies that may result in a CVM not starting after the host reboot.\n\nThe check examines the contents of /etc/rc.local.d/local.sh (on the ESXi host) and makes sure it is intact and not missing anything that might be important to start the CVM after the host restarts.\n\nNote: In ESXi 5.1+, the rc local script is /etc/rc.local.d/local.sh while in ESXi 5.0, it is /etc/rc.local.\n\nThe cvm_startup_dependency_check consists of the following:\n\nPYNFS dependency checklocalcli check - checks if esxcli command is used in the rc local script. 
(Deprecated in NCC version 3.10.0.)vim command check - checks if \"vim-cmd vmsvc/power.on\" command is present in the rc local script.Autobackup check - checks if /sbin/auto-backup.sh has been run successfully.Network adapter setting check - checks if the network adapter is set to connect on power on.EOF check - checks if there is an \"EOF\" line at the end of the rc local script.RC local script exit statement present - checks if there is a top-level exit statement in the rc local script..dvsData directory in local datastore - checks if .dvsData directory is present on pynfs mounted local datastore and if it is persistent.Svmboot mount check - checks if svmboot.iso is present on mounted local datastore.\n\nBased on the outcome of the above checks, the result is either PASS, INFO, FAIL or ERR.\n\nRunning the NCC CheckYou can run this check as part of the complete NCC Health Checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr you can run this check individually:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is not scheduled to run on an interval.This check will not generate an alert.\n\nSample outputFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: INFO\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID106436DescriptionCheck that /sbin/auto-backup.sh has run successfullyCauses of failure/sbin/auto-backup.sh has not run successfully.ResolutionsMake sure '/bootbank/state.tgz' has a newer timestamp.ImpactCVM may not boot up after the host reboot.\n\nCheck ID106439DescriptionCheck that .dvsData directory is present on pynfs mounted local datastoreCauses of failure.dvsData directory is not persistent yet.ResolutionsCheck if .dvsData directory exists in the local datastore.ImpactCVM may not boot up after the host reboot.\n\nCheck ID106437DescriptionCheck that there is no line EOF at end of RC local scriptCauses of failureEOF statement at end of RC localResolutionsCheck that 'local.sh' does not have 'EOF'ImpactCVM may not boot up after the host reboot.\n\nCheck ID106438DescriptionCheck that top level exit statement is not present in RC local scriptCauses of failureTop-level exit statement present in script RC local preventing script lines from being run.ResolutionsCheck if 'local.sh' has an 'exit' statement. 
Generate INFO if the exit statement is NOT within the if..fi statementImpactCVM may not boot up after the host reboot.\n\nCheck ID103068DescriptionCheck Network adapter settingCauses of failureNetwork adapter is not set to not connect on power onResolutionsCheck if ethernet0.startConnected =true is present in the CVM's .vmx file.ImpactCVM may not boot up after the host reboot.\n\nCheck ID106431DescriptionCheck PYNFS dependencyCauses of failurePYNFS is in use and is not present.ResolutionsValidate PYNFS configuration.ImpactCVM may not boot up after the host reboot.\n\nCheck ID106433DescriptionCheck that vim-cmd vmsvc/power.on command is present in the local scriptCauses of failurevim-cmd vmsvc/power.on\\ command not present in local script.ResolutionsCheck if 'vim-cmd vmsvc/power.on' entry is present in 'local.sh'.ImpactCVM may not boot up after the host reboot.\n\nCheck ID106440DescriptionChecks if CVM ISO is on mounted local datastoreCauses of failureServiceVM_Centos.iso is not on the mounted local datastore.ResolutionsCheck if ServiceVM_Centos.iso exists in the local datastore.ImpactCVM may not start after the host reboot.. Find your error message in the table below and perform the corresponding recommended actions.\n\nFailed checkError messagesRecommended actionsPYNFS dependency check\"PYNFS is in use and is not present\"If pynfs is in use, make sure pynfs VIB is present and if pynfs.tar.gz is referenced in /etc/rc.local.d/local.sh, make sure it is present and readable.vim command check\"\"vim-cmd vmsvc/power.on\" command not present in local script\"Make sure 'vim-cmd vmsvc/power.on' entry is present in /etc/rc.local.d/local.sh.Autobackup check\"/sbin/auto-backup.sh has not run successfully\"Check whether /sbin/auto-backup.sh has been run successfully by confirming that /bootbank/state.tgz is newer than /etc/rc.local.d/local.sh.\t\t\tThe NCC check fails if /bootbank/state.tgz is not newer than /etc/rc.local.d/local.sh.\t\t\tSample result for the failure scenario:\n\t\t\tnutanix@cvm$ hostssh ls -al /bootbank/state.tgz\n\n\t\t\tnutanix@cvm$ hostssh ls -al /bootbank/state.tgz\n\t\t\tNetwork adapter setting check\"Network adapter is set to not connect on power on\"Make sure the CVM VMX file does not have the setting \"ethernet0.startConnected\". 
If it does, make sure it is set to \"true\".EOF check\"\"EOF\" statement at end of rc local\"Make sure /etc/rc.local.d/local.sh does not have an 'EOF' line..dvsData directory in local datastore\".dvsData directory is not persistent yet\"If .dvsData directory exists in the local datastore, make sure it is persistent.Svmboot mount check\"Failed to find ServiceVM*.vmx\"\t\t\t\"Invalid vmx configuration\"\t\t\t\"ServiceVM_Centos.iso is not mounted\"\t\t\t\"ServiceVM_Centos.iso missing in vmx configuration\"\n\t\t\tMake sure /vmfs/volumes/NTNX-local-ds*/*/ServiceVM*.vmx exists, is readable and its contents are valid.Make sure /vmfs/volumes/NTNX-local-ds-XXXXXXX-A/ServiceVM_Centos/ServiceVM_Centos.iso is present in the CVM's .vmx file and is mounted.Make sure ServiceVM_Centos.iso is checked to start connected.Make sure the local datastore name is not changed to a non-default from NTNX-local-ds*.(non-configurable vSphere components)Rename ESXi local datastore name to default format of NTNX-local-ds* from vCenter server and run the check again.Make sure references to files ide0:0.fileName = \"ServiceVM_Centos.iso\" and serial0.fileName=\"ServiceVM_Centos.0.out\" are not showing a full path like \t\t\t\tide0:0.fileName = \"/vmfs/volumes/5a3a73f2-59cb5028-8a34-0cc47a9bc41e/ServiceVM_Centos/ServiceVM_Centos.iso\" , only the files should appear.Make sure memsize and shed.mem.minsize are not using a capital S at the 'size' word.\n\t\t\t\n\nDo not restart the host or trigger any rolling reboot activity like CVM Memory upgrade or AOS upgrade until the check passes or Nutanix Support confirms that a restart will not cause any problem with the CVM bootup.\n\nIn case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Changing the IPMI password via ipmitool fails with the \"event not found\" error. ### Assistant: Changing the IPMI password via ipmitool may fail with the \"event not found\" error if the password contains special characters like \"!\".. Changing the IPMI password via ipmitool may fail with the \"event not found\" error if the password contains special characters like \"!\".\n\nnutanix@cvm:~$ hostssh ipmitool user set password 2 abcdef!xyz. Special characters like \"!\" are interpreted by bash resulting in command failures.Use one of the following approaches to resolve this issue:\nIf using a command with hostssh, enquote the complete command:\n\tnutanix@cvm:~$: hostssh 'ipmitool user set password 2 <password>'\n\tWhen running the command directly on the AHV host, enquote the password:\n\tnutanix@cvm:~$: hostssh 'ipmitool user set password 2 <password>'\n\tChanging it using IPMI GUI. Example for Nutanix hardware: Configuration tab -> Users -> Choose User -> Modify.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix Move - FAQ (Frequently Asked Questions). ### Assistant: Nutanix Move is a tool used to migrate VMs to Nutanix AHV by automating the process of installing VirtIO and retaining the IP as well as the MAC address on the AHV side. This article contains Frequently Asked Questions (FAQ) on Nutanix Move.. 
The Nutanix Move User Guide is present on the Nutanix Support Portal, and we advise you to review the docs to understand the product workflow.\n\nBelow are Frequently Asked Questions on Nutanix Move.. What is Nutanix Move and why do we need it?\n\tNutanix Move is a tool used to migrate VMs to Nutanix AHV by automating the process of installing VirtIO and retaining the IP as well as the MAC address on the AHV side. We need Move to take care of all the hassle of copying the vdisks one by one and loading to AHV using Image Configuration and installing VirtIO on the machines manually and keeping a note of the IPs as well.\n\n\t \n\tMovements from which all hypervisors to AHV is supported?\n\tMove 1.x and Move 2.x supports migration from ESXi to Hyper-V.\n\n\tMove 3.x supports migration from Hyper-V to AHV, AWS to AHV, and ESXi to ESXi (on Nutanix).\n\n\t \n\tWhat are the Port requirements for Move?\n\tRequirements for ESXi migration:\n\n\tESXi hosts should be reachable from Nutanix Move on ports TCP 443 and TCP 902.vCenter should be reachable from Nutanix Appliance on ports TCP 443.Allow ports (TCP and UDP) 2049 and 111 between the Nutanix Move network and the AHV CVM network.\n\n\tRequirements for Hyper-V Migration:\n\n\tWinRM-HTTPS: 5986WinRM-HTTP: 5985RDP: 3389 (only for inbound)SSH: 22\n\n\t \n\tWhich guest OSes are supported for Migration?\n\tSupported guest OSes for ESXi to AHV migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-support-os-r.html\n\n\tSupported guest OSes for ESXi to ESXi migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-support-guest-os-esxi-to-esxi-r.html\n\n\tSupported guest OSes for AWS to AHV migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-supported-os-aws-r.html\n\n\tSupport guest OSes for AWS to ESXi migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-supported-os-aws-esxi-r.html\n\n\tSupported guest OSes for Hyper-V to AHV migration: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v3_6:v36-supported-os-hyperv-r.html\n\n\tHow to Migrate Guest OSes using Data only Migration: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-perform-data-only-migration.html\n\n\t \n\tWhat are the Default Credentials for Nutanix Move?\n\tDefault credentials for Nutanix Move GUI:\n\n\tUsername: nutanixPassword: nutanix/4u\n\n\tDefault credentials for Nutanix Move CLI:\n\n\tUsername: adminPassword: nutanix/4u\n\n\tNote: The admin user on Move does not have all the major rights so the best way is to change the user to root using the rs command.\n\n\t[admin@nutanix-move]$ rs\n\n\t \n\tHow to Collect log bundle on Nutanix Move?\n\tTo download the latest support bundle: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_4:v34-support-bundle-c.html#nconcept_uxw_ppz_bgb\n\n\t \n\tHow can I upgrade Nutanix Move?\n\tTo upgrade Nutanix Move using the 1-click method, dark site upgrade or CLI based upgrade: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v3_6:v36-upgrade-management-c.html#nconcept_byq_nnl_b3b\n\n\t \n\tDoes Nutanix Move support migration of Exchange Server?\n\tExchange should be migrated by installing newer versions of Exchange Server in parallel with existing production environments, then move user mailboxes from the old system to the new one.\n\n\tRefer migration limitation: 
https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-migration-limitations-r.html\n\n\t \n\tDoes Nutanix Move support migration of MySQL / Oracle database servers?\n\tNutanix Move supports migrations for smaller DBs with a lesser workload. But if you are looking at a heavy database VM, then we recommend performing the best practices offered by Microsoft and Oracle. Adding to this, there are Nutanix Best Practices for such workloads to run with optimized performance, hence we recommend you perform cloning of the role to a newly created VM on AHV rather than migrating the VM using Move.\n\n\tYou can refer to this portal document pointing to the limitations of Nutanix Move: https://portal.nutanix.com/#/page/docs/details?targetId=Nutanix-Move-v3_6:v36-migration-limitations-r.html\n\n\tBest Practices Guide for Oracle on AHV: https://portal.nutanix.com/#/page/solutions/details?targetId=BP-2073-Oracle-on-AHV:BP-2073-Oracle-on-AHV\n\n\tBest Practices Guide for MySQL server on Nutanix: https://portal.nutanix.com/#/page/solutions/details?targetId=BP-2056-MySQL-on-Nutanix:BP-2056-MySQL-on-Nutanix\n\n\t \n\tCan the VMs be migrated if they are powered off?\n\tYes, the VMs can be migrated if they are powered OFF. However, they will be Powered ON to install the necessary drivers if the “Automatic Preparation Mode” is selected. If you choose \"Manual Preparation Mode\", then you will need to copy the necessary scripts to the VMs and run them manually regardless of their power state. Either case, there will be a minimized downtime when you click \"Cutover\" which will power off the VMs on the source and power them on on the target.\n\n\t \n\tWhat are the supported hypervisors we can migrate from?\n\tMove supports migration from:\n\n\tESXi to AHVHyper-V to AHVAWS to AHVESXi to ESXi (on Nutanix)\n\n\tSupported Versions are listed here\n\n\t \n\tHow can I change the IP address of my Move VM?\n\tTo assign a static IP address to Move, check this link\n\n\t \n\tHow many VMs can be migrated from Move in one go?\n\tThere is no limit on the number of VMs that can be migrated in one go. However, the default limit for running parallel migrations is 32. The rest would be in the “Queued for Migration” state.\n\n\t \n\tAre there logs created on VM by Move?\n\tYes, there are logs injected on the VMs when Move starts injecting VirtIO and IP retention scripts.\n\n\tOn Windows, it can be found in:\n\n\t[admin@nutanix-move]$ rs\n\n\tOn Linux, it can be found in:\n\n\t[admin@nutanix-move]$ rs\n\n\t \n\tWhat are the common user permissions needed on the Windows side if using a Domain administrator or local Administrator?\n\tCheck 3 things for any user on Windows VM to be qualified for Move to use:\n\n\tUsers must have \"Logon as Batch Job\" enabled.UAC must be disabled (for the local administrator, it is usually disabled)Users must have \"Restore files and directories\" rights.\n\n\t \n\tWhat are the common user permissions needed on the Linux side if using root user?\n\tBelow are the configuration settings to look out for when migrating a Linux VM using credentials.\n\n\tEnsure ssh localhost using only password authentication, i.e. 
set “PasswordAuthentication yes” in sshd_configEnsure using sudo, user assume root privileges without entering a passwordEnsure read, write and execute permissions for /tmp and /opt and /boot foldersroot access, sshd service running\n\n\t \n\tWhy do we need to install VirtIO to the machines when performing the migration to AHV?\n\tNutanix VirtIO needs to be installed on the VMs before migrating to AHV as its functionality is similar to VMware Tools in ESXi. VirtIO helps in detecting the disks when booting up with SCSI controllers on an AHV cluster. You might run into issues post-migration where VMs fail to boot up (Windows machines might run into \"Boot device not found\" and Linux machines might show \"boot partition was not detected\").\n\n\tMake sure you enter the correct credentials to the VM so that Move can take care of the installation of VirtIO OR you can install the VirtIO drivers manually, which are available from the Nutanix Portal in the AHV / VirtIO download section.\n\n\tTo manually install VirtIO on a Linux VM, follow the article Migrating Linux VMs to Acropolis.\n\n\tIf the VM has VirtIO installed on the VM, check if the correct boot device is selected on Prism for the VM to boot.\n\n\t \n\tWhat is the maximum waiting time for a VM to be left in Ready to Cutover state?\n\tWe recommend performing the cutover within 1 week post the initial Data Seeding completion of the VM because Nutanix tests the operation at those conditions. You can refer to the document here for further reference.\n\n\tNote that the cutover operation needs user invention as the VM will be powered off on the source end and powered up on the target cluster. So, make sure that no users are impacted during this operation and after a green signal from all the concerned parties for the VM, only then proceed to select the VM and hit the \"Cutover\" button on the UI.\n\n\t \n\tHow is migration impacted if my source and target cluster is located in physically different locations?\n\tWith high-speed Internet and no bottlenecks, the migration will run smoothly over WAN. Keep a close check if the migration is being performed during production hours so that there are no performance impacts.\n\n\t \n\tWhen creating a new Move, we see multiple IPs on Prism UI?\n\tIf you have a DHCP server configured on the cluster when Move VM is powered on, DHCP will assign a temporary IP to the server. This only pops up on the UI (Prism behavior). You can ignore it and proceed to add static IP to the Move VM. Note that it will take some time for the DHCP server IP to vanish from the UI. Make sure that the Move VM is reachable over the static IP and the UI can load over the static IP configured (http://<static_move_ip>).\n\n\t \n\tI have a DHCP server in my environment but the Move VM migration is failing / Move VM gets an APIPA IP. How can I overcome this?\n\tMake sure you add the Move VMs MAC address as \"trusted\" on your DHCP server. You can also check if the Move VM is being assigned new IP every minute/hour and make the appropriate change to lease the IP for VM. To avoid running into such conditions, you can configure a static IP for the VM.\n\n\t \n\tIf I have a UEFI enabled VM on the source cluster, will it be supported for migration using Move?\n\tNutanix Move supports UEFI enabled VM migration from Move 3.3 and AOS 5.11 onwards. However, it is yet to support Secured Boot on UEFI.\n\n\t \n\tI see VMware Tools installed on the VM post-migration. 
Should I delete it?\n\tYes, you can proceed to delete VMware Tools post-migration from the AHV cluster. Note that if the migration was from a non-Nutanix ESXi cluster to a Nutanix ESXi cluster, the VMs might BSOD due to VMware Tools being outdated. You can re-install VMware Tools to the VM so that the disks are available on the OS level.\n\n\t \n\tCredentials for source/target cluster changed during the migration. Now, migration is failing due to incorrect username/password. What should be done?\n\tBelow are workarounds that can be followed:\n\n\tEnter the new password on the target/source cluster by hovering on the 3 dots -> Edit -> enter username/password. Restart all the services on the Move VM (this will mark all the ongoing migrations as failed so be careful when rebooting the VM)Change the password back to the old password and restart all the services on Move (this will mark all the ongoing migrations as failed so be careful when rebooting the VM).\n\n\tNutanix Engineering is aware of this issue and is working on improving the workflow when there is a change in the passwords.\n\tDoes Move Support Special Characters in VM Passwords?\n\n           Move does support using special characters in the VM passwords, all the special characters are supported. However, Language specific special characters used in password are not supported on Move.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Connecting the network adapter using ESX command line. ### Assistant: Fixing Virtual Machine network issue from command line.. If a UVM or CVM is powered on but not available on the network via ping or SSH, the network adapter might not be connected.. Use the following commands on the Node to determine if the network adapter is not connected and re-connect if needed.\n\nList all VMs on the Node. Note the VMID of the VM you are trying to reach (in this example VMID 8):\n\troot@esx# vim-cmd vmsvc/getallvms\n\n\troot@esx# vim-cmd vmsvc/getallvms\n\tIdentify the device information. Verify that the network is connected. In this case, it is not.\n\troot@esx# vim-cmd vmsvc/getallvms\n\tFind the DEVICEID for the NIC card that is connected to the local Network. In this case the DEVICEID is 4000\n\troot@esx# vim-cmd vmsvc/getallvms\n\tFrom the above command, we can verify that the vNIC is connected = false.  This needs to be fixed, as follows.\n\troot@esx# vim-cmd vmsvc/getallvms\n\twhere Z is true (1; enable) or false (0; disable\n\n\tFor example:\tEnable the network:\n\n\troot@esx# vim-cmd vmsvc/getallvms\n\n\tDisable the network:\n\n\troot@esx# vim-cmd vmsvc/getallvms\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about VM operations may fail with \"Invalid timezone: US/Pacific-New\" error on AHV clusters after upgrading AOS to 5.15.5, 5.19.0.5, or newer. ### Assistant: VM operations may fail with \"Invalid timezone: US/Pacific-New\" error on AHV clusters after upgrading AOS to 5.15.5, 5.19.0.5, or newer. 
The following operations may fail with the \"Invalid timezone: US/Pacific-New\" error on AHV clusters after upgrading AOS to 5.15.5, 5.19.0.5, or newer.\nVM cloning.Changing VM settings (except changing timezone).Restore from the protection domain.Restore from VM snapshot.Creation of new VMs with Pacific-New timezone.\nSample events from /home/nutanix/data/logs/uhura.out:\n\n2021-12-08 14:08:01,434Z ERROR vm_register.py:535 Failed to register VM. Error code: kUnknownError\nTo confirm that VM is affected, check \"Timezone\" in the VM properties in Prism UI.Alternatively, you can run the following command on any CVM in the cluster:\n2021-12-08 14:08:01,434Z ERROR vm_register.py:535 Failed to register VM. Error code: kUnknownError\nSample output:\n\n2021-12-08 14:08:01,434Z ERROR vm_register.py:535 Failed to register VM. Error code: kUnknownError. The Linux community has deprecated the \"US/Pacific-New\" timezone, and hence it was removed from CVM starting from 5.15.5, 5.19.0.5. As a result, if any VMs were using this timezone and VM snapshots or protection domain snapshots were created before an AOS upgrade, then post-upgrade symptoms described above may be experienced.Perform the following actions to fully resolve this issue.VMs that are currently using the \"US/Pacific-New\" timezoneChange the timezone to a different one. This action can be done via Prism UI or acli. After that, all VM operations should continue to work.Sample acli command:\nnutanix@cvm:~$ acli vm.update <vm_name> hwclock_timezone=<timezone>\nVMs that fail to be restored from VM or protection domain snapshotsStarting from AOS 5.20.4, and AOS 6.1.1 DR workflows are not validating timezone configuration and as a result, VM restore from the protection domain is allowed. No changes are done to restore from the VM snapshot workflow.Perform the following steps on all CVMs in the cluster:\nCopy Pacific to Pacific-New timezone file:\n\tnutanix@cvm:~$ acli vm.update <vm_name> hwclock_timezone=<timezone>\n\tUpdate permissions:\n\tnutanix@cvm:~$ acli vm.update <vm_name> hwclock_timezone=<timezone>\n\tOnce restore is completed, update the timezone of restored VMs and remove created files:\n\tnutanix@cvm:~$ acli vm.update <vm_name> hwclock_timezone=<timezone>\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about VMs do not automatically restart if an AHV host crashed while entering maintenance mode. ### Assistant: If an HA event happens during the process of entering maintenance mode, the VMs that should be restarted on other hosts will remain turned off. If an HA event happens on the host during the process of entering the same host into maintenance mode, the VMs that should be restarted on other hosts will remain turned off. To confirm the issue the acropolis.out log can be checked on the node that was the acropolis leader at that time of the crash. \nCompare the output from the time (edit the correct year-month-day) before the crash with the UUID from ncli host ls.  
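For reference, the host name to UUID mapping referred to above can be listed from any CVM; the grep filter below is only for readability and can be omitted:\n\nnutanix@CVM:~$ ncli host ls | grep -E 'Name|Uuid'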
\n\nnutanix@CVM:~$ allssh \"zgrep 'Current master is' /home/nutanix/data/logs/acropolis.out*\" | grep 'YYYY-MM-DD'\n\nNOTE: Depending on the AOS version, the time is in UTC (AOS 5.20 and above), while older versions use the local cluster time zone.\nThe issue happens because the segment timestamp value is first increased as part of the migration process while entering maintenance mode, but because the migration fails, the HA process increases this value again. That results in the value being incremented twice and creates a mismatch that causes the VM start task to fail. Such VMs will remain in the shutdown state.\nThe sequence of events is as follows:\n\nThe VM migration starts for a VM as part of the maintenance mode request:\n\nnutanix@CVM:~$ allssh \"zgrep 'Current master is' /home/nutanix/data/logs/acropolis.out*\" | grep 'YYYY-MM-DD'\n\nBut because the host crashed, the migration task fails:\n\nnutanix@CVM:~$ allssh \"zgrep 'Current master is' /home/nutanix/data/logs/acropolis.out*\" | grep 'YYYY-MM-DD'\n\nThe VMForcePowerOff task is initiated as part of the HA process:\n\nnutanix@CVM:~$ allssh \"zgrep 'Current master is' /home/nutanix/data/logs/acropolis.out*\" | grep 'YYYY-MM-DD'\n\nLater, as part of the HA process, the VM is attempted to be started on another host:\n\nnutanix@CVM:~$ allssh \"zgrep 'Current master is' /home/nutanix/data/logs/acropolis.out*\" | grep 'YYYY-MM-DD'\n\nBut the VM start process fails because the segment timestamp value does not match the requested value:\n\nnutanix@CVM:~$ allssh \"zgrep 'Current master is' /home/nutanix/data/logs/acropolis.out*\" | grep 'YYYY-MM-DD'. This issue is resolved in:\n\nAOS 6.x family (STS): AOS 6.1.1\n\nPlease upgrade AOS to versions specified above or newer.\nWorkaround\nThe affected VMs should be started manually either via Prism or acli.\n\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Intel X710 10G embedded LLDP agent dropping LLDP packets before they reach Linux kernel. ### Assistant: Intel X710 10G embedded LLDP agent dropping LLDP packets before they reach Linux kernel. Some Intel cards, including Intel X710, have an embedded LLDP agent that blocks LLDP packets before they can reach the kernel. This means that tcpdump will not show any inbound or outbound LLDP packets. Also, the stats will remain at 0 in AHV lldpcli: although LLDP frames are sent to the switch, nothing is received because these frames are dropped at the NIC driver level. \nCheck NIC type:\n\n[root@ahv ~]# lspci -nn | grep -i 'Ethernet Controller'\n\nUse lldpcli to check statistics:\n\n[root@ahv ~]# lspci -nn | grep -i 'Ethernet Controller'. AHV\nThis issue is resolved in AHV 20170830.265, which is bundled with AOS 5.10.3.2. 
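To confirm which AHV version the hosts are currently running before planning the upgrade, one option is to read the AHV release file on every host from a CVM (shown here only as an illustration; the version string format varies by release):\n\nnutanix@cvm$ hostssh cat /etc/nutanix-release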
Please consider upgrading AHV to the latest version to resolve this issue.ESXiThe latest native driver (i40en 1.5.6) and the latest firmware (6.0.1) provide a method to disable it on ESXi only by passing a parameter to the driver module.To view the Intel system module parameters currently in place, run either of the following commands from the host:\nesxcli system module parameters list -m i40en\n\nTo change the Intel system module parameter for LLDP, run the following command\n\nesxcli system module parameters list -m i40en\n\nReboot the ESX/ESXi host for the configuration change to take effect.For Intel card type XXV710 specifically, the firmware needs to be upgraded to version 8.50See KB-13085 for additional references for XXV710 NICs.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Prism stats not shown when ESXi SR-IOV NIC is used on VMs. ### Assistant: CPU and Memory stats not shown on Prism for ESXi hosts when VMs on them use SR-IOV NIC. Single Root I/O Virtualization (SR-IOV) NIC adapter is supported from VMware vSphere 5.X onward. In General, the SR-IOV adapter is used for VMs that are latency sensitive or require more CPU resources, but AOS doesn't support the NIC adapter type. Due to this, the Prism will not show CPU and Memory usage statistics for VMs with SR-IOV NIC adapter(s) running on ESXi hosts.Prism will not show CPU and Memory usage statistics for ESXi hosts when one or more VMs on these hosts are using SR-IOV NIC(s).One or more VM stats are not shown on Prism . This issue is resolved in:\nAOS 5.20.X family (LTS): AOS 5.20.3AOS 6.0.X family (STS): AOS 6.1.1\nPlease upgrade AOS to versions specified above or newer.Apart from the Prism stats not being available (as shown above), no other issues are seen with functionality.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Windows installation fails with \"No signed device drivers were found\" error. ### Assistant: Windows installation fails with \"No signed device drivers were found\" error if incompatible drivers are selected.. While trying to load storage controller drivers during the installation of Windows on AHV following error message might be displayed:\nNo signed device drivers were found. Make sure that the installation media contains the correct drivers, and then click OK.\n\n \n. This issue may happen if the wrong folder with drivers was selected. For example:\nFolder with OS name was selected (Windows Server 2016 for example). Please make sure to select one of the folders inside the directory with OS name, that represent OS architecture (amd64 in case of Windows Server 2016).x86 folder was selected while you are installing a 64-bit operating system. Or vice-versa.\nNote: x86 folder is no longer included in VirtIO ISO package from Nutanix VirtIO 1.1.6 and above.Sample folder structure with amd64 folder highlighted:Only drivers from amd64 folder should be used starting from Windows Server 2008 R2 as only 64-bit editions of server operating systems are released by Microsoft. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about VirtIO drivers may be removed after OS generalization (sysprep). 
### Assistant: Sysprep tool may remove Nutanix VirtIO drivers from OS during generalization/customization process. After generalizing Windows Server 2012 R2 VM by using sysprep tool for creating VM template, latest VirtIO drivers may be removed from the VM and an older version is used. The behavior was noticed with NetKVM driver but can occur with any VirtIO driver.\n\nCheck the version of device driver version inside the guest OS from the property information. And verify the driver version included in Nutanix VirtIO Driver.Installed device driver version (driver date) might be older than expected version installed with Nutanix VirtIO Driver.See KB 5491 for the matrix of Nutanix VirtIO package versions and the driver version of each component.Sample screenshot of driver properties\n\n. Workaround \n\nInstall the latest Nutanix VirtIO driver again on each deployed VM, orCreate a template VM image by generalization without removing installed device driver.\n\nGeneralizing VM image without removing the device driver\n\nSysprep may generalize VM image with maintaining driver configuration by adding Microsoft-Windows-PnpSysprep/PersistAllDeviceInstalls setting in Sysprep Answer File (XML).The following article explains the instructions in detail:\n\nMaintain Driver Configurations when Capturing a Windows ImageOther References:\n\nWindows ADK (Windows Assessment and Deployment Kit) (Download page)Windows System Image Manager tool is required for authoring Sysprep Answer File which is packaged in Windows ADK.See Installing the Windows ADK (Instruction).. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: sata_dom_wearout_check. ### Assistant: The NCC health check sata_dom_wearout_check checks the wear out status of SATA DOM via SMART data. This article also covers the alert: A1180 Critical ... SATA DOM on host X.X.X.200 has PE cycles above 4500 or PE cycles above 3000 and daily PE cycles above 15.. The NCC health check sata_dom_wearout_check checks the wear out status of SATA DOM via SMART data.\n\nThe SATA DOM (Disk on Module) is a small flash memory module designed to be conveniently inserted into a server board SATA connector to provide high-performance solid-state storage capacity that simulates a hard disk drive (HDD). Each Nutanix node contains a SATA DOM on the server board. The SATA DOM serves as the hypervisor boot drive.\n\nThis check examines the wear level of the SATA DOM via SMART data.This check was introduced in NCC version 2.2This check is run as part of the NCC Health Checks.\n\nRunning NCC checkRun the NCC check as part of the complete NCC Health Checks.\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr run the sata_dom_wearout_check check individually.\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 3 days, by default.\n\nThis check will generate an alert after 1 failure.\n\nSample output\n\nFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nThe check result reports a FAIL level status if the wear attributes have exceeded a recommended threshold where the condition should be reviewed with your support representative. This result does not mean the SATA DOM has failed. 
The node will continue to operate normally, but the SATA DOM should usually be planned for replacement.\n\nFor Status: INFO\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou may observe a message indicating the SATA DOM model on that host is not currently supported by this check. Future versions of NCC will introduce further SATA DOM support.\n\nOutput messaging\n\nCheck ID1031DescriptionChecks the wearout of SATA DOM via SMART data.Causes of failureSATA DOM has been in use for a long time.ResolutionsReplace SATA DOM as soon as possible. Refer to the Nutanix documentation for instructions.ImpactCluster performance may be significantly degraded or the cluster may become unusable. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert IDA1180Alert Smart TitleSATA DOM on ip_address has worn out.Alert TitleSATA DOM has worn out.Alert MessageSATA DOM on host ip_address has PE cycles above 4500 or PE cycles above 3000 and daily PE cycles above 15.\n\nCheck ID*1035 (Check retired in NCC-3.10.0 and above.)DescriptionChecks the wearout of SATADOM-SL 3IE3 via SMART data.Causes of failureSATADOM-SL 3IE3 Device Life too short (<5).ResolutionsReplace SATADOM-SL 3IE3 as soon as possible. Refer to the Nutanix documentation for instructions.ImpactCluster performance may be significantly degraded or the cluster may become unusable. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert IDA1185Alert Smart TitleSATADOM-SL 3IE3 on ip_address has high wear.Alert TitleSATADOM-SL 3IE3 has high wear.Alert MessageSATADOM-SL 3IE3 on host ip_address has device life smaller than 5%\n\nCheck ID106037DescriptionChecks the wearout of SATADOM-ML 3SE via SMART data.Causes of failureSATADOM-ML 3SE Remaining Device Life too short.ResolutionsReplace SATADOM-ML 3SE as soon as possible. Refer to the Nutanix documentation for instructions.ImpactCluster performance may be significantly degraded or the cluster may become unusable. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert IDA106037Alert Smart TitleSATADOM-ML 3SE on ip_address has high wear.Alert TitleSATADOM-ML 3SE has high wear.Alert MessageSATADOM-ML 3SE on host ip_address has the following problem: alert_msg\n\nNote: This hardware related check executes on the below hardware \nNutanix NXDell XCDell Poweredge\n. Contact your support representative if you see a FAIL status indicating a high wear level on the SATADOM. Media Wearout level starts from 100 and decreases as it wears out. The worst score is 1.\n\nAlso, note that there exists the possibility of a false positive alert for wear level on model 3IE3 SATADOMs. It is recommended to upgrade to firmware version S670330N to fix this false positive. The upgrade can be performed via LCM (Life Cycle Manager). Ensure LCM is upgraded to version 1.4.1810 or higher before updating the SATADOM. Refer to the Portal documentation for more details on upgrading via LCM.\n\n\n \n. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about AOS upgrade - Failed to run the update SVM vmx script - CVM boot ISO not Mounted. ### Assistant: AOS upgrade failed with error Failed to run the update SVM vmx script because CVM boot ISO is not mounted.. 
AOS upgrade failed with error Failed to run the update SVM vmx script because CVM boot ISO is not mounted.Symptoms\n\nThe following errors are observed in the log file /home/nutanix/data/logs/finish.out on the CVM being upgraded:\n\n\t2015-05-25 21:37:04 INFO genesis_utils.py:655 Running auto-backup script on esx host\n\tIdentify the CVM being upgraded:\n\t2015-05-25 21:37:04 INFO genesis_utils.py:655 Running auto-backup script on esx host\n\tNCC check cvm_startup_dependency_check fails with the following error:\n\t2015-05-25 21:37:04 INFO genesis_utils.py:655 Running auto-backup script on esx host\n\t\n​. The script cannot fetch the attribute \"fileName\" because the CVM boot image ISO is detached from CVM on the ESXi host. You have the following options to verify:\n\nLogin to vSphere client > Hosts and Clusters > Select the cluster > Select CVM being upgraded > Edit settings.\tLogin to the respective ESXi host over SSH and run the following command.\n\t[root@ESXi-node-1:~] cat /vmfs/volumes/NTNX*/ServiceVM_Centos/ServiceVM_Centos.vmx | grep ide\n\t\tExample output:\n\t[root@ESXi-node-1:~] cat /vmfs/volumes/NTNX*/ServiceVM_Centos/ServiceVM_Centos.vmx | grep ide\n\tManually connect ServiceVM_Centos.iso from vSphere client (compare the VM settings with another CVM in the cluster):\tThe upgrade will continue as soon as the ISO file is mounted to the CVM.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Physical-to-Virtual (P2V) migration to AHV. ### Assistant: Nutanix does not offer tools to perform Physical-to-Virtual (P2V) migration to AHV.. Nutanix does not offer tools to perform Physical-to-Virtual (P2V) migration to AHV.. Please use partner solutions to perform Physical-to-Virtual (P2V) migrations. A list of Nutanix Ready Solutions can be found here.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about ESXi - How to recreate Standard Switch configuration via ESXi command line. ### Assistant: This article describes how to recreate the vSwitch0 and vSwitchNutanix configuration from the ESXi command line.. There have been a few instances when migrating to and from Distributed vSwitch can cause the Nutanix vSwitch to be deleted or have the wrong configuration. This will cause the Genesis service on the cluster to crash and not start.Follow the steps mentioned in this article to create a standard Nutanix vSwitch from the ESXi command line. . 
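Before recreating anything, it can be useful to capture the current virtual switch state on the affected host so you know exactly what is missing; a minimal sketch using standard ESXi commands (output differs per host):\n\n~# esxcli network vswitch standard list\n\n~# esxcli network vswitch standard portgroup list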
Recreating the ESXi Nutanix Switch from scratchPerform the following steps to recreate the ESXi Nutanix Switch from scratch:\n\nCreate standard vSwitches vSwitch0 and vSwitchNutanix.\n\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\tCreate Port Groups Management Network and VM Network on vSwitch0.\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\tChange the VLAN ID on a portgroup.\n\t~# esxcfg-vswitch -p \"portgroup_name\" -v VLAN_ID virtual_switch_name\n\tCreate Port Groups vmk-svm-iscsi-pg and svm-iscsi-pg on vSwitchNutanix.\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\tCreate VMkernel Port vmk0 for vSwitch0 and assign to VMkernel Port Group Management Network.\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\tCreate VMkernel Port vmk1 for vSwitchNutanix and assign to VMkernel Port Group vmk-svm-iscsi-pg.\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\tAssign default gateway to ESXi.\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\tCheck that the vmknics are created.\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\tCheck to which vmnics the two 10GbE ethernet ports are assigned.\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\tAssign the two 10GbE uplinks to vSwitch0 (In this example, vmnic0 and vmnic1).\n\t~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\t\n\nAn example of the correct configuration of vSwitchNutanix:\n\n~# esxcli network vswitch standard add --vswitch-name=vSwitch0\n\nNote that the changed port group naming with Foundation 5.0 and newer:The old naming for the portgroups attached to the internal switch can be seen below:\n\n~# esxcli network vswitch standard add --vswitch-name=vSwitch0\nWith Foundation 5.0 and newer, a new naming has been introduced (ref KB-11895):\n\n~# esxcli network vswitch standard add --vswitch-name=vSwitch0\nEnsure that the following information matches:\n\nNo uplink interfacesThree used ports Port groups and # of used portsName of the vSwitch\n\nChecking the CVM network configuration in vCenter\n\nVerify that eth0 (Network Adaptor 1) is part of the VM Network Port Group.Verify that eth1 (Network Adaptor 2) is part of svm-iscsi-pg.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: fs_inconsistency_check. ### Assistant: The NCC health check fs_inconsistency_check identifies CVM (Controller VM) filesystem inconsistencies by checking for EXT4-fs error/warning messages in dmesg and scanning tune2fs output for all the disks.. The NCC health check fs_inconsistency_check verifies whether any CVM (Controller VM) in the cluster is experiencing filesystem inconsistencies by checking for EXT4-fs error/warning messages in dmesg and scanning tune2fs output for all mounted disks. From NCC 4.4.0 onwards, if the failed disk causing inconsistencies is unmounted from the cluster, the check will skip the execution on the removed disk and pass.\n\nThis plugin was introduced in NCC version 3.9.3.\n\nThe check runs on CVMs on all platforms and hypervisors and is scheduled to run once every 24 hours on the prior 24 hours of data in the CVM dmesg ring buffer.\n\nStarting with NCC-4.1.0, this check will generate the alert A3038 after 1 concurrent failure across scheduled intervals.\n\nIn NCC 4.5.0, the dependency on dmesg logs is removed. 
Instead, the plugin checks a system counter that gives the number of errors in real time.\n\nRunning the NCC check\n\nThe check can be run as part of a complete NCC by running:\n\nnutanix@CVM$ ncc health_checks run_all\n\nIt can also be run individually as follows:\n\nnutanix@CVM$ ncc health_checks run_all\n\nYou can also run the check from the Prism web console Health page. Select Actions > Run Checks > All Checks > Run.\n\nSample Output\n\nFor Status: PASS\n\nnutanix@CVM$ ncc health_checks run_all\n\nIf the check results in a PASS, there are no filesystem inconsistencies detected. No action needs to be taken.\n\nFor Status: WARN\n\nnutanix@CVM$ ncc health_checks run_all\n\nIf one or more CVMs are logging filesystem inconsistencies, the check will result in a WARN.\n\nNote: From NCC 4.5.0, the severity is changed to FAIL. The end-user will experience \"Critical\" alert on the UI and \"FAIL\" status on the CLI when the check fails.\n\nOutput messaging\n\nCheck ID3038DescriptionCaptures EXT4-fs error messagesCauses of failureFile system inconsistencies are present on the node.ResolutionsLook for any problems in the file system. Review KB 8514.ImpactThe inability of the CVM to boot or for the upgrade pre-checks to run.Alert IDA3038Alert TitleFile system inconsistencies are detected.Alert Smart TitleFile system inconsistencies are detected on CVM: cvm_IPAlert MessageEXT4 file system errors are detected on CVM: cvm_ip: alert_msg. Investigating a WARN/CRITICAL Alert\n\nIf you see an alert for EXT filesystem errors on one or more CVMs, engage Nutanix Support.\n\nTo speed up the resolution time please collect and attach the following information to the support case:\n\nA complete NCC report:\n\tnutanix@CVM$ ncc health_checks run_all\n\tA log bundle generated from the cluster. This can be collected through the Prism Web Console's health page. Select Actions > Collect Logs. Logs can also be collected through the command line using logbay (KB 6691 - NCC - Logbay Quickstart Guide).\n\tnutanix@CVM$ ncc health_checks run_all\n\tThe output of the following commands, collected from the CVM(s) that have the EXT4 fs-error:\n\tnutanix@CVM$ ncc health_checks run_all\n\tThe following command can be used to collect the needed info from all of the CVMs and should find the disk SN outputs/mount point:\n\tnutanix@CVM$ ncc health_checks run_all\n\t\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A200614 - FNSVersionMisMatch. ### Assistant: Investigating Flow Network Security PE Minimum Version (FNSVersionMisMatch) alerts on a Nutanix cluster. 
This Nutanix article provides the information required for troubleshooting the alert A200614 - FNSVersionMisMatch - Flow Network Security PE Minimum Version for your Nutanix cluster.Alert OverviewThe FNSVersionMisMatch - Flow Network Security PE Minimum Version alert is generated on Prism Central (PC) when a Flow Network Security version on an AHV PE(s) registered to the PC does not meet the minimum version requirements for the PC running Flow Network Security version with microsegmentation enabled.Sample Alert\nBlock Serial Number: 18SMXXXXXXXX\nPotential ImpactCertain features of the Flow Network Security version available to configure on the PC which is raising the alert may not be implemented or work as expected on the affected PE cluster(s).Output messaging\n\nCheckFNSVersionMisMatch | Flow Network Security PE Minimum VersionDescriptionValidates Flow Network Security version on attached PEs meets minimum requirements for a PC with Flow Network Security microsegmentation enabledCause of failureNot all PEs registered to this PC meet the minimum Flow Network Security PE version required for the enabled feature(s).ResolutionsUse LCM to run an inventory of Flow Network Security PE on each AHV cluster attached to this microseg-enabled PC and upgrade those which do not meet the minimum requirements. Refer to KB14262 for further informationImpactThe affected PE cluster(s) may not support Flow Network Security policy features that are in useAlert IDA200614Alert TitleFlow Network Security version too low on registered PE cluster /\t\t\tFlow Network Security PE version too low on cluster XXXXXXAlert Message Flow Network Security PE version on registered cluster XXXXXXX does not meet the minimum version\t\t\t required to support microseg feature(s) in use on this PC. XXXXXXX. Check the current FNS version running on the reported affected PE cluster:\nvia PE Prism UI LCM Inventory\n\n\n\nvia CVM CLI, view value present in file /home/nutanix/flow/flow_version.txt. The following example shows how to extract this information for all CVMs in the same PE cluster: \n\nnutanix@CVM:~$ allssh cat /home/nutanix/flow/flow_version.txt\n\n \n\nCompare the discovered FNS for PE version with that of the PC raising the alert.\nTo resolve, upgrade Flow Network Security PE via LCM on the affected PE cluster(s).\n\nRun LCM Inventory on the PE Prism UI to ensure the latest available versions are listed as upgrade options.\n\nFor example, on the 'Updates' page LCM should show the current version and any newer versions available for upgrade:\n\n\n\nSelect the recommended version for FNS on PE to match what is running on the PC which raised this alertReview the Release Notes and User Guide available on the Support PortalProceed with the upgrade. Refer to the LCM User Guide available on the Support Portal for details.Validate the correct version is running post-upgrade using the earlier mentioned steps.\nIf you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: ergon_checks. ### Assistant: NCC 4.0.1. The NCC health check ergon_checks verifies a number of pending tasks and fails if a high number of tasks detected in the cluster.. 
The NCC health check ergon_checks verifies a number of pending tasks and fails if a high number of tasks detected in the cluster.Running NCC Check\nYou can run this check as a part of the complete NCC health checks \n\tnutanix@cvm:~$ ncc health_checks run_all\n\tOr you can run this check individually \n\tnutanix@cvm:~$ ncc health_checks run_all\n\t\nSample Output\n\nCheck Status: PASS\n\nnutanix@cvm:~$ ncc health_checks run_all\nCheck Status: FAIL\n\nnutanix@cvm:~$ ncc health_checks run_all\nCheck fails if more than 50000 tasks are found on the Prism Element cluster or if more than 400000 tasks are found on the Prism Central cluster.This check runs on all hypervisors.This check is not scheduled.This check runs on Prism Element and Prism Central.This check does not generate an alert.Output messaging\n\nCheck ID111082DescriptionHigh number of tasks in the cluster.Causes of failureHigh number of tasks in the cluster.ResolutionsContact Nutanix support for help.ImpactOperation on entities in the cluster might not progress..  Please collect the log bundle using Logbay and engage Nutanix Support at https://portal.nutanix.com/. For more information on how to use Logbay, see KB 6691. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: 10gbe_check. ### Assistant: The NCC health check 10gbe_check determines if there are any nodes in the cluster using 10GbE uplinks.. The NCC health check 10gbe_check determines if there are any nodes in the cluster using 10GbE uplinks.\n\nThe check reports a PASS status if one 10GbE NIC is found and CVM (Controller VM) is pinned to the 10GbE uplink. For best cluster operation, Nutanix CVMs communicate by using 10GbE network uplinks connected to a 10GbE wire-speed top of rack switch.\n\nNote:For Dell, Nutanix supports only 10GbE NICs to be used for cluster traffic and imaging except XC640-4i model for 14G platforms. The option 1GbE NICs are recommended to be used for management purposes only and not for imaging or cluster traffic except for XC640-4i. \n\nThe check results in a WARN status on AHV clusters when there are 1GbE and 10GbE NICs present in the bond.\n\nThe check results in a FAIL status if there are no 10GbE NICs detected on the hypervisor or nodes.\n\nRunning the NCC CheckRun the following check as part of the complete NCC Health Checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr run the following check separately:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism Web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nSample outputFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: INFO\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: WARN\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: ERR\n\nnutanix@cvm$ ncc health_checks run_all\n\n \n\nNote: Check ID 103073 (Check that CVM uplinked to an active 10Gbps link) has been retired from NCC 4.5.0. 
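To confirm which NCC version is currently installed on the cluster (and therefore whether the retired check still applies to your environment), you can run, for example:\n\nnutanix@cvm$ ncc --version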
Make sure you are running the latest version of NCC before running the NCC health checks.Output messaging\n\nCheck ID6008DescriptionChecks whether CVM is uplinked to 10 GbE NIC.Causes of failureCVM not uplinked to 10 GbE NIC.\t\t\t1 GbE NIC is part of the bond.\t\t\tIf the hypervisor is ESXi, it may be in lockdown.ResolutionsCheck network configuration.\t\t\tIf the hypervisor is ESXi, check for lockdown and unlock it.ImpactThe Controller VM is not configured to use the 10 GbE NIC or is configured to share the load with a slower NIC.Alert IDA1058Alert TitleCVM NIC Speed LowAlert MessageController VM service_vm_external_ip is running on the network interface(s) nic_list, with transmission speed less than 10 Gbps. This will degrade system performance.ScheduleThis check is scheduled to run every 5 minutes, by default.Number of failures to alertThis check will generate an alert after 1 failure.\nNote : This hardware related check executes on the below hardware \nAll Hardware except Inspur and SYS. Following are the steps to verify the uplinks manually at the hypervisor layer:\n\nESXiThe following procedure queries the uplink type in use on a Standard vSwitch or a Distributed vSwitch, depending on which vSwitch is handling inter-CVM communication.\n\n\n\tTo manually verify the network uplink type currently in use, run this command from each ESX host:\n\n\troot@esxi# esxcfg-vswitch -l\n\t\n\tVerify that the uplinks in use for inter-CVM communication are 10GbE speed NICs.\n\n\troot@esxi# esxcfg-vswitch -l\n\t\n\tIf 10GbE ports are in use and the check is still failing in ESXi, then confirm the CVM is assigned to the correct portgroup on the vSwitch and that the link is connected. This is done by going to vCenter, right-clicking on the CVM --> Edit Settings:\n\n\t\n\n\tIf the Network label is blank or if the Connected checkbox is unchecked, the NCC check fails.\n\n\tTo resolve, place the CVM in the correct port group and check the Connected check box.\n\t\n\nFor the INFO message (CVM is not uplinked to active 10Gbps NICs, but has a standby 10Gbps link), use vCenter or ESXi Web console to configure the vSwitch to have at least 1 active NIC. This should be done for all hosts that are mentioned in the alert.\n\n\n\tLog in to vCenter and click on the host that is showing the INFO alert. Go to the Configure tab. Select Virtual switches under Networking and select the vSwitch that has the adapters to which CVM connects (usually vSwitch0). Click the Edit button for that vSwitch\n\n\t\n\t\n\tGo to Teaming and failover and see the failover order. Everything should be under the standby adapter.\n\n\t\n\t\n\tUse the UP arrow key on the screen to move 1 or more vmnic to active adapters.\n\n\t\n\tClick OK.\n\nHyper-VVerify network speeds on each of the Hyper-V hosts from the CVM.\n\nroot@esxi# esxcfg-vswitch -l\n\nNote: The Transmit and Receive Links speeds are from an LACP (twin NIC) configuration, resulting in the 20 GB aggregate values.\n\nAHV\n\n\n\tConnect to CVM running on affected AHV host via SSH and run the following commands:\n\n\troot@esxi# esxcfg-vswitch -l\n\t\n\tIn the output of commands mentioned above, make sure that br0 only contains NICs with speed 10000 or higher.\n\n\tExample:\n\n\troot@esxi# esxcfg-vswitch -l\n\t\n\tIf 1Gb NICs are found in br0, perform the following steps to remove them:\n\n\tNote: The steps below are for AOS =< 5.19. 
To perform the process of removing 1G links for AOS 5.19 and newer, follow the Prism Web Console Guide for your associated AOS version under the section Creating or Updating a Virtual Switch (Example: https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_19:wc-network-vs-create-upgrade-wc-t.html).\n\n\tFollow Verifying the Cluster Health chapter in the AHV Administration Guide to make sure that cluster can tolerate node being down. Do not proceed if the cluster cannot tolerate a failure of at least 1 node.Put the node and CVM in the maintenance mode:\n\t\tCheck the availability of changing maintenance mode of target hosts:\n\t\t\troot@esxi# esxcfg-vswitch -l\n\t\t\tPut the host into maintenance mode - this will migrate running VMs to other hosts. If the cluster is a single-node cluster, UVMs should be shut down before executing following command:\n\t\t\troot@esxi# esxcfg-vswitch -l\n\t\t\tEnable maintenance mode for the CVM on the target host. This step is for preventing CVM services from being impacted by possible connectivity issues. You may skip this step if the CVM services are not running, or the cluster has stopped. If the cluster is a single-node cluster, you should skip this step because this CVM should not be put into maintenance-mode:\n\t\t\troot@esxi# esxcfg-vswitch -l\n\n\t\t\tNote: You can find <host ID> in the output of the \"ncli host ls\" command:\n\n\t\t\troot@esxi# esxcfg-vswitch -l\n\t\t\t\n\t\tUpdate br0 to only have 10G NICs:\n\t\troot@esxi# esxcfg-vswitch -l\n\n\t\t<bond mode> is one of the following modes: active-backup, balance-slb or balance-tcp. The current bond_mode can be found in the output of the manage_ovs show_uplinks command. If --bond_mode parameter is not specified, then the bond mode will be reset to active-passive, which may affect network connectivity.\n\n\t\tIf LACP is used then a few more additional parameters should be added:\n\n\t\troot@esxi# esxcfg-vswitch -l\n\n\t\tOr specify NIC names manually:\n\n\t\troot@esxi# esxcfg-vswitch -l\n\n\t\tNote: On AOS 5.15 or later, due to a reliance on services that may be unavailable on the affected CVM during maintenance or network outage, manage_ovs may fail with output:\n\n\t\troot@esxi# esxcfg-vswitch -l\n\n\t\tTo apply the required uplink changes, run the below command (with --host option targeting the host intended for an uplink change) from a different CVM which is in \"UP\" state and not in maintenance mode:\n\n\t\troot@esxi# esxcfg-vswitch -l\n\t\t\n\t\tOnce the configuration is completed, make sure both hosts and CVM are accessible via network. Also, make sure that all NICs in the bond are operational by shutting down links one by one and verifying connectivity.\n\t\t\n\t\tIf all tests are successfully completed, remove CVM and node from maintenance mode:\n\n\t\tFrom one of the other CVMs, run the following command to exit the CVM under maintenance mode:\n\t\t\troot@esxi# esxcfg-vswitch -l\n\t\t\tExit host into maintenance mode - this will restore VM locality:\n\t\t\troot@esxi# esxcfg-vswitch -l\n\t\t\t\n\t\t\n\t\n\nIn case the steps or solution described in this article do not resolve the issue, consider engaging Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Root Cause Analysis of CVM Reboots. 
### Assistant: This article describes how to troubleshoot and perform root cause analysis when a CVM (Controller VM) suddenly reboots.. This article describes how to troubleshoot and perform root cause analysis when a CVM (Controller VM) suddenly reboots.\n\nLogs to look for inside the CVM:\n\ndmesg\n\nLogs to look for on AHV host: \n\ndmesg\n\nLogs to look for inside the ESXi:\n\ndmesg\n\nTo review memory/cpu usage/disk latency of the CVM at the time of the reboot, the sysstats under /home/nutanix/data/logs/sysstats logs can be reviewed. Note that the logs are in UTC timestamp.\n\ndmesg. Examples\n\n\n\tCVM command last reboot:\n\n\tnutanix@cvm$ last reboot\n\t\n\tLogs on CVM /var/log/messages and kern.log:\n\n\tnutanix@cvm$ last reboot\n\t\n\tESXi logs /vmfs/volumes/xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx/ServiceVM*/vmware.log:\n\n\tnutanix@cvm$ last reboot\n\n\t\"Restart Guest OS\" on CVM initiated from vCentre results in the following signature in the cvm's vmware.log\t(Note that this entry does not occur in the vmware.log if the CVM has been gracefully restarted from within the Nutanix Cluster via AOS Upgrade or cvm_shutdown command)\n\n\tnutanix@cvm$ last reboot\n\t\"Shutdown Guest OS\" on CVM initiated from vCentre results in the following signature in the cvm's vmware.log\t(Note that this entry does not occur in the vmware.log if the CVM has been gracefully shutdown from within the Nutanix Cluster via AOS Upgrade or cvm_shutdown command)\n\tnutanix@cvm$ last reboot\n\t\tAnother example of vmware.log (based on VMware bug nr. 676321):\n\tnutanix@cvm$ last reboot\n\n\tAnother vmware.log (EPT misconfiguration - VMware KB 1036775):\n\n\tnutanix@cvm$ last reboot\n\t\n\tESXi logs /vmfs/volumes/xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx/ServiceVM*/ServiceVM.out.0 shows jbd2/fio driver issue in this example:\n\n\tnutanix@cvm$ last reboot\n\t\n\tFor any recent hard drive failure, check hades.out log.\n\n\tIf the SSD is the metadata drive, AOS will force a CVM to reboot. Also, if AOS has trouble removing an HDD and a forced removal is triggered by hades, a CVM will reboot.\n\n\tThe output of ServiceVM.out.0 (Bug 735768):\n\n\tnutanix@cvm$ last reboot\n\n\tESXi vmksummary to see if the ESXi host rebooted:\n\n\tnutanix@cvm$ last reboot\n\n\tAHV:\n\n\tnutanix@cvm$ last reboot\n\n\tCVM:\n\n\tnutanix@cvm$ last reboot\n\n\tScroll a few lines above to have more information:\n\n\tnutanix@cvm$ last reboot\n\t\n\nFor newer versions of CVM, you may have to grep for \"rsyslogd.*start\" rather than \"kmsg started\":\n\nnutanix@cvm$ last reboot. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: remote_site_latency_check. ### Assistant: This check was introduced in NCC 3.9.0 and validates the latency between primary and target clusters.. The NCC health check remote_site_latency_check checks if the latency to the target cluster is lesser than the maximum value (5ms) allowed for AHV Synchronous Replication.If the latency is higher than 5ms, synchronous replication may fail. This check provides a summary of the latency to the target cluster(s).Running the NCC CheckYou can run this check as part of the complete NCC health checks\nncc health_checks run_all\n  Or you can run this check individually\n\nncc health_checks run_all\n  This check is scheduled to run every 6 hours and only runs on Prism Central. An alert will get generated after failure. 
\n\nSample outputFor Status: PASS\n\nncc health_checks run_all\n  For Status: FAIL\n\nncc health_checks run_all\n \n\nncc health_checks run_all\n   \n\nOutput messaging\n\nCheck ID110022DescriptionCheck if the latency to target cluster is lesser than the maximum value allowed for AHV Sync RepCause of failureTarget cluster is unreachable or the connection to target cluster is not goodResolutionEnsure that the target cluster is reachable and latency is below 5ms or choose another target clusterImpactSynchronous Replication will be affectedAlert IDA110022Alert TitleChecking if latency to target cluster is lesser than the maximum valueAlert MessageLatency to [remote site] is greater than 5ms. If the check reports a FAIL status in regards to latency being too high, verify the latency between the sites. This can be done by running the ping command bi-directionally: \nFrom CVM on source cluster to CVM on remote cluster\n\nnutanix@cvm:~$ ping x.x.x.22\n\nFrom CVM on remote cluster to CVM on source cluster\n\nnutanix@cvm:~$ ping x.x.x.22\n\nYou can also run the tracepath command bi-directionally to try to isolate where in the network path the latency increases or drops:From CVM on source cluster to CVM on remote cluster\n\nnutanix@cvm:~$ ping x.x.x.22\n\nFrom CVM on remote cluster to CVM on source cluster\n\nnutanix@cvm:~$ ping x.x.x.22\n\nIf latency is showing above 5ms when running the commands, ensure your network environment is stable (physical switches, cabling, and device configurations) and consult with your networking team to resolve the latency issues. If latency for this particular remote site cannot be reduced, please choose another target cluster.To verify if the latency issue is intermittent or persistent, you can review the /home/data/logs/sysstats/ping_remotes.INFO logs on each cluster. Output within the ping_remotes.INFO log will be similar to the following (the IP in the log will be the remote cluster VIP):\n\n\nnutanix@cvm:~$ ping x.x.x.22\n\nReview the log and see if you can find any patterns for high network latency observed every hour, and so on. If there is a pattern for high network latency, try to isolate what could be causing the latency spikes.  \n\n\nIf the check reports a FAIL status in regards to the latency not being found, this indicates that the connection to the remote site is unreachable. The remote site can become unreachable if the following occurs:\n1. The remote cluster becomes unstable2. The remote PC VM becomes unstable3. The Availability Zone (AZ) gets disconnected\nTo verify if the issue is with the PC VM or the remote cluster (target cluster), you can review the ping_remotes.INFO log on the source cluster. If the issue is with the remote cluster, messages similar to the following will be displayed within the log:\n\nnutanix@cvm:~$ ping x.x.x.22\n\nIf unreachable messages are seen in the log, please check on the status of the remote cluster and also verify that there are no connectivity issues with the remote cluster Virtual IP Address (VIP). The VIP can be viewed by logging into Prism Element on the remote cluster and clicking on the cluster name - this will display the 'Cluster Details' - verify that the IP listed for the 'Cluster Virtual IP Address' matches the IP address shown in the ping_remotes.INFO log of the source cluster. If there is a discrepancy with the IP addresses, consult with your network team to verify any recent networking changes. 
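For example (a sketch; substitute your own remote cluster VIP for the x.x.x.22 address used in the examples above), recent unreachable samples recorded by the source cluster can be pulled out of the sysstats log under /home/nutanix/data/logs/sysstats/, and the VIP actually configured on the remote cluster can be cross-checked from any of its CVMs:

nutanix@cvm:~$ grep -i unreachable /home/nutanix/data/logs/sysstats/ping_remotes.INFO | tail -n 20   # run on the source cluster
nutanix@cvm:~$ ncli cluster info | grep -i 'external ip'   # run on the remote cluster; field label may vary slightly by AOS version

If the address reported by the remote cluster does not match the IP seen in ping_remotes.INFO, follow the VIP and Protection Policy guidance in the next paragraph.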
If the remote cluster VIP has recently changed, the Protection Policy for this remote cluster will need to be recreated. Steps for creating a Protection Policy can be found in the \"Creating a Protection Policy\" section of the Leap Administration Guide: https://portal.nutanix.com/#/page/docs/details?targetId=Xi-Leap-Admin-Guide:dra-protection-policy-create-t.html If the log is not showing unreachable messages, then the issue is either with connectivity to the PC VM or the AZ has become disconnected. You can check the connectivity to the PC VM by running the ping command. If the ping command is unsuccessful, verify that the PC VM is powered on and check to see if there have been any recent networking changes. If the ping command is successful, check the status of the Availability Zone. Log into both the source and remote PC to make sure the AZ is configured and showing with a status of reachable. You can access this information by typing \"Availability Zones\" into the search bar or by to the menu and selecting Administration > Availability Zones. Steps for creating an AZ can be found in the \"Pairing Availability Zones\" section of the Leap Administration Guide: https://portal.nutanix.com/#/page/docs/details?targetId=Xi-Leap-Admin-Guide:dra-az-pair-t.html \n\nIn case the mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/. Additionally, please gather the following command output from the Prism Central VM and attach it to the support case:\n\nnutanix@cvm:~$ ping x.x.x.22. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: nic_model_hypervisor_check. ### Assistant: The NCC health check nic_model_hypervisor_check checks if the specific NIC model is present on a Hyper-V 2019 cluster and not compatible with Hyper-V 2022.. The NCC health check nic_model_hypervisor_check checks if any of the below NIC cards are present on a Hyper-V 2019 cluster and is not compatible with Hyper-V 2022:\n\nIntel 82599 chipset NIC modelIntel X540 chipset NIC modelIntel I350 chipset NIC model\n\nCustomers running Hyper-V 2019 with the above NIC models cannot upgrade to Hyper-V 2022 and above as Hyper-V 2022 is not compatible with these NIC models.\n\nRunning the NCC health check:\n\nYou can run the check as part of the complete NCC check from the command line by running:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr individually as:\n\nnutanix@cvm$ ncc health_checks run_all\n\nThis NCC check is not scheduled to run automatically after a certain interval. This check would be available as a part of the Hyper-V 1-click upgrade pre-check.\n\nSample Output:\n\nFor status: PASS\n\nThe check returns a PASS when the specific NIC cards (82599/X540/i350) are not present on a Hyper-V 2019 cluster.\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: FAIL\n\nThe check returns a FAIL when a specific NIC card (82599/X540/i350) is present on the cluster that runs Hyper-V 2019. The output message may vary in NCC versions.\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging:\n\nCheck ID106040DescriptionCheck that there should not be any EOL card connectedCauses of failureNIC card is found which is EOL from Hyper-V 2022.ResolutionsReview KB 13079.ImpactNode Failure on Hyperv2022\nNote : This hardware related check executes on the below hardware \nNutanix NXDell XCHPE. 
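If it is unclear which NIC chipsets are present on the Hyper-V hosts, one way to list them (a sketch, assuming the standard winsh wrapper on the CVM of a Hyper-V node; adapter naming is environment-specific) is:

nutanix@cvm$ winsh Get-NetAdapter   # hypothetical one-liner; can also be run interactively after entering winsh

The InterfaceDescription column typically shows the controller family (for example, 82599, X540 or I350) that this check evaluates.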
Hyper-V 2022 is not compatible with the NIC models shown in the NCC check failure.\n\nThe unsupported NIC model needs to be replaced prior to the upgrade, or remain disconnected before and after the upgrade to Hyper-V 2022. Hyper-V 2019 supports the NIC card.\n\nIn case of any queries, kindly contact Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Enable Microservices Platform precheck fails on Prism Central after LCM upgrade to 2.5 or later. ### Assistant: Enable Microservices Platform on Prism Central precheck fails after LCM upgrade to 2.5.. After the LCM upgrade to version 2.5 or later in Prism Central, it is impossible to enable Microservices Infrastructure due to network reachability precheck failure, even if the required URL is reachable.Identification:\n\nEnable button on the Prism Central Management page in Prism Central Settings is greyed out and not clickable due to precheck failure.\t\tClick here for more details\" link shows the following message:\tList the MSP controller and also check the health of the controller. Both will have errors:\n\tnutanix@PCVM:~$ mspctl cluster list\n\tAlso, ncc health_checks system_checks microservice_infrastructure_status_check should come up with the below WARN\n\n\tnutanix@PCVM:~$ mspctl cluster list\n\t/home/nutanix/data/logs/genesis.out logs on PrismCentral show CONNECTION_ERROR due to \"[Errno 20] Not a directory\" for HTTPS URLs in test_connectivity_to_external_portals:\n\tnutanix@PCVM:~$ mspctl cluster list\n\tBelow URLs are confirmed to be reachable and responding with valid HTTP codes if checked via curl from Prism Central. Any valid HTTP response code, including 4xx confirms that the URL is reachable in this case:\n\tnutanix@PCVM:~$ mspctl cluster list\n\t. The issue is fixed in pc.2023.1.0.1. Upgrade your Prism Central to pc.2023.1.0.1 or later.Workaround for PC earlier than pc.2023.1.0.1As a workaround, disabling HTTPS in the LCM Settings page should allow CMSP to be enabled in Prism Central Management.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: ahv_version_check. ### Assistant: The NCC health check ahv_version_check checks if the AHV version bundled with the currently installed AOS is not yet installed on the cluster.. The NCC health check ahv_version_check checks if the AHV version bundled with the currently installed AOS is not yet installed on the cluster.\n\nAfter upgrading AOS, a bundled compatible AHV version is present and available for immediate upgrade in Prism. It is recommended that this bundled version of AHV, at a minimum, is installed at the earliest convenience to avoid potential compatibility issues in the future. This NCC check verifies if there were AOS upgrades performed on a cluster without a corresponding AHV upgrade, and raises awareness of the fact. \n\nRunning NCC Check\n\nYou can run this check as a part of the complete NCC Health Checks.\n\tnutanix@cvm$ ncc health_checks run_all\n\tOr you can run this check separately.\n\tnutanix@cvm$ ncc health_checks run_all\n\t\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. 
Select All checks and click Run.\n\nThis check is scheduled to run every day, by default.\n\nThis check will generate an alert after 1 failure.\n\nSample Output\n\nCheck Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nCheck Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nCheck Status: ERROR\n\nnutanix@cvm$ ncc health_checks run_all\n\nCheck Status: WARNING \n\nnutanix@cvm$ ncc health_checks run_all\nNOTE: If you see this WARN output or Alert in an NC2 on AWS cluster, it is false-positive and you can safely disregard it. However, it is advisable to upgrade AOS to 6.0.2.5 or above to resolve this issue.  \n\nOutput messaging\n\nCheck ID111061DescriptionCheck if a compatible AHV version is installed on the host.Causes of failureThe installed AHV version is not compatible with the current AOS version.ResolutionsUpgrade the version of AHV on the host to a version that is compatible with the current AOS version.ImpactAHV is not running with the full set of features.Alert TitleDetected incompatible AHV versionAlert MessageUpgrade the version of AHV on the host to desired_veror_later.a host.. If this check fails with the error below, the currently installed AHV version on the cluster is not the expected version with the AOS:\n\nAHV currently is installed with <installed version>, it should be installed with <expected version> or a compatible version\n \n\nChecking Installed AOS Version\n\nPrism: Click on your user name in the top right corner and select the \"About Nutanix\" menu option.Command-line: \n\tAHV currently is installed with <installed version>, it should be installed with <expected version> or a compatible version\n\t\n\nUpgrade AHV\n\nPerform the following steps to check the expected AHV version and upgrade AHV.\n\nCheck the \"Software Installed in This Release Family\" section of release notes for the currently installed AOS version on Nutanix Portal to find a compatible AHV version. Upgrade AHV by following the steps described in the \"AHV upgrade\" section of \"Prism Web Console Guide\".After upgrading the AHV version, run the check again, and the result should be a PASS output.\n\nIf this check fails with the error below it means that NCC could not retrieve information about the AHV version that is compatible with the current AOS release:\n\nAHV currently is installed with <installed version>, it should be installed with <expected version> or a compatible version\nTo confirm this issue run the following command to check if any data is present in /home/nutanix/data/installer/ folder:\n\nAHV currently is installed with <installed version>, it should be installed with <expected version> or a compatible version\n\nIf the folder is empty consider upgrading AOS to the latest release. During upgrade /home/nutanix/data/installer/ folder will be automatically populated with correct data.\n\nKnown issueAfter an upgrade to AOS 5.19 or 5.19.0.5, you may notice an alert suggesting to upgrade AHV from 20190916.360 to 20201105.12 or 20201105.1021, while LCM is not offering any new AHV releases.Upgrades from AHV 20190916.360 to 20201105.12 and 20201105.1021 were blocked because 20190916.360 was released after 20201105.12/1021 and as a result target AHV versions are older than the source version. If an upgrade is allowed then it may actually lead to downgrading the version of some packages.After AOS upgrade to 5.19.1 LCM will start offering to upgrade AHV to version 20201105.1045.For NC2 on AWS cluster, a false-positive ncc WARN will be displayed. 
This is due to AHV version 20201105.30007 was blacklisted as an available option to upgrade. This affects AOS versions 6.0.2, 6.0.2.2 and 6.0.2.4, where said AHV version was bundled with these AOS releases. If the above steps do not resolve your issue, consider engaging Nutanix Support for further assistance.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about How to check ACLI and NCLI command history. ### Assistant: This article describes how to access the ACLI and NCLI command history. ACLI is a command line utility that can be used to create/modify/manage VMs on AHV.\n\nNCLI is a command line utility that can be used to manage almost all cluster operations.\n\nIt may be helpful to determine what commands were run previously to troubleshoot a current issue.\n\nThis article describes how to access the ACLI and NCLI command history.. ACLI History\n\nThe ACLI command history is stored in the file /home/nutanix/.acli_history.\nNotes:\n\nThis file is a hidden file.This file is persistent across reboots of the CVM.This file is available only on CVMs hosted on AHV. It is not available on CVMs running on ESXi or Hyper-V.\n\nTo view the file, use the cat command. For example:\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nNCLI History\n\nThe NCLI command history is stored in the file /home/nutanix/.nutanix_history.\nNotes:\n\nThis file is a hidden file.This file is persistent across reboots of the CVM.\n\nTo view the file, use the cat command. For example:\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nBASH History\n\nIn addition to the files above, all ACLI and NCLI commands that were directly run from bash are also logged to the file /home/nutanix/.bash_history. Again, use the cat command to view its contents:\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nMore useful commands\n\nDisplay last 10 lines of a file:\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nDisplay last 20 lines of a file:\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nDisplay first 20 lines of a file:\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nDisplay lines with ncli in them:\n\nnutanix@cvm$ cat /home/nutanix/.acli_history\n\nDisplay last 10 lines with ncli in them:\n\nnutanix@cvm$ cat /home/nutanix/.acli_history. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: default_password_check, pc_default_password_check and file_server_default_password_check. ### Assistant: The NCC health checks default_password_check, pc_default_password_check and file_server_default_password_check cover changing the default passwords on the Nutanix CVM (Controller VM), hypervisor, Prism Central (PC), File Server VMs and IPMI.. The NCC health checks default_password_check, pc_default_password_check and file_server_default_password_check verify if there are any CVMs (Controller VMs), hosts, IPMIs, Prism Central (PC) instances, or File Server VMs with the default credentials.\n\nTo secure your Nutanix cluster, Nutanix recommends changing the default passwords. 
This includes, but is not limited to:\n\nThe Nutanix Controller VM (CVM)\n\tThe local 'nutanix' user account\n\tThe Installed Hypervisor:\n\tFor AHV, the local 'root', 'admin', and 'nutanix' accounts. This has been introduced in NCC-4.1.0 and above and AOS 5.15.5, 5.19.0.5, 5.20 and above.For ESXi, the local 'root' user accountFor Hyper-V, the local 'administrator' user account\n\tPrism Central\n\tThe 'admin' Prism GUI user accountThe local 'nutanix' user account\n\tThe Out-of-Band Management (IPMI)\n\tThe 'ADMIN' user account\n\tThe File Server VMs (FSVMs)\n\tThe 'nutanix' user account\n\t\n\nNote: The password checks are done on the CVMs, Prism Central, AHV, and ESXi by reading the file \"/etc/shadow\" for hash comparison. Some security monitoring software (e.g. Splunk) may alert that the /etc/shadow is being accessed. You may disable this check if there are security concerns with this file being read.\n\nRunning the NCC checkYou can run this check from any CVM as part of the complete NCC Health Checks:\n\nncc health_checks run_all\n\nOr you can also run the appropriate check separately depending on whether you are executing on a Controller VM (CVM):\n\nncc health_checks run_all\n\nOr on a Prism Central VM (PCVM):\n\nncc health_checks run_all\n\nOr on a File Server VM (FSVM):\n\nncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 7 days.\n\nThis check generates an alert after 1 failure.\n\nSample outputFor status: PASS\n\nncc health_checks run_all\n\nFor status: INFO (default IPMI passwords in use)\n\nncc health_checks run_all\n\nFrom NCC 4.3.0.1 onwards, the check results in FAIL and shows a CRITICAL alert if CVMs/hosts/IPMI is configured with a default password.For status: FAIL\n\nncc health_checks run_all\n\nFrom NCC 4.6.0 onwards, the check messaging has changed to reflect the host type and user accounts where the default password needs to be updated.For status: FAIL\n\nncc health_checks run_all\n\nFrom NCC 4.6.0 onwards, the check messaging also reports a FAIL if File Server VMs (FSVMs) are configured with the default password:\n\nncc health_checks run_all\n\nOutput messaging\n\nCheck ID6213DescriptionCVM is using the default passwordCauses of failureSSH access credentials have not changed from the shipped default.ResolutionsEnsure that all CVMs are using a custom password for improved security.ImpactInsecure configuration.Alert IDA6213Alert TitleCVM using default passwordAlert Smart TitleCVM <CVM_IP> is using default passwordAlert MessagePlease update the default password on CVM for user(s): {users} to harden the security on the clusters. Nutanix highly recommends not using default passwords for clusters.\n\nCheck ID6214DescriptionHost is using the default passwordCauses of failureSSH access credentials have not changed from the shipped default.ResolutionsEnsure that all hosts are using a custom password for improved security.ImpactInsecure configuration - Using default passwords could expose your system to unauthorized access.Alert IDA6214Alert TitleHost using the default passwordAlert Smart TitleHost <HOST_IP> is using default passwordAlert MessagePlease update the default password on {host_type} host for user(s): {users} to harden the security on the clusters. 
Nutanix highly recommends not using default passwords for clusters.\n\nCheck ID15037DescriptionIPMI is using default passwordCauses of failureSSH access credentials have not changed from the shipped default.ResolutionsEnsure that all IPMI devices use a custom password for improved security.ImpactInsecure configuration - Using default passwords could expose your system to unauthorized access.Alert IDA15037Alert TitleIPMI <IPMI_IP> is using default passwordAlert Smart TitleIPMI using the default passwordAlert MessagePlease update the default password on IPMI devices for user(s): {users} to harden the security on the clusters. Nutanix highly recommends not using default passwords for clusters.\n\nCheck ID111050DescriptionPC is using the default passwordCauses of failureSSH access credentials have not changed from the shipped default.ResolutionsEnsure that Prism Central is using a custom password for improved security.ImpactInsecure configuration - Using default passwords could expose your system to unauthorized access.Alert IDA111050Alert TitlePrism Central uses the default passwordAlert Smart TitlePrism Central <PC_IP> using default passwordAlert MessagePlease update the default password on PC for user(s): {users} to harden the security on the clusters. Nutanix highly recommends not using default passwords for cluster\n\nCheck ID160145DescriptionFile Server is using default password.Causes of failureSSH access credentials have not changed from the shipped.ResolutionsEnsure that all FSVMs are using a custom password for improved security.ImpactInsecure configuration - Using default passwords could expose your system to unauthorized access.Alert IDA160145Alert TitleFile Server using default passwordAlert Smart TitleFile Server {file_server_name} VM {fsvm_ips} is using default passwordAlert MessagePlease update the same custom password on FSVM(s) {fsvm_ips} of File Server {file_server_name} for account nutanix to harden the security of the clusters. Nutanix highly recommends not using default passwords for clusters.. NOTE: From NCC 4.6.4, Dell, HPE, Lenovo and Fujitsu hardware throws the following error for the check 'default_password_check':\n\nERR : Error fetching IPMI user list.\nThis is a False positive. Please upgrade NCC to 4.6.5 to resolve the issue.\n\nIf a cluster has elevated security enabled (hardened system), the password complexity requirements will be increased. See the Nutanix Security Guide document for further guidance. From NCC-4.3.0.1, the check's severity has been changed to CRITICAL/FAIL. Use the below commands to update the passwords for the user entities used for CLI access on host/CVM/PCVM and the user for Web UI access for IPMI. The Nutanix Controller VM (CVM) 'nutanix' AccountUse the below command (run from any CVM) to change the CVM 'nutanix' user password. The change will be automatically replicated to the other CVMs in the cluster. This will change the 'nutanix' user password for all CVMs.\n\nERR : Error fetching IPMI user list.\n\nThe Hypervisor Local Accounts\n\nAHV\n\n\tThe following three commands help to change the password for the root, admin, and nutanix accounts at the AHV host level and on all the hosts in the cluster. Do not modify the command. It will ask for the new password twice and will not display it.\t\tUse the below command to change the local 'root' account password for all AHV hypervisors in the Nutanix cluster. 
This can run from any CVM in the cluster.\n\n\tERR : Error fetching IPMI user list.\n\n\tUse the below command to change the local 'admin' account password for all AHV hypervisors in the Nutanix cluster. This can run from any CVM in the cluster.\n\n\tERR : Error fetching IPMI user list.\n\n\tUse the below command to change the local 'nutanix' account password for all AHV hypervisors in the Nutanix cluster. This can run from any CVM in the cluster.\n\n\tERR : Error fetching IPMI user list.\n\n\tRefer to the AHV Host Access chapter of the AHV Administration Guide for more information about root, admin, and nutanix accounts and the steps to change passwords for these accounts.\t\tNote: If password change fails on one or more hosts with the \"passwd: Have exhausted maximum number of retries for service\" error, it may indicate that provided password does not comply with the password policy. Try logging in to the affected AHV host and manually changing the password.\n\tVMware ESXi\n\tUse the below command to change the local root password for all ESXi hypervisors in the Nutanix cluster. This can run from any CVM in the cluster. Do not modify the command. It will ask for the new password twice and will not display it.\n\n\tERR : Error fetching IPMI user list.\n\tMicrosoft Hyper-V\n\tUse the below command to change the local administrator password for all Hyper-V hypervisors in the Nutanix cluster. This can run from any CVM in the cluster. Do not modify the command. It will ask for the new password twice and will not display it.\n\n\tERR : Error fetching IPMI user list.\n\n\tDocumentation: AOS 5.5.x (and later) Hyper-V Administration for Acropolis: Changing the Hyper-V Host Password\n\t\n\nThe Prism Element and Prism Central 'admin' and 'nutanix' Accounts\n\nThe initial login will guide you through a password change for the 'admin' account for Prism Central and Prism Element. Note that making the password change on any Prism Element will automatically change the rest of the Prism Elements in the cluster.\n\nFor the Prism Central local 'nutanix' user account:\n\nSSH into the Prism Central VM.Log in as user 'nutanix' with the default credentials.Enter the below command:\n\tERR : Error fetching IPMI user list.\n\tFollow the prompts to enter a new password for the local 'nutanix' user.\n\nFor Prism Element admin account:\n\nLog in to the Prism Element page and click on admin user at the top right corner.Click on \"Change password\" and enter a new password.\n\nIf you have forgotten the \"admin\" user password and are locked out of the account, you can change the password from CLI using the below command from any CVM/PCVM:\n\nERR : Error fetching IPMI user list.\n\nNutanix NX Host IPMI (Out-of-Band Management) 'ADMIN' AccountThe below command will change the IPMI 'ADMIN' user password across all hosts in the Nutanix cluster if the password is identical for all IPMIs. If the passwords are different, refer to IPMI Password Change to update the passwords directly from each host. Note that platforms with BMC version pre-3.40 cannot set passwords with special characters. If password recovery is needed, refer to IPMI Password Change. This can run from any CVM in the cluster. Change <CURRENT_PASSWORD> and <NEW_PASSWORD> to the appropriate values for your environment.\n\nERR : Error fetching IPMI user list.\n\nNote: For nodes running BMC Firmware 7.0 or above, changing passwords that are longer than 16 characters requires adding \"20\" to the end of the command, or the remaining characters will be truncated. 
This can run from any CVM in the cluster. Change <CURRENT_PASSWORD> and <NEW_PASSWORD> to the appropriate values for your environment.\n\nERR : Error fetching IPMI user list.\n\nNote: When you specify the new password, including special characters for bash (GNU Bourne-Again SHell), for example, \", ', !, $, and so on, escape them appropriately. The bash may expand the special meaning and generate a password string different from what you expect. For example, using ! followed by a number, the shell will replace it with a command from the bash history. Refer to the bash manual for details.\n\nThe Nutanix File Server VMs (FSVMs) 'nutanix' Account:\n\nOn the CVM, list File Server details:\n\tERR : Error fetching IPMI user list.\n\tSSH to the File Server virtual IP address listed in the output above.Change the File Server VM password:\n\tERR : Error fetching IPMI user list.\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: pd_clones_check. ### Assistant: The NCC health check pd_clones_check verifies the presence of PD (protection domain) clones older than 7 days and verifies if more than 3 clones are restored from a PD.. The NCC Health Check pd_clones_check alerts you of any existing PD (protection domain) clones older than 7 days or a PD with 3 or more clones restored. This NCC check identifies the clones called protection domain clones or backup clones. They are created by running the API v2 call that is only meant to be used by 3-rd party backup software: \n\nPOST /protection_domains/{name}/snapshots/{snapshot_id}/clone\n\nDo not use the above call for normal VM restoration/cloning. This is meant only for backup software.The goal of the NCC check is to alert the Nutanix administrator of a large number of these clones or old clones that may be consuming storage space on the cluster.The NCC alert is raised if either the clone age or the clone count or both are above the configured threshold. You can change the threshold for clone age and clone count in Prism.\n\nRunning the NCC CheckRun the following check as part of the full NCC Health Checks: \n\nPOST /protection_domains/{name}/snapshots/{snapshot_id}/clone\nOr run the following check separately:\n\nPOST /protection_domains/{name}/snapshots/{snapshot_id}/clone\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. 
Select All checks and click Run.Sample OutputFor status: PASS\n\nPOST /protection_domains/{name}/snapshots/{snapshot_id}/clone\n\nFor status: FAIL\n\nPOST /protection_domains/{name}/snapshots/{snapshot_id}/clone\n\nOutput messaging\n\nCheck ID110245DescriptionCheck for any clones restored from protection domain snapshots that are too old.Causes of failureClones were not removed automatically.ResolutionsRemove any clones restored from protection domain snapshots that are too old.ImpactStorage performance may be degraded.Alert TitleFound old clones on cluster.Alert MessageFound count clones related to protection domain pd_name which are older than the currently configured threshold of clone_age_threshold days.ScheduleThis check is scheduled to run every day, by default.Number of failures to alertThis check will generate an alert after 1 failure.\n\nCheck ID110246DescriptionCheck if there are too many clones restored from protection domain snapshots.Causes of failureClones were not removed automatically.ResolutionsRemove any clones restored from protection domain snapshots that are too old.ImpactStorage performance may be degraded.Alert TitleToo many clones on cluster.Alert MessageFound count clones related to protection domain pd_name which is more than the currently configured threshold of clone_count_threshold.ScheduleThis check is scheduled to run every day, by default.Number of failures to alertThis check will generate an alert after 1 failure.. To restore VMs from snapshots as clones on a DR site use using restore_enties call instead of the clone call:\n\nPOST /protection_domains/{name}/restore_entities\n\nVerify if these clones are created by backup software. If they are, work with the backup vendor to determine if the clones are properly getting deleted and/or if you need to increment the thresholds.When the NCC check indicates there are clones present, you have determined these clones are no longer needed, and the backup software is unable to delete these clones, contact Nutanix Support to remove them.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Importing VHDX 4k logical disk alignment is not supported on AHV. ### Assistant: VHDX images with 4k logical alignment either uploaded via Prism or imported via Move will fail with Assertion.. Importing VHDX with a 4k logical disk alignment is not supported on AHV. Qemu-img will fail with errors due to the logical disk alignment when importing disks into AHV or migrating with Move. In Prism after importing the image you will see the following failure of an Image Update task:\n\nIn the acropolis.out the file you will find an error, stating it failed to convert the image.\n\n2020-09-22 13:25:14,646Z INFO download_util.py:160 File size for nfs://127.0.0.1/SelfServiceContainer/.acropolis/image/dda40e40-ed6e-49c4-b010-3fd285e9215c is 35353542656\nEach error message will refer to an Assertion. This is because qemu-image does not know how to convert a 4k logically aligned disk. 
\n\n2020-09-22 13:25:14,646Z INFO download_util.py:160 File size for nfs://127.0.0.1/SelfServiceContainer/.acropolis/image/dda40e40-ed6e-49c4-b010-3fd285e9215c is 35353542656\nFrom AOS 5.15.5+ LTS and  AOS 5.19+ STS, the error message has been improved to say:\n\n2020-09-22 13:25:14,646Z INFO download_util.py:160 File size for nfs://127.0.0.1/SelfServiceContainer/.acropolis/image/dda40e40-ed6e-49c4-b010-3fd285e9215c is 35353542656\nTo view the properties of the disk run the following on a Hyper-V host using Powershell to confirm the disk properties.\n\n2020-09-22 13:25:14,646Z INFO download_util.py:160 File size for nfs://127.0.0.1/SelfServiceContainer/.acropolis/image/dda40e40-ed6e-49c4-b010-3fd285e9215c is 35353542656\nThe below example of a VHDX disk has a logical sector size set 4k:\n\n2020-09-22 13:25:14,646Z INFO download_util.py:160 File size for nfs://127.0.0.1/SelfServiceContainer/.acropolis/image/dda40e40-ed6e-49c4-b010-3fd285e9215c is 35353542656\n . AHV does not support importing logical 4k aligned VHDX images from Hyper-V source VMs. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about VMDK image upload fails with \"Invalid footer\" error. ### Assistant: Creating an image from corrupted VMDK fails with \"Invalid footer\" error. Uploading a disk image to Prism > Image Configuration that involves a conversion from vmdk to the AHV qcow2 format may show the state as \"Inactive\" and the size as \"-\".\n\nImage uploads are logged in the /home/nutanix/data/logs/acropolis.out log file of the Acropolis leader.\n\nKB 2305 describes steps on how to find an Acropolis leader.\n\nLook in the acropolis.out log file on the indicated node for \"qemu-img\", which will show conversion attempts from vmdk format to qcow2 format.\n\nnutanix@cvm$ grep qemu-img ~/data/logs/acropolis.out\n\nThe \"Invalid footer\" indicates the vmdk is corrupt, which will create a corrupted qcow2 image.\n\nA corrupt vmdk can also be verified by using a standalone version of the qemu-img conversion tool.. The vmdk disk image needs to be exported again.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: ahv_rpmdb_integrity_check. ### Assistant: NCC 4.3 The ahv_rpmdb_integrity_check verifies the integrity of RPM database on the AHV host. The NCC health check ahv_rpmdb_integrity_check verifies the integrity of the RPM database on the AHV host. RPM database inconsistencies can lead to the host becoming non-upgradable.\n\nRunning the NCC Check\n\nYou can run this check as part of the complete NCC Health Checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr you can run the check separately:\n\nnutanix@cvm$ ncc health_checks run_all\n\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. 
Select All checks and click RunThis check applies only to the AHV hypervisor and does not run on Prism Central.The check is scheduled to run every 6 hours.This check will not generate an alert.Sample outputFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor Status: WARN\n\nnutanix@cvm$ ncc health_checks run_all\nFor Status: ERR\nnutanix@cvm$ ncc health_checks run_all\nOutput messaging\n\nCheck ID111093DescriptionChecks if RPM database is inconsistentCauses of failureFound corrupted RPM database(s) on AHV host(s).ResolutionsSource of corruption needs to be identified and fixed.\t\t\tRPM DB may need repair or host may need to be re-imaged.\t\t\tRefer to KB 10397 for details.ImpactAHV host upgrades may fail\n. Multiple improvements were made in RPM database handling in AHV 20220304.242.Upgrade AHV to version 20220304.242 or newer to prevent the issue from happening.If this check is failing, the source of inconsistencies needs to be identified and fixed. RPM DB may need repair, or the host may need to be re-imaged.To speed up the resolution time, collect and attach the following information to the support case:\n\nA complete NCC report:\n\tnutanix@cvm:~$ ncc health_checks run_all\n\tA log bundle is generated from the cluster. This can be collected through Prism Web Console's health page. Select Actions > Collect Logs. Logs can also be collected through the command line using logbay (KB 6691 - NCC - Logbay Quickstart Guide):\n\tnutanix@cvm:~$ ncc health_checks run_all\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about CVM cannot power on after ESXi host is configured as dedicated failover host in HA configuration. ### Assistant: CVM cannot power on after ESXi host is configured as dedicated failover host in HA configuration. If an ESXi host is set as a dedicated failover host in vSphere HA configuration, the Controller VM (CVM) cannot boot up after shutdown. You see an error message similar to:\n\nno available resources\n\nvSphere HA feature does not allow any VMs running on a host that is set as a dedicated failover host. But in a Nutanix cluster, the CVM needs to run on each host.\n\nFrom VMware vSphere Availability guide\n\n\"To ensure that spare capacity is available on a failover host, you are prevented from powering on virtual machines or using vMotion to migrate virtual machines to a failover host. Also, DRS does not use a failover host for load balancing.\". Do not configure dedicated failover hosts.\n\nRefer to vSphere HA Settings in the vSphere Administration Guide for Acropolis.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about OVA export from Prism Central fails with 'Internal Server Error. kInternalError: Internal Error: 5 :Error accessing ovas capabilities'. ### Assistant: Following steps for 'Exporting a VM as an OVA' in the Prism Central (PC) Infrastructure Guide, OVA export from PC fails with 'Internal Server Error. kInternalError: Internal Error: 5 :Error accessing ovas capabilities'.. Issue encountered when following the instructions in the Prism Central (PC) Infrastructure Guide for 'Exporting a VM as an OVA'.\n\nAfter clicking 'Export', the following error occurs:\n\nInternal Server Error. 
kInternalError: Internal Error: 5 :Error accessing ovas capabilities.\n\nThe Prism Element (PE) clusters to which the OVA is being exported meet the prerequisites of a minimum AOS version of 5.18 and a minimum of PC 5.18. Performing this task with the VM powered on or off or exporting with disk format as QCOW2 or VMDK returns the same error.\n\n\n\nLog messages that can be seen in Aplos (~/data/logs/aplos.out) on Prism Central VM:\n\nInternal Server Error. kInternalError: Internal Error: 5 :Error accessing ovas capabilities.\n\nMetropolis (~/data/logs/metropolis.out) log may also record the error:\n\nInternal Server Error. kInternalError: Internal Error: 5 :Error accessing ovas capabilities.. This is caused by an incorrect Prism Central certificate setup.\n\nWhat is expected:\n\nnutanix@PCVM$ sudo sed -e 1b -e '$!d' /home/private/server.key\n\nExample of incorrect certificate setup:\n\nnutanix@PCVM$ sudo sed -e 1b -e '$!d' /home/private/server.key\n\nGenerate a self-signed certificate that adheres to the formatting the system expects by issuing the command below from a Prism Central VM:\n\nnutanix@PCVM$ sudo sed -e 1b -e '$!d' /home/private/server.key\n\nAt this point, you should be able to test the OVA export. Review and correct your custom SSL certificates before you upload them again to Prism Central.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Prism console slowness for Windows VMs running on AHV. ### Assistant: Slow Prism VM console for Windows. In some scenarios visual performance of the VM console in Prism UI may be degraded. Symptoms include:\nTyping is not immediately recognized on screen. That is, user is typing and there are several seconds of delay until the text is shown on screen.Mouse pointer control can be slow or frozen.Switching between opened applications is very slow in responsiveness.\n\nPlease note that at the same time VM itself may perform fine. Symptoms may get worse with increased latency between users computer and VM running on cluster.The reason behind this is because the entire screen must be scraped many times a second (refresh rate) as AHV has no visibility into what is happening inside the guest OS. This process is expensive in nature as it is continuously parsing the framebuffer which introduces some level of latency.Nutanix recommends using Microsoft Remote Desktop (RDP) to connect to Windows VMs.. Improve the general usability performance of the console by disabling the visual effects in Performance Options:\nNavigate to System Properties: Navigate to Advanced System SettingsNavigate to Performance Settings:Select \"Adjust for best performance\"Note: In a VDI environment, please ensure that this is updated on the master/gold image.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about How to create Windows installation ISO with built-in VirtIO drivers. ### Assistant: Instructions for adding Nutanix VirtIO drivers into a Windows ISO for easier, or automated, installation on AHV.. When deploying a new Windows VM on AHV typically VirtIO ISO package needs to be mounted and the SCSI VirtIO driver selected for the Windows installer to detect the disk for OS deployment. To simplify the deployment process it is possible to proactively add VirtIO drivers to Windows installation ISO.. 
Prerequisites\nPowerShell.Windows ADK (Deployment and Imaging Tools Environment).Windows installation ISO.Nutanix VirtIO driver package. It is recommended to use the latest version which can be downloaded from Support Portal.Administrative privileges on your Windows admin workstation.\n In this manual Windows Server 2019 and VirtIO 1.1.6 are used as an example.\n\nWindows VirtIO injection setup\n\nMount the Windows Server 2019 and VirtIO ISOs:\t\tNote the drive letters of each mounted ISO in your specific environment. In this guide, we use the following drive mappings:\tWindows ISO image: D:\tNutanix VirtIO image: F:\tTemporary folder: C:\\VirtIO\\Open PowerShell as Administrator. Create temporary folders on C: drive:\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nCopy the mounted Windows ISO/Image contents into windows_temp folder\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nChange the image \"C:\\VirtIO\\windows_temp\\sources\\\"  files boot.wim and install.wim file permissions to read/write in the windows_temp directory\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\ninstall.esd to install.wim conversion (optional)If the install file in the sources directory (C:\\VirtIO\\windows_temp\\sources\\) is install.wim, then proceed to the next step. Otherwise, continue with this section.ESD and WIM files may contain multiple Windows editions. In this example will use ImageIndex: 1\n\nReview install.esd images and choose a distribution: \n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nUse DISM to export image to install.wim file. Note the use of \"/SourceIndex:1\":\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nVerify image export:\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\nInjecting VirtIO driversYou may find two wim files on Windows installation ISO:\n\nboot.wim is used for the initial boot and installation of Windows.install.wim contains the actual OS, which is installed on the hard drive.\nWe need to inject drivers both into boot.wim and install.wim files.Perform the following steps to inject drivers into boot.wim:\n\nReview and choose an index that belongs to \"Microsoft Windows Setup\" in boot.wim:\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nMount the boot.wim file for driver injection. 
Note the use of \"-Index 2\":\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nInject the Nutanix VirtIO drivers into the boot.wim image:\n\n\tPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\tUse \"Get-WindowsDriver -Path C:\\VirtIO\\mountBoot\\\" command to verify that drivers were injected.\n\nSave and unmount the boot.wim image:\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\nPerform the following steps to inject drivers into install.wim:\n\nReview and choose an index that belongs to OS edition that you need to deploy:\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nMount Windows image:\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nInject the Nutanix VirtIO drivers into the install.wim image:\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nSave and unmount the install.wim image:\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nCreate ISO using OSCDIMGYou must have Windows ADK (Deployment and Imaging Tools Environment) installed on your admin workstation to perform these steps.\n\nOpen Deployment and Imaging Tools Environment as Administrator\n\n\n\nRun oscdimg command (note the lack of _space_ between some options and values):  oscdimg -l<MOUNTED ISO LABEL> -m -u2 -b<SOURCE PATH> <DESTINATION PATH>\tMounted ISO label:        WINSERV2019_virtio-oscdimg\tSource Path:           C:\\VirtIO\\windows_temp\\boot\\etfsboot.com\tDestination Path     C:\\VirtIO\\WINSERV2019_VirtIO-oscdimg.iso\n\nPS C:\\> mkdir C:\\VirtIO\\windows_temp,C:\\VirtIO\\mountBoot,C:\\VirtIO\\mountInstall\n\nUpload and Test image in a Nutanix AHV Cluster.Clean up the temporary folder C:\\VirtIO on the admin workstation to release disk space used by this workflow.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A1101 - MetadataUsageCritical. ### Assistant: Investigating MetadataUsageCritical issues on a Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert A1101 - MetadataUsageCritical for your Nutanix cluster.\n\nAlert Overview\n\nThe MetadataUsageCritical alert is generated when Stargate detects that metadata disk's global metadata usage goes above the 50% of the statfs_disk_size (not the raw disk capacity).\n\nSample Alert\n\nFor status: WARN\n\nID : 27fc16df-xxxx-xxxx-xxxx-927b8067b8a9\n\nOutput messaging\n\nCheck ID130025DescriptionMetadata Usage High.Causes of failureThe cluster either has too many snapshots or too much data is being fingerprinted.ResolutionsReduce metadata size by removing unneeded snapshots, creating snapshots less frequently, creating snapshots of fewer VMs, or expanding the cluster size.ImpactCluster performance may be significantly degraded.Alert IDA1101Alert TitleMetadata Usage HighAlert Smart TitleMetadata usage on Controller VM service_vm_external_ip has exceeded critical_pct%.Alert MessageMetadata usage on Controller VM service_vm_external_ip has exceeded critical_pct%.. 
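As a quick reference for the troubleshooting steps that follow, the per-node Cassandra metadata load can be listed from any CVM with nodetool (a sketch; the exact column layout can differ between AOS versions):

nutanix@cvm$ nodetool -h 0 ring   # review the Load and Owns columns for imbalance across nodes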
TroubleshootingFor troubleshooting the A1101 alert, you need to understand:\n\nHow much above the 50% threshold did metadata size growWhich disks in a node are metadata disks, and what is their data filesystem (statfs) sizeWhat is the metadata size stored on this node in a Cassandra ring, and whether the ring is balancedSnapshot chain length and RPOs configured for DR snapshots or third-party backupsAdditionally, understanding the workload a cluster was sized for and features enabled on its storage containers\nCheck stargate.INFO log to find the reported metadata usage percentage above the 50% threshold:\nnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\n\nCheck sizes for disk data partitions:\n\nnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\n\nExample: \n\nnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\n\nNote: Up to 4 SSDs can be used by Cassandra to store global metadata, even if the node is all-flash and has more SSDs, as in the example above.\nFind which of the disks on the node store Cassandra metadata: \nnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\n\nExample:\n\nnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\nCheck Cassandra metadata load across the cluster: \nnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\n\nExample:\n\nnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\nRun the NCC ring_balance_check to make sure the Cassandra ring is balanced. One or two nodes might report higher than the average Load or Owns column. Consider rebalancing the ring to distribute the metadata more evenly in such a case. See KB-1342 for more details on the ring_balance_check.Run the NCC snapshot_chain_height_check to identify whether the snapshot chain is too long. See KB-1732 for more details on the snapshot_chain_height_check.Alternatively, check the longest snapshot chain as follows:\nnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\nCheck in Prism whether Deduplication is enabled on any of the storage containers in the cluster. Deduplication is ill-advised for some workloads. In such a case, deduplication may increase metadata footprint without providing expected capacity savings. For more information on workloads suitable for deduplication, see Deduplication Best Practices chapter in the Prism Web Console Guide.Run the NCC cassandra_ssd_size_check to identify possibly undersized SSD tier in a node. See KB 8842 for more details on cassandra_ssd_size_check.\n\nResolving the Issue\n\nIf the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support.\n\nCollecting Additional Information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691. \n\tnutanix@CVM:~$ allssh \"grep MetadataUsage ~/data/logs/stargate.INFO\"\n\t\n\nAttaching Files to the CaseAttach the files at the bottom of the support case on the support portal. If the size of the NCC log bundle being uploaded exceeds 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A1157 - RemoteSiteInvalidNetworkMapping. ### Assistant: Investigating RemoteSiteInvalidNetworkMapping issues on a Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert RemoteSiteInvalidNetworkMapping for your Nutanix cluster.Alert Overview\n\nThe RemoteSiteInvalidNetworkMapping alert is generated if either the source or destination network configuration is not present in network mapping specified in the remote site, or the networks specified in the network mapping do not exist.Sample Alert\n\nBlock Serial Number: 16SMXXXXXXXX\nOutput Messaging\n\nCheck ID130063DescriptionInvalid Network Mapping Specified.Cause of failureEither the source or destination network configuration is not present in the network mapping specified on the remote site, or the networks specified in the network mapping do not exist.ResolutionsDelete existing network mappings and create the network mapping with the associated networks present on the source and the destination cluster for the remote site.ImpactWhen the VM is restored or cloned on the remote site, networking configuration may not be recovered.Alert IDA1157Alert TitleRemote Site Network Mapping Invalid.Alert MessageInvalid network mapping specified for remote site {remote_name}: {reason}.. Resolving the Issue\n\nCreate network mapping with the networks present on the source and the destination cluster for the remote site. Refer to Configuring Remote Site for more details.\n\nNote: Unicode characters in the ESXi network name may generate a RemoteSiteInvalidNetworkMapping alert for CHDR workflows. This alert will be generated after upgrading the cluster to AOS version 5.20.1 and later from a cluster with AOS 5.20 or a previous version.\n\nTo resolve the alert, delete existing network mappings on the source and the destination cluster for the remote site reported in the alert and reconfigure the network mapping. Refer to Configuring a Remote Site (Physical Cluster) documentation. \n\nIf you need assistance or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com.\n\nCollecting Additional InformationBefore collecting additional information, upgrade NCC. For information on upgrading NCC, refer KB 2871.Collect the NCC health check bundle via cli using the following command\n\nnutanix@cvm$ ncc health_checks run_all\n\nCollect the NCC output file ncc-output-latest.log. For information on collecting the output file, refer KB 2871.\n\nAttaching Files to the CaseTo attach files to the case, follow KB 1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.Requesting AssistanceIf you need assistance from Nutanix Support, add a comment to the case on the support portal asking for Nutanix Support to contact you. You can also contact the Support Team by calling on one of our Global Support Phone Numbers.Closing the CaseIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: remote_site_config_check. 
### Assistant: NCC 3.5.0. The NCC health check remote_site_config_check verifies the mapping configuration of a remote site and a vStore (container).. The NCC health check remote_site_config_check verifies the mapping configuration of a remote site and a vStore (container).\n\nThis health check uses the data from the cluster configuration to check if the local site configuration matches the remote site configuration:\n\nThe check looks for the Controller VM (CVM) IP addresses of the local site on the cluster configuration of the remote site. (A process within the cluster maintains the Controller VM IP addresses.)NCC then checks the parameters of the cluster configuration and examines how they are configured on both sites.\n\tThe number of vStores provided in the mapping is the same for the source and the remote site.The specified vStores are available on the remote site. This is checked on both the local and the remote sites.The vStore mapping is correct on the local and the remote sites.The network mapping exists on the local and remote sites. (Included in NCC 3.6)NCC checks \"remote_site_config_check\" does not do a network mapping check in NCC 3.6.3 and later versions.\n\t\n\nNote: If any of the above checks fail, NCC returns a FAIL status for the entire check.\n\nRunning the NCC check\n\nYou can run this check as part of the complete NCC health check as follows:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run this check individually by using the following command.\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 3 hours, by default.\n\nThis check does not generate an alert.\n\nOutput messaging\n\nCheck ID110006DescriptionCheck remote site configuration on local and remote sitesCauses of failureRemote site is not properly configured.ResolutionsReview KB 3335.ImpactNutanix snapshot replication might fail to the remote site.. NCC check remote_site_config_check returns INFO for protection domains without remote container mappings.\n\nIf the NCC check completes and returns an INFO for protection domains configured for local backups (no replication), this check result can be ignored. These protection domains usually do not have a remote site listed.If replication is desired and there is a remote site configured, check the container or vStore mapping on both sites to ensure there is a 1:1 relationship. You can do the following:\n\tCreate a valid vStore mapping on the remote site configuration of the source cluster and on the remote cluster. A vStore mapping maps a container on the source cluster with a container on the destination cluster. 
See the Prism Web Console Guide for more information.\n\t\n\nIf the NCC check remote_site_config_check returns a FAIL, run the following commands to review the configuration on the remote site.\n\nnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\nReplace ProtectionDomainName with the name of the protection domain.\n\nnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\nReplace RemoteSiteName with the name of the remote site.\n\nRun this command on the remote site and compare the output - mainly the Capabilities and the VStore Map\n\nNote:\n\nThe data obtained from the output of these commands are useful in identifying configuration issues and troubleshooting the failure.If the protection domain name or the remote site name is not specified in the above commands, data for all the protection domains or the remote sites configured is displayed.Modifying this vStore mapping information will cause a full replication to take place.The Test Connection option seen on the Data Protection page checks network connectivity and required ports for communication. It does not check for VStore or container mappings.\n\nTo list all the remote sites and vStore mappings, run the following command.\n\nnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\nSample output:\n\n\n\tIf the number of vStore mappings does not match on both sites, you will see a message similar to the following:\n\n\tnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\n\tReview the vStore mappings on both sites, and ensure that the number of vStores match.\n\t\n\tIf the vStore mapping is incorrect on either site, a message similar to the following is displayed:\n\n\tnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\n\tReview the mapping. It should be mapped in the right order.\n\n\tExample: Site A if you have Container Mapping set to Container_A: Container_B on Site B the mapping should be Container_B: Container_A\n\t\n\tOn the remote site, for the corresponding site pair, if the network mapping is missing under remote configuration, a message similar to the following will be displayed:\n\n\tnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\n\tReview the configuration on the remote site listed in the failure and update the networking configuration.\n\t\n\tIf Site B is configured as the remote site on Site A, but on Site B, the remote site configuration is missing, a message similar to the below is seen:\n\n\tnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\n\tThis check only checks the remote site configuration. This check does not check for connectivity. To check connectivity between the sites, review KB 1596: NCC Health Check: remote_site_connectivity_check\n\n\tReview the remote site configuration on both sites to ensure that the remote sites have been configured.\n\n\tExample: Site A should have Site B as its remote site. Similarly, Site B should have Site A as its remote site.\n\t\n\tPrior to NCC 3.8: Number of vStores mapped for the local site on the remote site is not the same.\n\n\tIn NCC versions prior to 3.8, this check may fail with a similar result as the output below.\n\n\tnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\n\tAn issue was identified that triggered this alert if both names of the two clusters are changed. In this case, the \"Entity Centric\" entries are modified but a Legacy entry still remains. The legacy entry is there in case the cluster name changes. 
Since both EC and non-EC entries share the same IP and uuid, the workflow of the check compares a non-EC remote site object with EC remote site object configured on target. Since no check for remote site type is present here, the check fails. You can confirm by checking that the name of the remote site is shown as: remote_<ip>_<uuid>. If you see this entry, upgrade NCC to version 3.8 or later and this check should pass.\n\t\n\tIf the DR network segmentation is enabled on Site A but not added in the remote site B \n\n\tnutanix@cvm$ ncli pd ls name=ProtectionDomainName\n\tThis check might fail on the source site if the remote site has multiple remote sites added to it, you can confirm this by listing the remote sites on the remote cluster using \"ncli remote-site ls\" or via Prism and going to Data Protection > Remote Site and reviewing the sites added. this failure in the check should be addressed in a future NCC release\n\nYou can update the configuration or the mappings by modifying the remote site configuration through the Prism web console.\n\nTo correct the mapping/configuration, do the following:\n\nLog on to the Prism web console and go to the Data Protection page.Select the Table view.On the Remote Site tab, select the remote site, and then click Update.Correct the mapping.\n\nIn case the above mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/\n\nGather the output of the below command and attach it to the support case:\n\nnutanix@cvm$ ncli pd ls name=ProtectionDomainName. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: microservice_infrastructure_status_check. ### Assistant: The NCC health check \"microservice_infrastructure_status_check\" alerts the user if CMSP is in a healthy state.. The NCC health check \"microservice_infrastructure_status_check\" alerts the user if the CMSP enablement fails. This will also alert the user when Prism Central continues to run without the microservice platform enabled.The check is introduced in NCC 4.6.2.The check has the following parts:\nIf status is \"Failed\" or \"Error\" state, FAIL the check.If status is \"Enabled\" but components are UNHEALTHY, return WARNING.If status is \"Enabled\" and components are HEALTHY, PASS the check.If status is \"Disabled\" or \"Enabling\" state, return WARNING.If none of the above, CMSP is not enabled on the setup.\nRunning the NCC check.\n\nRun the NCC check as part of the complete NCC health checks.\n\nnutanix@PCVM:-$ ncc health_checks run_all\nOr run the microservice_infrastructure_status_check check separately.\n\nnutanix@PCVM:-$ ncc health_checks run_all\n\nYou can also run the checks from the Prism Web Console Health page: select Actions > Run Checks. Select All checks and click Run\n\nThis check is scheduled to run every 5 minutes by default. 
This check will generate an alert after the first failure across scheduled intervals.\n\nFor status: PASS\n\nnutanix@PCVM:-$ ncc health_checks run_all\n \n\nFor status: FAIL\n\nnutanix@PCVM:-$ ncc health_checks run_all\n \n\nFor status: WARN\n\nnutanix@PCVM:-$ ncc health_checks run_all\n \n\nFor status: ERROR\n\nnutanix@PCVM:-$ ncc health_checks run_all\n \n\n\nCheck ID200335DescriptionCheck Microservices Infrastructure enablement status and components healthCauses of failureThis alert indicates that either Microservice Infrastructure enablement has failed or the components are in an unhealthy state.ResolutionsCheck for any error in logs or contact Nutanix Support. Refer to KB-13136 for more details and troubleshooting steps.ImpactMicroservice Infrastructure is not in a healthy state. Please check for errors in logs or contact Nutanix Support.Alert IDA200335Alert TitleMicroservices Infrastructure, not in a healthy state.\n. NOTE:  PC versions lower than pc.2022.9 may see the WARN alert after upgrade to NCC 4.6.3 even though they have CMSP disabled in the cluster. Upgrade NCC to the latest version for a permanent fix of this issue.For other cases, If the check reports a FAIL:\n\nVerify if the MSP cluster shows \"MSP Ready to Service.\" If a check is a FAIL, this will also confirm if the microservice infrastructure is enabled on Prism Central.\n\tnutanix@PCVM:~$ mspctl cluster list\n\tIf microservice infrastructure is not enabled, then manually enable microservice infrastructure. Login to Prism Central > Settings > Prism Central Management, check section > \"Prism Central on Microservices Infrastructure. If the section says “Enable Now,” the microservice infrastructure is not enabled.\t\n\nNOTE: If the version of PC is 2022.6.x and this is present in your PC Management Dashboard, and if the above command does not list anything, the Fail alert may still be triggered.  This is a known issue with NCC 4.6.2 and has been fixed in the latest NCC version. Kindly upgrade NCC to the latest version for a permanent fix.\nIf the check reports a WARN:\n\nVerify if the MSP infrastructure is healthy on the cluster. Run the following commands to check if all MSP components are healthy:\n\nnutanix@PCVM:~$ mspctl cluster list\n\nCheck if the \"msp_controller\" service is UP and running fine:\n\nnutanix@PCVM:~$ mspctl cluster list\n\nIf your cluster runs NCC 4.6.2 on Prism Central version less than or equal to pc.2022.6.x with microservice infrastructure enabled, this NCC check may raise an ERROR. Verify if the microservice infrastructure is healthy and enabled by following the steps above. After verifying that the MSP infrastructure is correctly working, customers can safely IGNORE the ERROR message raised by the NCC check. The fix is available in pc.2022.6.0.2 and above. If this check reports an ERROR on pc.2022.9 and above, engage Nutanix Support.\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Prism - No compatible products found for the uploaded cluster summary file. ### Assistant: PE licensing error 'No compatible products found while licensing cluster'. When a customer tries to update a license on a cluster, they get the below error:\n\nError : No-compatible-products-found\n. The licenses need to be applied through Prism Central. 
If customers have not installed Prism Central, their AOS version needs to be 6.0.1 or higher, and the Prism Central version needs to be 2022.4 or higher. After upgrading to the current version of Prism Central, the licenses will be applied by downloading the cluster summary file from Prism Central. \n\nIf you need assistance installing Prism Central or applying the license, contact Nutanix Support.\n\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Prism - Support of AES encryption / SHA digest. ### Assistant: AOS/Prism Central support only the 'AES' encryption and 'SHA' digest algorithm for SNMP. This 'AES' means 'AES-128', and 'SHA' means 'SHA-1'.. Several AES encryption options exist. The naming used indicates the protocol followed by the length of the key after a dash, for example, AES-128, AES-256, SHA-1, SHA-2, SHA-256, etc.The Prism UI provides no option to select the length of the AES or SHA key:This article explains which AES encryption / SHA digest variant is supported in Prism SNMP settings. . Nutanix SNMP AES/SHA implementations comply with the below standards below:\nRFC-3826 defines only AES-128. AES in Prism uses AES-128.RFC-3414 defines two types of encryption protocols: HMAC-MD5-96 and HMAC-SHA-96. HMAC-SHA-96 uses the SHA-1 digest algorithm (RFC-3174), which is what Prism uses. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: cvm_dvportgroup_binding_check. ### Assistant: NCC 3.6.2. The NCC health check cvm_dvportgroup_binding_check checks if the CVM (Controller VM) NIC is connected to a dvPort group with ephemeral binding and recommends static binding if it is so.. The NCC health check cvm_dvportgroup_binding_check checks if the CVM (Controller VM) NIC is connected to a dvPort group with ephemeral binding and recommends static binding if it is so.\n\nFor virtual machines backed by dvPort group with ephemeral/dynamic binding in a ESXi environment, there is a possibility that the virtual machines may lose network connectivity after a host reboot. This is a known VMware issue described in https://kb.vmware.com/kb/1021938.\n\nThis check has been introduced for NCC version 3.6.2 and above.\n\nRunning the NCC check\n\nThis NCC check runs as part of a full health check:\n\nnutanix@cvm$ ncc health_checks run_all\n\nIt can also be run as an individual health check:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check will run on ESXi hypervisor platforms.\n\nThis check is not scheduled to run at an interval.\n\nThis check will not generate an alert.\n\nSample output\n\nFor status PASS:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID3025DescriptionCheck for CVM dVportgroup bindingCauses of failureOne (or more) Nutanix cluster Controller VM NIC is connected to a dVportgroup with ephemeral binding.ResolutionsChange the Dvportgroup binding to staticImpactThe cluster may experience downtime.. If the ephemeral port binding is used, change it to Static binding. 
For instructions on setting a static binding, refer to VMware documentation - https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.networking.doc%2FGUID-809743E1-F366-4454-9BA5-9C3FD8C56D32.html.\n\nThis is due to a pynfs issue. Nutanix used to recommend ephemeral binding when configuring dVswitch. This is no longer the case.\n\nIf you are using static binding or a standard switch, gather required logs (VMware support bundle) and contact Nutanix Support for further investigation.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about One Click Hypervisor (ESXi) Upgrade Caveats and Troubleshooting. ### Assistant: One-Click Hypervisor (ESXi) Upgrade Caveats and Troubleshooting. One Click Hypervisor Upgrade for ESXi has the following caveats that need to be kept in mind should the same fail.\n\nIssue #1\n\nURLDecoder: Illegal hex characters in escape (%) pattern - For input string:\n\n\n\nThis comes up if:\n\nThe Metadata and Binary files are either incorrect; orThe Binary file is chosen in place of the JSON file and vice versa\n\nSo once you click on Software Update - Hypervisor:\n\nHYPERVISOR METADATA FILE - this is the JSON fileHYPERVISOR BINARY FILE - this is the ESXi upgrade image\n\nUpgrade will further fail with a size mismatch if:\n\nThe JSON file and the ESXi image build numbers do not matchCorrect JSON build number and ESXi build number are chosen but the ESXi ISO image is used. For this to work, you will need the ZIP (Depot)\n\nSo essentially, using the below combination will fail either with a size mismatch or \"URLDecoder: Illegal hex characters in escape (%) pattern - For input string:\"\n\nVMware-VMvisor-Installer-6.0.0-2494585.x86_64.iso6.0.0-2494585.json\n\nIssue #2\n\nURLDecoder: Illegal hex characters in escape (%) pattern - For input string:\n\nURLDecoder: Illegal hex characters in escape (%) pattern - For input string:\n\nAttempted to modify the Hypervisor Upgrade JSON file to add an ISO file.\n\nAfter realizing the mistake, uploaded the correct zip file, but received the \"Bundle integrity verification\" error.\n\n\n\nIssue #3\n\nURLDecoder: Illegal hex characters in escape (%) pattern - For input string:\n\n\n\nIssue #4\n\nAvailable compatible versions show \"not qualified\"\n\n \n\nIssue #5\n\nURLDecoder: Illegal hex characters in escape (%) pattern - For input string:\n\n. Issue #1\n\nTo resolve this issue, the correct files needed are:\n\nVMware-ESXi-6.0.0-2494585-depot.zip and6.0.0-2494585.json\n\nNote: For this to work, the Virtual Center Server needs to be upgraded to vSphere 6.0 Build 2494585 or later prior to the ESXi upgrade. Else, the upgrade with fail with:\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nIf you are using the Virtual Center Virtual Appliance (Linux based VCVA), the account that needs to be used is root because administrator@vsphere.local might fail with a username password error.\n\nLogs will show the below:\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nShould the root account be used and the below messages come up, then the root account does not have access to the Virtual Center Inventory. 
In such a case, the root account will need to be provided access to the Virtual Center Inventory by using the administrator@vsphere.local and adding the root user to the SSO permissions so it has access to the inventory.\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nOr:\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nIf the above message is seen, log on with the root user to the Virtual Center Server and confirm you are unable to see the Inventory Objects (ESXi hosts, Virtual Machines, etc.)\n\nTo add permission for the root user, log on to the Virtual Center Server using the administrator@vsphere.local\n\nLog in to the Web client “https://client-hostname:9443/vsphere-client” with administrator@vsphere.local using the password of whatever you configured SSO password as. *Note you can only configure SSO using the Web Client*Navigate to vCenter Servers > Manage > Permissions, click on the + to add a userNow add the root (localhost) to vCenter along with the permissions required and click on OK.Try and login with the user you just configured and you should now have access to the Web client or the Windows vSphere UI Inventory\n\nShould the below be seen:\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nThe hosts need to be a part of the vSphere Cluster in vCenter.\n\nAdmission control can cause pre-upgrade to fail at 33%.\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nShould this be the case, go to Cluster Settings on Virtual Center and check the \"Disable: Power on VMs that violate availability constraints\".\n\n\n\nIn vCenter 6.5, this is how you disable Admission Control:\n\nIn the vSphere Web Client, browse to the vSphere HA cluster.Click the Configure tab.Select vSphere Availability and click Edit.Click Admission Control to display the configuration options.Select a number for the Host failures cluster tolerates. 
This is the maximum number of host failures that the cluster can recover from or guarantees failover for.Select an option for Define host failover capacity by:\n\tDisabledSelect this option to disable admission control and allow virtual machine power on that violate availability constraints.\n\n\t* Note the original configuration before disabling admission control so that you can put it back post-upgrade.\n\tClick OK.\n\nFor the One-Click to work, the following needs to be configured as these are a part of the Pre-Upgrade Checks and needs to be enabled prior to the upgrade attempt.\n\nroot user needs to have access to the Inventory ObjectsvMotion needs to be configured and enabledDRS needs to be enabled\n\nIssue #2\n\nRemove all downloaded files from CVMs:\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nIssue #3\n\nMost of the time, this issue is because the host SSH is disabled.\n\nRun \"ncc health_checks run_all\" and if you see below FAIL messages, enable SSH on the hosts and run the pre-upgrade check from Prism one click.\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nIssue #4\n\nAfter uploading the files (JSON and ZIP), you see a \"(not qualified)\" next to the ESXi version in Prism and there is a Retry button instead of Upgrade.\n\nYou see the following in the genesis.out:\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nDo the following:\n\nCheck MD5 value of the Hypervisor zip bundle and compare it to the vendor provided value (available from the VMware download or Dell download site in case of customized ISO)Check the MD5 value of the JSON file. Ensure that the latest available version is downloaded from portal.nutanix.com. In this case, the JSON had a different MD5 value, and downloading a fresh JSON should work.\n\nIssue #5\n\nMost of the time, this issue is because of the wrong credentials or wrong vCenter IP addresses.\n\nMake sure credentials are correct and to double-check the correct vCenter IP is provided, run \"hostssh grep serverIp /etc/vmware/vpxa/vpxa.cfg\" and you will see the IP address of vCenter that host is connected.\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nAnother way to check the vCenter IP address is run \"ncli ms list-management-server-info\".\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\n\nIssue #6\n\nThe upgrade process may get stuck on a one or multiple ESXi hosts when upgrading from ESXi 6.7 to ESXi 7.0. Customer may report that the upgrade is stuck for a long time at a particular percentage.\n\nHypervisor logs under /scratch/host-upgrade.log on the ESXi host will show Timeout occurred during entering maintenance modevCenter task \"Entering Maintenance mode\" for that host will be stuck at ~70% with the message \"Waiting for active NFC sessions to terminate\". Manually putting the host into maintenance mode will also result in the same issue.Restarting the vpxa service on the affected ESXi host allows the host to enter maintenance mode.\n\nUse the command:\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491\nIn order to resume the upgrade, restart genesis across the cluster using the below and command and monitor,\n\n2015-09-02 00:27:54 INFO cluster_upgrade.py:351 Current host version: 5.1.0-1065491. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A130156 - DiskRemovalStuck. ### Assistant: Investigating DiskRemovalStuck issues on a Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert A130156 - DiskRemovalStuck for your Nutanix cluster.For an overview about alerts, including who is contacted and where parts are sent when a hardware alert case is raised, see KB 1959.\n\nAlert Overview\n\nThe DiskRemovalStuck alert is generated by Curator when no progress in the remaining disk removal tasks is detected after several scans.This is an event triggered alert that is sent once the issue is detected and is not scheduled to run on an interval.\n\nSample Alert\n\nFor status: Critical\n\nalert_type: DiskRemovalStuck\n\nOutput messaging\n\nCheck ID130156DescriptionDrive Removal StuckCauses of failureDrive Removal Stuck.ResolutionsContact Nutanix support.ImpactDrive cannot be removed from the cluster.Alert IDA130156Alert TitleDrive Removal StuckAlert Smart TitleDrive disk_id Removal Stuck.Alert MessageRemoval of drive disk_id is stuck.. Troubleshooting\n\nThe DiskRemovalStuck alert is generated by Curator when no progress is detected upon several scans.Disk removal is a multi-stage process which, depending on whether a disk in question is the metadata drive or not and which features are in use, will need to migrate Oplog, NearSync store, Cassandra metadata, Extent Store data, etc, in specific order. For bigger disks, the system needs to evacuate and redistribute terabytes of data in the background, without affecting VMs operation and without compromising resiliency. This naturally takes time.\n\nAs per HW replacement documentation: \"Estimated time to complete: Variable, can take up to several hours\".\n\nResolving the Issue\n\nThere are multiple possible causes for drive or node removal getting stuck. Consider engaging Nutanix Support at https://portal.nutanix.com/ to investigate and resolve this issue.\n\nCollecting Additional Information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691. \n\tnutanix@cvm$ logbay collect --aggregate=true\n\t\n\nAttaching Files to the Case\n\nAttach the files at the bottom of the support case on the support portal.If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294.\n\nRequesting Assistance\n\nIf you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers. You can also click the Escalate button in the case and explain the urgency in the comment, and Nutanix Support will be in contact.\n\nClosing the Case\n\nIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about VM migration on AHV cluster may fail with \"Host CPU does not provide required features by guest: monitor\" error. ### Assistant: After adding new Lenovo hosts or hosts with Intel Xeon Gold 61xx or Xeon Silver 41XX CPUs to an existing cluster, VM migration to these hosts may fail with the error message \"Host CPU does not provide required features by guest: monitor.\". After adding new Lenovo hosts or hosts with Intel Xeon Gold 61xx or Xeon Silver 41XX CPUs to an existing cluster, VM migration to these hosts may fail with the error message \"Host CPU does not provide required features by guest: monitor.\" as seen on Tasks page in Prism UI.\n\n~/data/logs/acropolis.out shows the error below:\n\n2018-08-23 15:04:55 INFO base_task.py:611 Task VmMigrate fabf5884-9a85-40be-b6e1-ff652913d232 failed with message: Host CPU does not provide required features by guest: monitor. This could be due to the MONITOR/MWAIT feature either missing or disabled from the host BIOS. Lenovo servers may have this feature disabled by default.Boot the host into BIOS Setup and enable the MONITOR/MWAIT feature. Below are sample screenshots of BIOS CPU/processor configuration screens where the MONITOR/MWAIT feature can be located and enabled.\n\n\n\n\n\nSave changes and exit BIOS Setup. VMs should now be able to migrate to this host.NOTE: If the MONITOR/MWAIT option is greyed out and you cannot change it you will need to go into the BIOS option for Operating Mode and set it to \"CUSTOM\" which will enable MONITOR/MWAIT.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Generic AHV troubleshooting scenarios. ### Assistant: This article provides an overview of generic AHV troubleshooting scenarios.. This article provides a generic description of common issues that can be seen on clusters running AHV and propose troubleshooting methods.\n\nCommon scenarios:\n\nUnreachable hostHost in a crash loopUnexpected host rebootUnexpected guest VM rebootInability to perform VM management actionsStuck VM migrationFailed VM live migration due to Jumbo frames misconfiguration.. Logs are the most important source of information when performing troubleshooting. You can find below the list of logs on the CVM (Controller VM) and on the AHV host, which should be checked during troubleshooting.\n\nCVM logsThe most important log file on a CVM to check while troubleshooting AHV related issues is /home/data/logs/acropolis.out as the Acropolis service coordinates most of the activities. Always check acropolis.out on the CVM hosting the Acropolis leader. KB 2305 describes steps on how to find the Acropolis leader.\n\nTo find which CVM is the Acropolis leader from logs or to show leader changes if any have occurred recently, run the following command:\n\nnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\nExample:\n\nnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\nIf the log file on the Acropolis leader does not contain relevant details, consider checking acropolis.out on other CVMs.\n\nNote: acropolis.out is updated with new events together with any action against user VM. 
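A quick, illustrative way to confirm that the file is actually receiving new entries is to compare its timestamp and the latest lines across all CVMs (standard commands, no placeholders):

nutanix@cvm$ allssh 'ls -l ~/data/logs/acropolis.out'
nutanix@cvm$ allssh 'tail -n 2 ~/data/logs/acropolis.out'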
Lack of updates means the service may not be healthy on the CVM.\n\nHost logs\n\nImportant logs are:\n\nLog fileDescription/var/log/libvirt/libvirtd.logEvents related to libvirtd service/var/log/libvirt/qemu/NTNX-<HOST>-CVM.logThe QEMU log for the CVM on this host./var/log/libvirt/qemu/<vm uuid>.logEvents related to VM lifetime (start, stop, migrate). To find a specific VM's UUID, use acli vm.list from a CVM and take note of that VM's UUID. You can find out which host that VM is currently on by checking the output of acli vm.get <VM_NAME> and taking note of the host_name field./var/log/libvirt/qemu/hook.logContains some basic information on calls that Acropolis makes into QEMU/var/log/acropolis_ovs.logHost-level OVS-related changes and queries from the Acropolis service are logged here on each host. You should see logs when new taps are created, changed, or destroyed for VMs running on this host.ovs-ctl.log, ovsdb-server.log, ovs-vswitchd.log in /var/log/openvswitchEvents generated by ovs-vswitchd service/var/log/iscsi_redirectoriSCSI Redirector process is what manages iSCSI connectivity between the user VM's vdisks on AHV hosts and the Stargate processes on CVMs. iSCSI redirector is what AHV uses to redirect vdisks to be hosted by other CVMs when the local CVM is inaccessible via iSCSI (e.g. CVM is down or Stargate is having issues)./var/log/NTNX.serial.out.0The CVM's console log. If a CVM is powered on, but not reachable via SSH or is in a boot loop, you can check this file to view the CVM's console output.\n\nNote: Always check the timezone configured on CVM and AHV hosts while correlating events. The default timezone on the AHV host is UTC and the default timezone on CVM is PST. The timezone on the AHV host should not be modified. Refer to the Nonconfigurable AHV components section of the AHV Administration Guide. If the timezone on the AHV host was changed, it will be automatically reverted back to UTC on the next AHV upgrade.\n\nScenarios\n\n1. Unreachable host\n\nA host can become non-accessible due to networking issues or due to the inability to boot the hypervisor.\n\n1A. Host unreachable due to networking issues\n\nPerform the following tests to identify the scope of the issue:\n\nDoes the host respond to pings from another host in the same subnet (other AHV hosts for example)?Does the host respond to pings from another host in a different subnet?\n\nIf an AHV host responds when queried from hosts in the same subnet but does not respond to hosts in other subnets, make sure that routing is correctly configured on network devices.\n\nIf connectivity to the host is intermittent, make sure that the host's IP address is not used by other devices on the network. Perform steps described in the Solution section of KB 3025 to host with the conflicting IP address.\n\nIf a host is not reachable, check if the IPMI web interface is available. 
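As a simple illustration (the host and IPMI IP addresses below are placeholders), basic reachability can first be confirmed from another CVM:

nutanix@cvm$ ping -c 3 <host_ip>
nutanix@cvm$ ping -c 3 <ipmi_ip>
nutanix@cvm$ curl -k -I https://<ipmi_ip>/

The curl call is only a rough probe of whether the IPMI web interface answers at all.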
If not, physical access to the server is required to check power status and network cabling.\n\nIf you can log in to the AHV host via IPMI console, perform basic network troubleshooting:\n\nCheck the configuration of the br0 bridge and make sure it is correct:\n\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\tCheck br0 configuration:\n\tFrom AHV host, connect to CVM:\n\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\tFrom CVM, run the following commands:\n\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\tIn the output of commands mentioned above, make sure that correct NICs are assigned to br0 and other bonds. For example:\n\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\tUpdate bridge configuration by following steps from KB 8835.Check which NIC is active if active/pass mode is used by following the steps described in How to select active NIC in active/passive bond section of KB 2090.\n\t\n\n1B. Host unreachable due to inability to boot the hypervisor\n\nIf the connection to the IPMI interface is working fine, then open the server console and check the status of the host. Following scenarios are possible:\n\nThe host is powered off. Power it on remotely via IPMI.The host is powered on but stuck during BIOS initialization. Check the hardware event log for hardware-related issues.The host is powered on but is in a crash loop (scenario #2 below).\n\n2. Host in a crash loop\n\nIf the host is constantly rebooting, try making the screenshot of the call stack shown on the screen and search for known issues on the Nutanix portal. If no matching KB articles can be found, consider engaging Nutanix Support at https://portal.nutanix.com/.\n\n3. Unexpected host reboot\n\nIf the host was unexpectedly rebooted, perform the following checks to scope the issue:\n\nWere there any upgrades performed on the cluster at the same time?Were any events happening in the environment, like power or overheating issues, that could affect the hardware? Check the IPMI event log to verify it.\n\nCollect basic information about host uptime, last reboot date:\n\nnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\nCheck IPMI event log for hardware errors. In addition to this, check contents of /var/log/mcelog file if it exists. Make sure that the timestamp recorded there matches the date and time of the unexpected reboot.\n\nSample:\n\nnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\nRun ahv_crash_file_check NCC check to verify if any core dump files are present:\n\nnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\nMore details about this check can be found in KB 4866.\n\nIf ahv_crash_file_check finds any issues, /var/crash folder on the affected host will contain one or more subfolders with the date and time of crash specified in the name.\n\nEach of these subfolders may contain one of the below:\n\nvmcore and vmcore-dmesg.txt if core dump was successfully created.vmcore-dmesg-incomplete.txt if core dump creation was aborted due to some external factor (like lack of free space for example).\n\nReview vmcore-dmesg.txt and vmcore-dmesg-incomplete.txt files.\n\nCheck /var/log/messages and messages.<number>.gz files for any critical events preceding host reboot.Check if the AHV host abruptly rebooted. 
It generates a series of \"recovering journal\" and checks events under /var/log/journald_boot.log similar to:\n\nnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\nSearch for known issues on the Nutanix portal. If no matching KB articles can be found, consider engaging Nutanix Support.\n\n4. Unexpected guest VM reboot\n\nPerform the following steps to identify the scope of the issue:\n\nWrite down the names of affected VMs and the time or reboot.Make sure that the VM reboot was not caused by the host reboot by following the steps in the Unexpected host reboot section above.Make sure that all hosts are accessible over the network. If not, then perform the steps described in the Unreachable host section above.Verify that guest OS is supported as per Compatibility Matrix.Run cluster_services_status NCC check to make sure that services on CVM are in a stable state:\n\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\tCheck logs inside guest VM as unexpected restart may be a result of issues with the guest OS itself. Log names can be different on different OSes. On Windows VMs, check System and Application event logs. On Linux VMs, check /var/log/messages and messages.<number>.gz files for any critical events preceding host reboot. Consider engaging an OS vendor if you see evidence of issues inside guest OS.Check QEMU logs on the AHV host and compare the signature to find how reboot was triggered:\n\tGet the VM UUID:\n\t\tPrism UI: VM details in VM pageNCLI:\n\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\t\n\t\tGo to the host where the VM was running and check the log file in /var/log/libvirt/qemu/<vm-uuid>.log\n\t\tIt is possible to find where the VM was running by looking on which host log files are present and checking the last lines to find out the most recent log:\n\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\tThe output will be displayed for every host where the VM ran, but only one of them will have the most recent date:\n\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\t\n\t\t\n\t\tCheck for the following signature in /var/log/libvirt/qemu/<vm-uuid>.log\n\n\t\tPower OperationLog Snippet\"shutdown -P now\" in Guest\n\t\t\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\t\t\t\n\t\t\t\t\t\"shutdown -r now\" in Guest or\n\n\t\t\t\t\tNMI (Kernel panic) - check \"NTNX.serial.out.0\" log\n\t\t\t\t\t\n\t\t\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\t\t\t\n\t\t\t\t\t\"virsh shutdown <VM>\" or\n\n\t\t\t\t\tVM power operation in Prism\n\t\t\t\t\t\n\t\t\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\t\t\t\"virsh reboot <VM>\" or\n\t\t\t\t\tVM power operation in Prism\n\t\t\t\t\t\n\t\t\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\t\t\t\"virsh reset <VM>\" or\n\t\t\t\t\tVM power operation in Prism\n\t\t\t\t\t\n\t\t\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\t\t\t\"virsh destroy <VM>\" or\n\t\t\t\t\tVM power operation in Prism\n\t\t\t\t\t\n\t\t\t\t\tnutanix@cvm$ allssh 'grep -i Acquired ~/data/logs/acropolis.out*'\n\t\t\t\t\t\n\t\t\n\t\n\nIf all factors mentioned above are excluded, consider checking /home/data/logs/acropolis.out on Acropolis leader and /var/log/libvirt/qemu/<vm uuid>.log on AHV host where VM was running before restart.\n\n5. 
Inability to perform VM management actions\n\nIn most cases, the inability to perform VM management actions from Prism or acli on one or more AHV hosts is caused by issues with the Acropolis service itself. Start by checking /home/data/logs/acropolis.out on the host where the affected VM is located and on the Acropolis leader. Look for the \"Traceback (most recent call last)\" text in the log to identify the start of the stack trace, which indicates that the Acropolis service is not healthy.\n\nSearch for known issues on the Nutanix portal. If no matching KB articles can be found, consider engaging Nutanix Support.\n\n6. Stuck VM migration\n\nIn most scenarios, VM migration is not stuck but progressing very slowly. Refer to KB 3810 for additional details.\n\n7. Failed VM live migration due to Jumbo frames misconfiguration.\n\nIf Jumbo frames are used but the physical network switches are not configured to allow packets with a higher MTU size, the live migration of VMs will fail. Refer to KB 10180 for more information.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Deleted VM disk attached to Windows VM is accessible from guest OS. ### Assistant: A deleted VM disk attached to a Windows VM may be accessible from within the guest OS.. When a VM disk attached to a Windows VM running on AHV is deleted from the Prism UI, it may still appear online and accessible from within the guest OS via File Explorer even after deletion.Performing a refresh operation from the Windows Disk Management utility changes the disk status to \"Not Initialized\", but the disk is still mounted and accessible from the guest OS.. To completely remove the disk from the Windows guest OS, we need to perform a \"Rescan Disks\" operation from the Windows Disk Management utility.To perform a rescan operation, follow the steps below.\nOpen the Disk Management utility in the guest OS.Open the \"Action\" menu in the Disk Management utility.Select \"Rescan Disks\".\n\n\n\nThe deleted disk should now be completely removed from the guest OS.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about VMware vCLS infrastructure VMs and ROBO licenses. ### Assistant: ROBO 1.0 Licensing Model and VMware vCLS infrastructure. ROBO 1.0 licenses (Per-VM licensing) are sold according to a VM count.  ROBO licenses may be purchased based on how many VMs are planned to run within a particular cluster.*Note: This count does not include the Nutanix CVMs and the powered-off VMs. As of VMware vSphere version 7.0 Update 1, an additional VMware Management VM called \"VMware vCLS infrastructure\" was added. It is responsible for the new vSphere Cluster Services feature. This KB explains why this VMware Management VM is included in the VM count for ROBO licenses even though it is not a user VM.. The \"VMware vCLS infrastructure\" VM will be counted under the ROBO licensing because it is a VMware management VM; it is not a Nutanix management VM.\nPlease refer to the screenshot and documentation below for more information. vSphere Cluster Services (vCLS) in vSphere 7.0 Update 1 and newer versions (80472). ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Unable to connect to Prism Central, getting error \"Failed to send request to IAMv2\". ### Assistant: Unable to connect to Prism Central, getting error \"Failed to send request to IAMv2\". Versions Affected:\n\nPrism Central (PC) version pc.2023.1.0.1\n\nSymptoms:\n\nPrism Element (PE) cluster is unable to connect to Prism Central.PC-UI login is failing with the following message:\n\tupstream connect error or disconnect/reset before headers\n\tAll cluster services are up and running, but you are unable to log in to PC.PC /home/nutanix/data/logs/aplos_engine.out shows the following errors:\n\tupstream connect error or disconnect/reset before headers\n\t/home/nutanix/data/logs/prism_gateway.log shows the following:\n\tupstream connect error or disconnect/reset before headers\n\tThe IAM pod is in a crash loop.\n\tupstream connect error or disconnect/reset before headers\n\t. Workaround:\n\nLog in to the Prism Central node where the issue is seen and reboot it using the following command:\n\nnutanix@PCVM$ sudo reboot. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix DR | AOS 6.5.2.x stuck arithmos replication stats when there are no active replications. ### Assistant: AOS 6.5.2.x stuck arithmos replication stats when there are no active replications. In the Protection Summary > Replication Tasks dashboard, the ongoing tasks count shows an incorrect number.. In the PC Protection Summary global view, under the Replication Tasks dashboard, the ongoing replication count keeps incrementing.\nTo identify the issue, log in to PC and go to the Protection Summary global view: under the Replication Tasks dashboard, the ongoing replication count keeps incrementing, but when you check the current active replications on the corresponding PE clusters, you will sometimes see no replications, or only very few replications, in progress.\n\nFrom the PE clusters, the ongoing replications can be verified using ncli:\n\nnutanix@cvm:~$ ncli pd ls-repl-status protection-domain-type=entity-centric\n \n\nLog in to PC, go to the Protection Summary global view and, under the Replication Tasks dashboard, check the ongoing replication count.\n . Please reach out to Nutanix Support to reset the stats for stuck replications.\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix Files: Unable to join domain due to 'constraint violation'. ### Assistant: Unable to join Nutanix Files to AD domain due to duplicate SPNs existing in the AD forest.. If there is an AD SPN with the same name as the Nutanix Files server or FSVMs in the Active Directory domain or forest, the below error will be presented.\nJoin AD failed with error - Failed to set Service Principal Names (SPN) entries on the AFS machine account due to 'constraint violation'. \nTo validate, run the below command on a Domain Controller to attempt to manually create a SPN for Nutanix Files as a test.\nJoin AD failed with error - Failed to set Service Principal Names (SPN) entries on the AFS machine account due to 'constraint violation'. \nExample\n\nJoin AD failed with error - Failed to set Service Principal Names (SPN) entries on the AFS machine account due to 'constraint violation'. 
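As a hedged, non-destructive way to look for an existing or duplicate SPN from a Domain Controller, the setspn query options can be used (FILES01 below is a hypothetical file server name; adjust the SPN to match your environment):

C:\> setspn -Q HOST/FILES01
C:\> setspn -F -Q HOST/FILES01

The second form repeats the search across the entire forest instead of only the current domain.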
\nYou will get the below error if there is a duplicate object in the domain or the forest.\n\nJoin AD failed with error - Failed to set Service Principal Names (SPN) entries on the AFS machine account due to 'constraint violation'. \n . The solution is to rename the Nutanix Files server to a unique name and then rejoin Nutanix Files to the AD domain.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Debugging ESX Machine Check Exception (MCE) PSOD. ### Assistant: This article describes how to debug ESX Machine Check Exception (MCE) purple diagnostic screen (PSOD) with an example.. ESXi servers may halt with a purple diagnostic screen (PSOD) due to MCE (machine check exception) errors. MCE errors are due to hardware issues.\n\nThe MCE purple diagnostic screen displays multiple things. Out of these, the MCi_STATUS register is useful in debugging the problem.\n\nRegister Details:\n\nThe MCi_STATUS register consists of information about the machine check error.The register is displayed next to the “S:” as shown in the example below:\n\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\t\n\tBits and their significance:\n\n\tBit 63: VALID flag – If set, then information is valid\n\tBit 62: OVERFLOW flag – If set, then may indicate multiple MCEs occurred close to each other\n\tBit 61: UNC flag - If set, then CPU could not correct the error\n\tBit 60: EN flag\n\tBit 59: MISCV flag – If set, then MISC register contains more information\n\tBit 58: ADDRV – If set, then the ADDR register contains where the error occurred\n\tBit 57: PCC flag - If set, then it means processor may have been corrupted.\n\tBits 53-56: Architectural if bit 11 is set, else “other information”\n\tBits 38-52: Architectural if bit 10 is set, else “other information”\n\tBits 32-37: Other information\n\tBits 16-31: Model specific error code for CPU\n\tBits 0-15: Machine check error code\n\t\n\nTo debug the error, the low 16 bits of the MCi_STATUS register are important. These bits indicate a simple or compound error.\n\nOption 1: Using Automatic Tool\n\nNOTE: You can now debug the error with the following tool that automates the steps below:\n\thttps://vmware-psod-reader.github.io/vmware-psod-reader/\n\t\n\nOption 2: Using Manual Steps\n\nSimple errors are easy to debug\n\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\tCompound errors\n\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\n\tF – Form flag\n\t\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\n\t\tFiltering means that some or all of the subsequent corrections to this entry in this structure are not posted.\n\t\tTT - Applies to 2 and 4 above.\n\t\tIndicates type of transaction:\n\n\t\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\t\tLL - Applies to 1, 2, 4 and 5 above.\n\t\tIndicates the level in the memory hierarchy where the error occurred.\n\n\t\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\t\tRRRR - Indicates type of action associated with the error. 
Actions are:\n\t\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\t\tPP (Participation) - Describes the role of the local processor in the error.\n\t\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\t\tT (Time-out) - 1 = Request timed out.II (Memory or I/O)\n\t\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\t\tMemory Controller errors - Defined by MMM and CCCC subfields above.MMM – Memory error\n\t\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\t\tCCCC – Channel with the error\n\t\tMC: PCPU18 B:13 S:0xfe20004000011166 M:0x7246040086 A:0x38c989b100 5\n\t\t\n\t. Here is an example of how to analyse an MCE purple diagnostic screenshot:\n\n\n\nNote down the MCi_STATUS register value is 0xfe20004000011166.Convert it to binary:\n\t1111 1110 0010 0000 0000 0000 0100 0000 0000 0000 0000 0001 0001 0001 0110 0110\n\tTake the lower 16 bits of the MCi_STATUS register:\n\t1111 1110 0010 0000 0000 0000 0100 0000 0000 0000 0000 0001 0001 0001 0110 0110\n\n\tCompare it with the compound errors. In this case, this appears to be a cache hierarchy error (type 4).\n\tApply the values:\n\t1111 1110 0010 0000 0000 0000 0100 0000 0000 0000 0000 0001 0001 0001 0110 0110\n\t\n\nConclusion:\n\nIt looks like the purple diagnostic screen occurred while there was a prefetch operation on some data in the L2 cache of the processor.Thus, this could be a problem with the L2 cache on the processor.Therefore, we should replace the CPU first and check if the issue is resolved. In case the issue comes up again, we could replace the motherboard.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about LCM not listing all available Redfish updates on nodes and might show \"System does not meet Redfish requirements\". ### Assistant: LCM does not list all of the available Redfish on the nodes due to the internal interface not being reachable, and shows \"System does not meet Redfish requirements\" for one or more nodes.. A new issue has been identified in LCM 2.4.x, which does not list all of the available Redfish updates on the nodes due to the internal interface not being reachable. This affects only G6/G7/G8 hardware (not supported on G4/G5 hardware).Only certain host versions are available for the Redfish BIOS update in LCM 2.4.x. Ensure the host meets all requirements and is eligible for a Redfish BIOS update. If the system does not meet Redfish requirements, LCM will continue using the legacy(using Phoenix) method to perform the upgrades.There are two scenarios associated with this same issue.Scenario 1LCM inventory page will show \"System does not meet Redfish requirements\" for one or more nodes:\n\nLog in to LCM leader CVM, to find the LCM leader run the below command from the CVM: \n\n\nnutanix@CVM:~$ lcm_leaderLcm leader is at <CVM_IP>\n\nRun the below command from the node that shows the error \"System does not meet Redfish requirements\":\n\nnutanix@CVM:~$ grep -i 'Max retries exceeded with url:' -B1 ~/data/logs/lcm_ops.out\n\n\n[2021-02-02 10:36:11.285249] GET - https://169.254.3.254/redfish/v1/odata[2021-02-02 10:36:14.369178] Error while calling https://169.254.3.254/redfish/v1/odata. 
Details: HTTPSConnectionPool(host='169.254.3.254', port=443): Max retries exceeded with url: /redfish/v1/odata (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f9155d11990>: Failed to establish a new connection: [Errno 113] No route to host',)).\n\n\nThe above logs indicate errors while trying to call internal IP 169.254.3.254.Scenario 2If the BMC/BIOS version is already on the latest version, the LCM inventory page will show \"No available updates\" for the BIOS Redfish option. Instead, it will show legacy BIOS update options available in LCM for one or more nodes in the cluster:On the node that is not displaying Redfish updates, run the following command : \n\nnutanix@CVM:~$ grep -i 'Failed to ping the internal IP' -A6 ~/data/logs/lcm_ops.out\n\n[2021-05-03 01:31:12.311761] Generic ping at IP: 169.254.3.254[2021-05-03 01:31:33.355680] cmd: ping -c 1 -W 1 169.254.3.254, out: PING 169.254.3.254 (169.254.3.254) 56(84) bytes of data.\n\n--- 169.254.3.254 ping statistics ---1 packets transmitted, 0 received, 100% packet loss, time 0ms, ret: 1, err:[2021-05-03 01:31:33.355847] Failed to ping 169.254.3.254[2021-05-03 01:31:33.355871] No. of retries left: 0[2021-05-03 01:31:33.355891] Failed to ping the internal IP on AHV, resetting the interface\n\nThe above log snippet indicates errors regarding failure to ping the BMC internal IP -169.254.3.254. \n. Save the IPMI configuration file for backup before implementing the workaround. Customers need to save the current configuration because in step-2 IPMI will reset to factory default setting. IPMI will be restored back to the current configuration once the customer performs step 3. \n\nNote: on ESXi all the ipmitool commands should start with a /. For example:\n\nroot@ESXi:~$/ipmitool raw 0x30 0x4a\n\n \n\nDownload the IPMI configuration of the affected node via GUI or from the command line:\n\tGUI from the IPMI:\n\t\tLog in to IPMI Navigate to \"Maintenance > IPMI Configuration\"\n\t\t\t\n\t\t\tClick on Save which will download the .bin file.\n\t\tCommand-line from the host:\n\t\troot@ESXi:~$/ipmitool raw 0x30 0x4a\n\t\t\n\tReset the IPMI to factory default from the command line or GUI:\n\tReset via GUI from the IPMI:\n\t\tNavigate to “Maintenance -> Factory Default” as below, select option \"Remove current settings but preserve User configurations\" and then click on \"Restore\":\n\t\t\t\n\t\t\tClick on \"Yes\":\n\t\t\t\n\t\t\tWait for at least 2 minutes. IPMI will redirect back to the login page.\n\t\tConfirm IPMI is accessible.\n\tReset via command line from the host:\n\troot@ESXi:~$/ipmitool raw 0x30 0x4a\n\tReload IPMI configuration from the host if IPMI is not accessible. The IPMI may reboot after restoring the configuration. Wait for 120 seconds in order to let the IPMI complete the initialization.\n\troot@ESXi:~$/ipmitool raw 0x30 0x4a\n\tRe-run LCM inventory.\n\nIf the above steps don't fix the issue, contact Nutanix Support with the downloaded IPMI configuration file from step 1.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix Database Service | Debugging Windows DB Server VM Provisioning Failures. ### Assistant: This article describes debugging Windows DB server VM provisioning failures.. 
Note: Nutanix Database Server (NDB) was formerly known as Era.\n\nScenario 1: Sysprep fails when antivirus software is installedSysprep of a VM could fail when antivirus software is installed in the system.\n\nOperation logs /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<operation_id>.log or /home/era/era_base/logs/drivers/sqlserver_database/provision/<operation_id>.log:\n\n[2022-08-01 10:10:33,189] [140408303626048] [INFO ] [0000-NOPID],Got an successful status code\n\n[2022-08-01 10:10:33,189] [140408303626048] [INFO ] [0000-NOPID],Got an successful status code\n\nFrom sysprep logs /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<operation_id>_syspre_logs/Panther/setupact.log or /home/era/era_base/logs/drivers/sqlserver_database/provision/<operation_id>_syspre_logs/Panther/setupact.log:\n\n[2022-08-01 10:10:33,189] [140408303626048] [INFO ] [0000-NOPID],Got an successful status code\n\nScenario 2: Joining a workgroup/domain failsIf the software profile hostname is longer than 15 characters, joining a workgroup gets stuck.\n\nThis is identified by checking sysprep_task.log from the zip file generated by the sysprep failure. If the last line is workgroup join, the issue is identified. See sample output below:\n\n[2022-08-01 10:10:33,189] [140408303626048] [INFO ] [0000-NOPID],Got an successful status code\n\nAnother situation where this issue can be seen is if there is a domain join failure. In this case, there is a file called sysprep_task_domain.log. The domain join failure would be present in this log file.\n\nBelow are some of the reasons for a domain join failure:\n\nThe custom domain OU path is incorrect.The credentials provided do not have the necessary permission to join the domain.The credentials are incorrect.Another VM with the same name exists in the domain already.\n\nScenario 3: Network connectivityWhen there is no network connectivity between the Era server/agent and the DB server VM, the WinRM connection from the Era server/agent would fail with the following message:\n\n[2022-08-01 10:10:33,189] [140408303626048] [INFO ] [0000-NOPID],Got an successful status code\n\nScenario 4: Disabling WinRM as part of GPO policyWhen the DB server is added to the domain, and you have an active GPO policy that might disable any remote WinRM connection to the newly provisioned DB server, the provisioning would fail with WinRM connectivity failures.\n\nScenario 5: Missing virtIO installationWhen the software profile does not include VirtIO (a collection of drivers), the DB provisioning process cannot add NICs to the DB VMs. This will cause the provisioning process to fail. Here is the error message in the /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<operation_id>.log:\n\n[2022-08-01 10:10:33,189] [140408303626048] [INFO ] [0000-NOPID],Got an successful status code\n\nSample screenshot:\n\n\n\nScenario 6: Locked service accountWhen the Era worker service account is locked in the domain, DB provisioning fails with the following error message in /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<operation_id>.log:\n\n[2022-08-01 10:10:33,189] [140408303626048] [INFO ] [0000-NOPID],Got an successful status code\n\nThe domain user in the above operation log is the SQL Service Startup Account used when provisioning the DB Server:\n\n\n\nGo to the Active Directory -> Users and Computers, and search for this account. You may see that this account is locked.\n\n. 
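Before working through the resolutions below, a quick sanity check for Scenario 3 (network connectivity) can be run from the NDB server itself. This is only a sketch: <dbserver_ip> is a placeholder, and 5985/5986 are assumed to be the default WinRM HTTP/HTTPS ports.

era@NDB-SERVER:~$ ping -c 3 <dbserver_ip>
era@NDB-SERVER:~$ timeout 5 bash -c '</dev/tcp/<dbserver_ip>/5985' && echo 'WinRM port reachable' || echo 'WinRM port not reachable'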
Resolution 1: Sysprep fails when antivirus software is installedAntivirus software should be disabled on the software profiles prior to provisioning.\n\nDisable or delete the antivirus software service (see the antivirus software's documentation on how to do this).Take a new software profile (see Creating a Software Profile in the NDB User Guide).Perform provision DB server operation (see SQL Server Database Provisioning in the NDB User Guide).\n\nResolution 2: Joining a workgroup/domain failsChange the hostname of the source VM to less than 15 characters, or engage Nutanix Support to skip joining a workgroup as part of the provision.\n\nIf the sysprep_task.log does not have more than 4-5 lines of output, this is a different issue with the same error message. In this case, engage Nutanix Support for assistance.\n\nTo resolve a domain join failure:\n\nCorrect the OU path.Give different credentialsDelete the entry of the same name or provision with a new name.\n\nResolution 3: Network connectivityEnsure that the Era server/agent can connect to the subnet where the VM is being deployed.\n\nResolution 4: Disabling WinRM as part of GPO policyEnable WinRM remote connection to the host via GPO.\n\nResolution 5: Missing virtIO installationInstall VirtIO in the gold image DB VM, then create a new software profile from it. For the VirtIO installation, refer to Nutanix VirtIO for Windows.\n\nResolution 6: Locked service accountUnlock the service account in the domain by ticking the checkbox.\n\n. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Move: How to move a VM from one cluster to another. ### Assistant: How to move a VM from one cluster to another. There are several ways to move a VM from one cluster to another.. To move a VM from a non-Nutanix cluster to a Nutanix cluster, use Nutanix Move available on the Nutanix Portal.\n\tNutanix Move is a cross-hypervisor mobility solution to move VMs with minimal downtime.\n\tTo move a VM from one Nutanix cluster to another, use Asynchronous Disaster Recovery Between On-Premise Data Centers.\n\tSummary of steps:\n\n\tConfigure a Remote Site on cluster A.Create a Protection Domain on cluster A.Add the VM to be moved to this Protection Domain.Power off the VM.Create a schedule for (N) hours temporarily and initiate a full replication.After replication is complete, restore the VM from snapshot on cluster B and power on the VM.Unprotect the VM from Protection Domain, delete the schedule, snapshots and Protection Domain on cluster A.Un-configure the Remote Site on cluster A.\n\n\tThe same result can be achieved using Asynchronous Entity Centric Replication using Leap. Check the following documentation for more details.\n\tTo move a VM from an AHV to an ESXi cluster, export it in .ova format from the source and import it in the target cluster.\n\tSummary of steps:\n\n\tFor AOS 5.18 or newer:\n\t\t\n\t\t\tSelect the Export as OVA action from the Actions dropdown list on Prism Central UI. 
For more details on how to perform the Disk export, refer to Exporting VM as an OVA in the OVA Management section of the Prism Central Infrastructure Guide.\n\t\t\t\n\n\t\tFor AOS versions prior to 5.18:\n\n\t\tFollow the steps described in the Solution part of KB 3059.\n\t\tDeploy the exported VM into the ESXi cluster using the steps described in the following VMware documentation.\n\tFrom AOS 5.19, the VM can be Cross Cluster Live Migrated using Leap and AHV SyncRep. Check the following documentation for more details.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: cluster_services_down_check. ### Assistant: The NCC health check cluster_services_down_check verifies if the cluster services are running on all Controller VMs in the cluster. If any cluster service is not running on any of the Controller VMs in the cluster, this check fails.. The NCC health check cluster_services_down_check verifies if the cluster services are running on all Controller VMs in the cluster. If any cluster service is not running on any of the Controller VMs in the cluster, this check fails.\n\nRunning the NCC CheckYou can run this check as part of the complete NCC Health Checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr you can run this check separately:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every minute, by default.\n\nSample outputFor status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor PC (Prism Central) ClusterThis check verifies whether all services are running on PC VM.\n\nNote: If the Cluster Health service status is DOWN for more than 15 minutes, an alert email is sent by the AOS cluster to configured addresses and Nutanix Support (if selected). In this case, no alert is generated in the Prism web console. The email is sent once per 24 hours. You can run the NCC check cluster_services_down_check to see the service status.\n\nOutput messaging\n\nCheck ID3032DescriptionCheck that no services are down in the cluster.Causes of failureOne or more services in the cluster are down.ResolutionsIf this check occurs once or infrequently, no action is necessary.ImpactCluster performance may be significantly degraded. In the case of multiple services with the same condition, the cluster may become unable to serve I/O requests.Alert IDA3032Alert TitleCluster services are downAlert Smart TitleCluster services down on Controller VM ip_addressAlert MessageCluster Service: service_name is down on the Controller VM ip_address.. Run the following command to confirm if any service is currently down:\n\nnutanix@cvm$ cluster status | grep -v UP\n\nIf any services are listed as DOWN, start those services by using the following command:\n\nnutanix@cvm$ cluster status | grep -v UP\n\nIf there are issues starting some services, check the following:\n\n\n\tCheck if the node is in maintenance mode by running the ncli host ls command on the CVM. Verify if the parameter Under Maintenance Mode is set to False for the node where the services are down. 
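For example, the relevant lines can be filtered from the output with a generic grep (a sketch only; the exact field labels printed by ncli may vary slightly between AOS versions):\n\n\tnutanix@cvm$ ncli host ls | egrep -i 'Id|Under Maintenance Mode'\n\n\t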
If the parameter Under Maintenance Mode is set to True, remove the node from maintenance mode by running the following command:\n\n\tnutanix@cvm$ cluster status | grep -v UP\n\n\tYou can determine the host ID by using ncli host ls.\n\t\n\tSee the troubleshooting topics related to failed cluster services in the Advanced Administration Guide available from the Nutanix Portal's Software Documentation page. (Use the filters to search for the guide for your AOS version). These topics have information about common and AOS-specific logs, such as Stargate, Cassandra, and other modules.\n\t\n\tCheck for any latest FATALs for the service that is down. The following command prints all the FATALs for a CVM. Run this command on all CVMs.\n\n\tnutanix@cvm$ cluster status | grep -v UP\n\t\n\nIf a service does not start, or if you see FATAL logs for the service that is down, collect the logs and contact Nutanix Support for assistance.\nCollect logbay bundle for the timestamp when the alert was noticed. For more information on logbay, see KB 6691.\n\nFor PC VM\n\nCheck if any services are down using the below command:\n\n\tnutanix@cvm$ cluster status | grep -v UP\n\tIf any service is listed as DOWN, start that service using the below command:\n\tnutanix@cvm$ cluster status | grep -v UP\n\tVerify if all services are running using the command provided in step 1.\n\nIf the above commands do not help and you still see a service reported as DOWN, open a case to engage Nutanix Support at https://portal.nutanix.com/.\nCollect logbay bundle for the timestamp when the alert was noticed. For more information on logbay, see KB 6691.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Move : Online upgrades from newly deployed Move 4.1.0 appliances are not working. ### Assistant: This article describes an issue where online upgrades from newly deployed Move 4.1.0 VMs are not working.. Online upgrades from newly deployed Move 4.1.0 VMs are not working.\n\nTroubleshooting:\n\nSSH to the Move VM. (See Accessing Move VM with SSH.)\n\nRun the /opt/xtract-vm/scripts/check_upgrade_connectivity.sh script. It shows the error \"curl: no URL specified!\"\n\nFor example:\n\nadmin@move on ~ $ sh /opt/xtract-vm/scripts/check_upgrade_connectivity.sh. This issue is fixed in Move 4.1.2. Use Move 4.1.2 or later.\n\nWorkaround:\n\nPerform the following steps:\n\nSSH to the Move VM.Type the rs command and enter the password.Open the check_upgrade_connectivity.sh file in an editor and delete the contents of the file.\n\tadmin@move on ~ $ vi /opt/xtract-vm/scripts/check_upgrade_connectivity.sh\n\tCopy and paste the following content:\n\tadmin@move on ~ $ vi /opt/xtract-vm/scripts/check_upgrade_connectivity.sh\n\tRetry Upgrade Software from the Move UI.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Performance benchmarking with CrystalDiskMark on Nutanix. ### Assistant: Specifics about using the CrystalDiskMark on Nutanix. CrystalDiskMark is an open-source disk drive benchmark tool for Microsoft Windows. Based on Microsoft MIT-licensed diskspd tool, this graphical benchmark is commonly used to test storage performance. It generates a workload with a varying number of queues and threads. 
The detailed information about using the parent tool diskspd can be found in this Nutanix KB article here. If any help is needed with CrystalDiskMark, feel free to open a support case. . CrystalDiskMark default settingsCrystalDiskMark benchmark is very simple to use and provides a one-click experience for storage benchmarking. However, the default settings are not suitable for testing enterprise-level storage; they are built for PC/laptop local disk testing. Please use the latest version of CrystalDiskMark for testing.Let's take a closer look at the settings of CrystalDiskMark:Figure 1: Default CrystalDiskMark settings\n\nWorking SetBy default, CrystalDiskMark is set to a 1GiB test file size. It is recommended to test with a realistic working set size (the \"hot\" data which is frequently read and written) of the production application, or at least get close to a size that simulates real-life behaviour. Nutanix as a distributed architecture uses DRAM as a read cache as well as Oplog as a write cache for random data and low-parallel sequential write patterns. Hence the 1 GiB data will get into Oplog first and later into DRAM, while parts of the test data will potentially get read from Oplog. This is unlikely to happen in real life. Customers often compare CrystalDiskMark results on Nutanix with existing SAN-based storage systems and local storage solutions; on those systems, the 1 GiB dataset will come 100% from the SAN cache or RAID controller cache for both reads and writes, so this extremely small dataset with a short run time will behave very differently.The below example illustrates the 100% reads from DRAM on a Nutanix-based system:Figure 2: Reads with DRAM cachingFor this reason, it is important to understand what the working set would be before doing the benchmarking. Typically, the application owners can provide such information. For spot testing, we recommend using at least a 16 GiB file to bypass the controller cache for both Nutanix and SAN-based storage to see realistic synthetic storage performance benchmarking results.\n\nQueues and ThreadsCrystalDiskMark allows setting queues and threads. All four of the default tests are single-threaded, which does not show a realistic I/O profile for the vast majority of workloads, as most modern applications are multi-threaded. With database products like Microsoft SQL, the default degree of parallel execution is based on the number of CPUs configured in the VM. More information about MAXDOP (max degree of parallelism) can be found here. It is highly recommended to understand the application as well as possible and identify whether it uses single-threaded or multi-threaded processing before doing the benchmarking. The test should be as close to the production workload as possible to best understand what the performance for a workload will be.\n\nThe amount of queues is the amount of I/O that each thread is generating at once (OIO - Outstanding IO). Both of the default random I/O tests are set to 4K block size. While 4K is a block size that can be seen at times, it is usually not a common block size in most workloads today. When possible, the block size in the test should match the application's block size. The common block sizes for both Oracle and MS SQL are 8 KB and 64 KB. While these applications have variable block sizes dependent on the work happening, 8 KB and 64 KB are very commonly used block sizes in the field.CrystalDiskMark test settings can be adjusted in the \"Settings\" menu. 
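Because CrystalDiskMark is a front-end for diskspd, the multi-threaded 8 KB random test recommended further below maps roughly onto a diskspd command line such as the following. This is an illustrative sketch only: D:\\testfile.dat is a placeholder path, and the switches shown are the standard diskspd options for block size (-b), duration (-d), outstanding I/O per thread (-o), threads (-t), random access (-r), write percentage (-w, 0 for a pure read pass and 100 for a pure write pass), disabled caching (-Sh) and test file size (-c):\n\nC:\\> diskspd.exe -b8K -d30 -o32 -t8 -r -w0 -Sh -c16G D:\\testfile.dat\n\nCrystalDiskMark exposes the same knobs through its GUI, as listed next.\n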
CrystalDiskMark allows a user to specify the following parameters:\n\n\n\tType (sequential or random)\n\t\n\tBlock Size\n\t\n\tQueues\n\t\n\tThreads\n\t\n\n \n\nSuggested example for testing Nutanix and other enterprise storageAlways create a new vDisk for the VM before running the tests. The main reason for that is the old disk might have some data from the previous tests or other workloads and it could have been down migrated to the HDD tier (in case of a hybrid cluster). The new disk will guarantee the correct data path of the I/O.Create a 50 GiB or 100 GiB disk. The disk then should be initialized and formatted with the default settings. The disk will get formatted and will get a letter assigned. Make sure that the new disk is selected in the CrystalDiskMark for the tests.NOTE: Nutanix scales performance per node and with the number of vDisks. The testing with CrystalDiskMark is based on a single vDisk on a single VM running on a single Node. While testing is important to understand a specific use case for a single vDisk, it should be clear that CrystalDiskMark does not replace a distributed performance test like Nutanix X-Ray to understand the full capability of a Nutanix cluster.The following settings are recommended as they provide a good mix of block sizes as well as parallel IO vs. single threaded IO:Figure 3: Recommended CrystalDiskMark settingsThe below table explains the different configuration details and differences to the default settings:\n\n\n\t\t\tDefault tests:\n\t\t\t\n\t\t\tRecommended tests:\n\t\t\t\n\t\t\tExplanation:\n\t\t\t\n\t\t\tSequential reads and writes of:\n\n\t\t\t\n\t\t\t\tBlock size: 1 MB\n\t\t\t\t\n\t\t\t\tQueue depth: 8\n\t\t\t\t\n\t\t\t\tThreads: 1\n\t\t\t\t\n\t\t\t\n\t\t\tSequential reads and writes of:\n\n\t\t\t\n\t\t\t\tBlock size: 1 MB\n\t\t\t\t\n\t\t\t\tQueue depth: 8\n\t\t\t\t\n\t\t\t\tThreads: 8\n\t\t\t\t\n\t\t\t\n\t\t\tThe test is very similar to the default sequential 1MB test, the only adjustment is the amount of threads has been changed from 1 to 8.\t\t\tSuch a test would be similar to what happens during a large file copy or during a backup job.\n\t\t\t\n\t\t\tSequential reads and writes of:\n\n\t\t\t\n\t\t\t\tBlock size: 1 MB\n\t\t\t\t\n\t\t\t\tQueue depth: 1\n\t\t\t\t\n\t\t\t\tThreads: 1\n\t\t\t\t\n\t\t\t\n\t\t\tSequential reads and writes of:\n\n\t\t\t\n\t\t\t\tBlock size: 64 KB\n\t\t\t\t\n\t\t\t\tQueue depth: 16\n\t\t\t\t\n\t\t\t\tThreads: 8\n\t\t\t\t\n\t\t\t\n\t\t\tAs the default single-threaded 1MB test won't give much more insight with a queue depth of 1\t\t\tNutanix recommends replacing this test with a 64KB block size sequential test as it is more common for applications to use that block size.\t\t\tThe queue depth of 16 and 8 threads will ensure that the test generates enough load on the storage subsystem.\n\t\t\t\n\t\t\tRandom reads and writes of:\n\n\t\t\t\n\t\t\t\tBlock size: 4 KB\n\t\t\t\t\n\t\t\t\tQueue depth: 32\n\t\t\t\t\n\t\t\t\tThreads: 1\n\t\t\t\t\n\t\t\t\n\t\t\tRandom reads and writes of:\n\n\t\t\t\n\t\t\t\tBlock size: 8 KB\n\t\t\t\t\n\t\t\t\tQueue depth: 32\n\t\t\t\t\n\t\t\t\tThreads: 8\n\t\t\t\t\n\t\t\t\n\t\t\tThe default 4K block size tests do not reflect the real-life usage of the application. However, 8K block size is very common and is used by most databases. 
\t\t\tThe queue depth of 32 and 8 threads will ensure that the test generates enough load on the storage subsystem.\n\t\t\t\n\t\t\tRandom reads and writes of:\n\n\t\t\t\n\t\t\t\tBlock size: 4 KB\n\t\t\t\t\n\t\t\t\tQueue depth: 1\n\t\t\t\t\n\t\t\t\tThreads: 1\n\t\t\t\t\n\t\t\t\n\t\t\tRandom reads and writes of:\n\n\t\t\t\n\t\t\t\tBlock size: 8 KB\n\t\t\t\t\n\t\t\t\tQueue depth: 1\n\t\t\t\t\n\t\t\t\tThreads: 1\n\t\t\t\t\n\t\t\t\n\t\t\tThe default 4K block size tests do not reflect the real-life usage of the application. However, 8K block size is very common and is used by most databases.\t\t\tThis test is left with a single thread and a queue depth of 1 just to show the difference between the multi-threaded test above and the single-threaded result.\n\t\t\t\n\nTable1: Explanation of default vs. recommended CrystalDiskMark settingsA good Measure Time recommendation is 30 seconds. The default setting of 5 seconds won't generate enough load for that short time frame.The good Interval Time recommendation is 60 seconds. That is to make sure that there is a quiet time between write tests. Nutanix starts flushing the write cache after 30 seconds of inactivity, so a default interval of 5 seconds is not a good setting. The test wants to ensure that the write cache is empty before running the next test to achieve fair results. While this is Nutanix specific, an application would rarely read the same data after it has been written. These settings reflect a realistic ramp up time which can be similarly used on other Enterprise Storage Systems.A good recommendation for a testing file size: 16 GiBThat way the file is big enough to bypass read/write caches and test Storage performance rather than a cache.Example of test results with recommended settingsThe results will greatly vary depending on the underlying hardware configuration and software versions. The below results were achieved on the following setup:\n\nVM details: Windows Server 2019 Datacenter Build 17763vCPUs: 8Memory: 8GbDisk size: 100Gb\n\nHardware details:3-node cluster NX-3061 (G3).2xSSDs 800Gb and 4 HDDs 1Tb per node\n\nNutanix AOS version: 5.19.1.5Hypervisor: AHV 20201105.12Storage container settings:Inline compression - onDeduplication - offErasure Coding (EC-x) - off\n\n.Figure 4: CrystalDiskMark results. For throughput equivalents of the results, simply multiply the IOPS by the block size (i.e. 8k, 64k, 1M). For example, 1629 IOPS x 1M = 1629MB/sec.\n\nData randomness.\n\nData randomness is quite important because the storage engine of Nutanix will be compressing the file internally if compression is enabled on the storage container level. CrystalDiskMark has two options for laying the file:\n\n\n\trandom (default)\n\t\n\tAll 0x00 (0 Fill) \n\t\n\nThis can be configured in Settings - Test Data. It is generally recommended to use the random pattern to get realistic results because Nutanix will optimize the NULL byte files and testing on NULL byte files will not reflect the performance of real applications.Figure 5: CrystalDiskMark data pattern\n\nThe difference can be demonstrated by running a test in each of the modes and then compressing the file into a .zip archive using the built-in Windows compression tool. Even though the compression algorithm is different on the Nutanix storage level, it will still show the difference. 
So, if the test file is 1Gb in size, we see a 1Gb file in Windows Explorer:Figure 6: The test file sizeIf the file is created with the default random pattern, compressing the file into .zip will result in the same file size:Figure 7: The test file compressed into .zip size with a random data patternHowever, if the file is created with the All 0x00 pattern and then compressed into .zip, the archive will take up ~1Mb of storage space instead of 1Gb, because the 0s will be mostly eliminated by the compression algorithm:Figure 8: The test file compressed into .zip size with All 0x00 data patternA similar compression happens on the storage side, and without true data randomness, the test file will utilise a lot less of the storage space and will likely fit more into the cache, which will result in an unfair test.Unlike the diskspd tool, CrystalDiskMark creates a file with a random data pattern by default. There are no tweaks needed to achieve randomness. \n\nFile system software cachingUnlike the default settings of diskspd, CrystalDiskMark disables software caching in its tests. This means the Nutanix system side will show the same amount of IOPS/Throughput as we see in the CrystalDiskMark test.Test example in CrystalDiskMark:Figure 9: CrystalDiskMark vs. Nutanix Storage PerformanceThe results from CrystalDiskMark vs. the Nutanix storage are very close. The reason for the small difference is that the example below shows a single iteration at the moment of the screenshot capture, while CrystalDiskMark shows the average of the whole test.Reads:Figure 10: Nutanix Storage Performance read Writes:Figure 11: Nutanix Storage Performance write NOTE: CrystalDiskMark is an easy-to-use benchmarking tool which is essentially a simplified version of diskspd with a GUI. It lacks some controls and settings compared to diskspd, so there is less tuning possible. If the settings are correctly set, the results will be realistic.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: pcvm_disk_usage_check. ### Assistant: The NCC health check pcvm_disk_usage_check verifies that the amount of disk space used in the Prism Central VM is within limits.. The NCC health check pcvm_disk_usage_check verifies that the amount of disk or system partition usage in the Prism Central (PC) VM is within limits.\n\nThis check has the following parts:\n\n\n\tChecking the individual data disk usage (added in NCC 3.5.1):\n\n\tIf usage is more than 75% for several hours, a WARNING is returned to identify the disk.If usage is more than 90% for several hours, a FAIL is returned to identify the disk.\n\t\n\tChecking the overall data disk usage (added in NCC 3.10.1):\n\n\t\n\t\tIf overall usage is more than 90% for several hours, a WARNING is returned.\n\t\t\n\t\n\tChecking the Prism Central VM system root partition usage (added in NCC 3.9.4). 
Returns only FAIL message if the partition usage exceeds 95%.\n\t\n\tChecking the Prism Central VM home partition usage (added in NCC 3.9.4):\n\n\tIf the usage is more than 75%, a WARNING is returned.If the usage is more than 90%, a FAIL is returned.\n\t\n\tChecking the Prism Central VM CMSP partition usage (added in NCC 3.10.1):\n\n\tIf usage is more than 75%, a WARNING is returned.If the usage is more than 90%, a FAIL is returned.\n\tChecking the Prism Central VM Upgrade disk partition usage (added in NCC 4.6.0):\n\tIf the usage is more than 70%, a FAIL is returned.This check runs every 5 mins.If there are more than 5 failures (30 mins), a critical alert is raised.\n\t\nPlease note: If you are running LCM-2.6 or LCM-2.6.0.1, LCM log collection fills up /home directory please refer KB 14671 for workaround.\n\nRunning the NCC checkRun the NCC check as part of the complete NCC health checks.\n\nnutanix@pcvm$ ncc health_checks run_all\n\nOr run the pcvm_disk_usage_check check separately.\n\nnutanix@pcvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism Web Console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 5 minutes, by default.\n\nThis check will generate an alert after 5 consecutive failures across scheduled intervals.\n\nSample OutputsFor status: PASS\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: WARN (on Prism Central VM data disk, e.g. /dev/sdc1)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: WARN (on Prism Central VM overall MultiVDisk)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: FAIL (on Prism Central VM data disk, e.g. /dev/sdc1)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: FAIL (on root partition, i.e. /)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: WARN (on Prism Central VM home partition, i.e. /home)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: FAIL (on Prism Central VM home partition, i.e. /home)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: WARN (on Prism Central VM CMSP partition, i.e. /dev/sde)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: FAIL (on Prism Central VM CMSP partition, i.e. /dev/sde)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nFor Status: FAIL (on Prism Central VM upgrade disk partition, i.e. 
/home/nutanix/upgrade)\n\nnutanix@pcvm$ ncc health_checks run_all\n\nCheck Disk Usage in PC VMFollowing is an example of how to check disk usage on a PC VM.\n\nnutanix@pcvm$ ncc health_checks run_all\n\nPartition for upgrade disk would show up similar to: \n\nnutanix@pcvm$ ncc health_checks run_all\n \n\nOutput messaging\n\nCheck ID101059DescriptionCheck that disk space usage on the Prism Central VM is within limits.Causes of failureHigh disk usage in the Prism Central VM.ResolutionsRefer to KB 5228 for further details.ImpactPrism Central VM may run out of storage space to store data.Alert IDA101059Alert TitlePrism Central VM disk usage highAlert MessagePrism Central VM IP disk usage exceeds warning limit x % for disks: disks.\t\t\tPrism Central VM IP overall MultiVDisk usage exceeds warning limit of  2321329924 KB\t\t\tPrism Central VM IP disk usage exceeds critical limit x % for disks: disks.\n\nCheck ID200316DescriptionChecks if Prism Central system root partition usage is within the threshold to ensure uninterrupted operations.Causes of failureIncreased Prism Central VM system root partition usage due to excessive logging or incomplete maintenance operation.ResolutionsReduce Prism Central VM system root partition usage by removing any known temporary or unneeded files. Refer to KB 5228 for further details.ImpactIf the Prism Central VM system root partition is highly utilised, certain maintenance operations, such as upgrades, may be impacted. If the Prism Central VM system root partition is 100% utilized, services may stop and impact Prism Central cluster management functions.Alert IDA200316Alert TitlePrism Central VM System Root Partition Space Usage HighAlert MessageDisk space usage for root partition mount_path on entity-ip_address has exceeded threshold%.\n\nCheck ID200317DescriptionChecks if Prism Central home partition usage is within the threshold to ensure uninterrupted operations.Causes of failureIncreased Prism Central VM home partition usage due to excessive logging or incomplete maintenance operation.ResolutionsReduce Prism Central VM home partition usage by removing any known temporary or unneeded files. Refer to KB 5228 for further details.ImpactIf the Prism Central VM home partition is highly utilised, then certain maintenance operations, such as upgrades, may be impacted. 
If Prism Central VM home partition is 100% utilised, then services may stop and impact cluster storage availability.Alert IDA200317Alert TitlePrism Central VM home partition disk usage highAlert MessagePrism Central VM IP home partition disk usage exceeds warning limit x %.\t\t\tPrism Central VM IP home partition disk usage exceeds critical limit x %.\n\nCheck ID200328DescriptionCheck that platform disk space usage on the Prism Central VM is within limits.Causes of failureHigh disk usage in the Prism Central VM.ResolutionsRefer to KB 5228 for further details.ImpactPrism Central VM may run out of storage space to store data.Alert IDA200328Alert TitlePrism Central VM platform disk space usage highAlert Smart TitlePrism Central VM svm_ip platform disk space usage highAlert MessagePlatform disk space usage in Prism Central VM svm_ip exceeds percentage_exceed% for disk(s): disk_paths.\n\nCheck ID200334DescriptionChecks if Prism Central upgrade disk usage is within the threshold to ensure uninterrupted upgrade operations.Causes of failureIncreased Prism Central VM upgrade disk usage due to the presence of multiple Prism Central Installer filesResolutionsReduce Prism Central VM upgrade partition usage by removing the Prism Central Installer files which are not needed. Refer to KB 5228 for further details.ImpactIf the Prism Central VM upgrade disk is highly utilised, then the Prism Central Upgrade would fail due to lack of space in the upgrade disk.Alert IDA200334Alert TitlePrism Central VM upgrade disk usageAlert MessagePrism Central VM <IP> upgrade disk usage exceeds critical limit x%\n\nScenarios that trigger pcvm_disk_usage check Warn/Fail on /home partition\nScenario 1The /home directory on long-running PC instances might reach close to its maximum limit as older Prism code is not cleaned up:\n\nnutanix@pcvm$ ncc health_checks run_all\n\nNutanix Engineering is aware of the issue and is working on a fix in a future release to further clean up older Prism codes. For the workaround, refer to the corresponding solution section.\n\n \n\nRemoving older sysstats logsIf you have checked all the usual places but still need to clean up space, you can get the customer’s permission to remove older sysstats logs. Double-check that there are no open support cases with pending RCAs before proceeding, as this data may not yet have been collected.\n\nnutanix@pcvm$ ncc health_checks run_all\n\nScenario 2Hyperkube logs (kublet logs) are not being cleaned up after enabling CMSP / microservices on Prism Central. If Cluster Maintenance Utilities (CMU) has been updated to version 2.0.3 using LCM, the included Scavenger version is missing the capability to clean up certain logs related to CMSP microservices. As a result of this issue, users may find that Prism Central services are not starting or that they cannot log in to PC UI. This issue will first fill up the /home/nutanix/data/sys-storage/NFS_.../ directory, after which the kubelet logs will start filling up the root partition in the /tmp folder.\n\nnutanix@pcvm$ ncc health_checks run_all\n\nWe may find hyperkube.ntnx* logs getting generated and not rotated correctly when CMSP is enabled. Check if the directory /home/nutanix/data/sys-storage/NFS.../kubelet/ is excessively filled with these logs. 
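For example, the directory size and the number of kubelet log files can be checked with generic commands along the following lines (the NFS directory name is abbreviated in this article; substitute the full path as it appears on your Prism Central VM, or use a shell glob as shown):\n\nnutanix@pcvm$ du -sh /home/nutanix/data/sys-storage/NFS*/kubelet/\nnutanix@pcvm$ ls /home/nutanix/data/sys-storage/NFS*/kubelet/ | wc -l\n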
\n\nnutanix@pcvm$ ncc health_checks run_all\n\nWhen working as intended, we should see less than 10G used in this kubelet folder, and the file count for kubelet* would be less than ~15\nWe may additionally see similar kubelet logs filling up space in /tmp, after space in the /home/nutanix/data/sys-storage/NFS... directory has been exhausted. \n\nnutanix@pcvm$ ncc health_checks run_all\n\nScenario 3Starting PC.2022.6 - a dedicated 30 GB disk is created and mounted for PC upgrades. This Vdisk will be used for downloading and extracting upgrade binaries from consecutive upgrades. If the Prism Central VM upgrade disk is highly utilised, then the Prism Central Upgrade would fail due to lack of space in the upgrade disk.\n\nnutanix@pcvm$ ncc health_checks run_all\n\nNote: In case several services are enabled on Prism Central, such as msp, karbon, calm, flow, and objects, we would see /home usage to be high as each of the services will generate several logs and configuration files. Nutanix engineers are constantly working towards improving the /home usage. If none of the above scenarios matches and this affects Prism Central upgrade, engage Nutanix Support team to help manually cleanup /home by trimming the logs. Scenario 4Due to the log file rotation issue, Adonis logs directory usage is high. This log directory usage should not be over than 1G. For Scale-Out Prism Central deployments, check the file usage on each of the Prism Central VMs:\n\nnutanix@pcvm$ ncc health_checks run_all\n\nScenario 5In certain cases, catalina.out may consume a large amount of space on the Prism Central VM.\n\nSSH to the Prism Central and check if /home/nutanix/data/logs/catalina.out is consuming a huge amount of space:\n\nnutanix@pcvm$ ncc health_checks run_all. If the check reports a WARN or FAIL status, disk usage is above the threshold and needs investigation. Generally, space utilization can be queried using df -h. The output below shows the mount points as follows:\n\n/dev/sdb1 is root partition/dev/sdb3 is home partition/dev/sdc1 is data disk partition\n\nnutanix@pcvm$ df -h\n\nData disk usage (/dev/sdXX) or overall multivdisk usage:\n\nVerify the number of VMs supported for the particular Prism Central size is within the limit (consult the Prism Central Guide for your version from the Support Portal for the limits). Contact Nutanix Support. While opening a support case, attach the output of the following commands to the case.\n\nnutanix@pcvm$ df -h\n\nPrism Central VM home partition (/home):\n\nInspect the NCC output to determine which Prism Central VM has high usage, then perform the following:\n\nLog in to the Prism Central VM.Use the cd command to change the location to the /home partition.List the contents of the directory by size using the command below:\n\tnutanix@pcvm$ df -h\n\n\tExamine the output for any large unused files that can be deleted.\n\tRun the du command below to list the usage of each file and sub-directory:\n\tnutanix@pcvm$ df -h\n\n\tExamine the output of large sub-directories. 
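For instance, a generic one-level summary makes the largest sub-directories easy to spot (standard GNU du/sort options; adjust the path as needed):\n\n\tnutanix@pcvm$ du -h --max-depth=1 /home/nutanix 2>/dev/null | sort -h\n\n\t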
You can run the du command for each sub-directory in question to further identify large unused files that can be deleted.\n\tBelow are some common sub-directories of /home where large unused files are likely to exist:\n\t/home/nutanix/software_downloads/ - delete any old versions other than the versions you are currently upgrading./home/nutanix/software_uncompressed/ - delete any old versions other than the versions you are currently upgrading./home/nutanix/data/cores - delete old stack traces that are no longer needed./home/nutanix/data/log_collector/ - delete old NCC Logs with NCC-logs-2018-07-20-11111111111111-1032057545.tar format./home/nutanix/foundation/isos/ - old ISOs./home/nutanix/foundation/tmp/ - temporary files that can be deleted.\n\t\n\nIf the above steps do not resolve the issue or if the issue matches one of the scenarios presented earlier in this article, follow the solution steps outlined below.\nPrism Central VM root system partition (/):Consider engaging Nutanix Support. Gather the output of the commands below and attach it to the support case:\n\nnutanix@pcvm$ df -h\n\nPrism Central VM CMSP partition (/dev/sde):Consider engaging Nutanix Support. Gather the output of the commands below and attach it to the support case:\n\nnutanix@pcvm$ df -h\n\n \n\nScenario 1As a workaround, remove the directories that do not have registered clusters with the corresponding version.\n\nFind the AOS versions of all the registered PEs. You can do so from Prism Central > Hardware > Clusters > AOS version column.List the PE apache console directories at /home/apache/www/console/\n\nnutanix@pcvm$ df -h\n\nIf any PE apache console directories that do not correspond to registered PE versions are present, they should be safe to clean up.\nIf you require further assistance with the cleanup, consider engaging Nutanix Support. Gather the output of the commands below and attach it to the support case:\n\nnutanix@pcvm$ df -h\n\n \n\nRemoving older sysstats logsIf you have checked in all the usual places but still need to clean up space, you can get the customer’s permission to remove older sysstats logs. Please double-check that there are no open support cases with pending RCAs before proceeding, as this data may not yet have been collected.\n\nnutanix@pcvm$ df -h\n\nYou can use the following command to remove gzipped sysstats logs older than a certain date. In the example below, you will remove sysstats logs from all PCVMs that are older than 3 days (or 4320 minutes).\n\nnutanix@pcvm$ df -h\n\nScenario 2If your Prism Central instance matches this scenario, refer to KB-12707 Scenario #2 and open a case with Nutanix Support for assistance with in recovering from the issue.Scenario 3The increase in Prism Central VM, upgrade disk usage, is due to multiple Prism Central Installer files. Reduce Prism Central VM upgrade partition usage by removing the Prism Central Installer files that are not needed.This partition /home/nutanix/upgrade is designed to hold upgrade-related files. So, any other file within this directory can be deleted.The partition usage as listed in df -h:\n\nnutanix@pcvm$ df -h\nTo check the contents of the disk:\n\nnutanix@pcvm$ df -h\n\nTo remove the unwanted files, use the below command:\n\nnutanix@pcvm$ df -h\nScenario 4Nutanix is aware of the issue. The fix for this issue will be made available in a future PC release. 
For a workaround, engage Nutanix Support.\n\nScenario 5\n\nIf you see catalina.out log file is consuming a lot of space, use the following command to restart the prism service on the PCVM. \n\nnutanix@pcvm$ df -h. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about MOVE: Nutanix Move Basic Troubleshooting Guide. ### Assistant: This article describes basic troubleshooting for Nutanix Move (version 3.x & 4.x).. With the release of Nutanix Move 3.0, the Move services are dockerised and all the Move agents now run as Docker Containers. This is a major milestone that allows add/update features without much service disruption as well as provides flexibility to run Move anywhere.\n\nIf you are running an older version of Move, upgrade to Move 4.x and later.\n\nTo upgrade, find the latest version and procedure below:\n\nLatest Move bundle: here\n\nUpgrade Procedure: here\n\nNote: If you face issues in booting the VM after migration to Nutanix AHV, please collect logs on AHV cluster using Logbay.. Overview\n\nMove File locationsWhich user do we use to SSH into the Move VMLog files locations for troubleshootingBasic CommandsHow to generate Move support bundle from cliHow to configure static IP on Move VMFirewall Ports requirements for MoveTesting Connectivity between Move and ESXi/vCenter/Hyper-VCommon user permissions required on the Windows OS if using the domain administrator or local administrator accountsMove is stuck at 0% while \"Seeding Data\"Move VM migration failure due to missing snapshotWhat happens if Move upgrade is stuckAre there any logs created on the VM by MoveIf there is a backup schedule, will it affect Move\n\nMove File locationsMove will use the below host directories. All of them are created under /opt/xtract-vm/.\n\nFiles LocationDescription/opt/xtract-vm/certsPath to the certificate key-pairs used by Move/opt/xtract-vm/keyPath to the SSH keys generated and used by Move/opt/xtract-vm/kvstoreKVStore directories used by srcagent and diskwriter/opt/xtract-vm/logsPath to Move service logs\n\nWhich user do we use to SSH into the Move VMWhen logging in to the Move VM using SSH, use:\n\nUsername : admin\nNote: The admin user on Move does not really have all the major rights, so best way is to change the user to root. To do that, on the Move VM, run:\n\nUsername : admin\n\nEnter the password for admin. You will be able to change the mode to root user and have control over the Move VM. \n\nLog files locations for troubleshootingBelow are the log files which will be important for troubleshooting.\n\nLogs File LocationDescription of the Log File/opt/xtract-vm/logs/diskreader.logUses NFC mechanism to take a copy of the disk and ship it to the AHV side using \"pipe\"./opt/xtract-vm/logs/diskwriter.logThis is the receiver of the copy disk through pipe and it writes it to the container mounted on the Move VM./opt/xtract-vm/logs/mgmtserver.logOrchestrator service, exposes REST APIs to source, target side. If this service is working fine, then the UI will load perfectly.opt/xtract-vm/logs/srcagent.logDealing with source side, it prepares the migration by enabling CBT (Changed Block Tracking), shutting down the VM and shipping the last snapshot before the VM finally boots up on AHV side./opt/xtract-vm/logs/tgtagent.logDealing more on the AHV side. 
Collects cluster information and mounts the needed container to Move during migration and power on the VM on the AHV side.\n\nBasic CommandsTo check the status of the services, run the command svcchk as below. Note that the last column will give you the name of the service (for example, bin_srcagent_1, bin_tgtagent_1, bin_diskreader_1).\n\nUsername : admin\n\nNote: If restarting any service/Docker with an active Migration Plan, the Migration Plan will turn into failed state and the target vdisk can get corrupted. In this case, a manual cleanup on AHV and a new Migration Plan are needed.To restart any single service, run the command \"docker restart <service name>\" as shown below. You can get the individual service names from the svcchk command.\n\nUsername : admin\nTo restart all the services, run the command svcrestart. It will ask you to confirm. Enter \"y\" (yes) to continue:\n\nUsername : admin\n\nYou can also use svcstop and svcstart commands to stop and start all the container services, respectively.\n\nHow to generate Move support bundle from cliTo generate the Move support bundle from CLI, run the command \"root@move on ~ $ /opt/xtract-vm/bin/support-bundle [--dump-path <directory>]\" as shown below. This example dumps the support bundle under the location /opt/xtract-vm/supportdumps. You can select your own directory. If you leave it blank, it will generate the bundle in the /root directory.\n\nUsername : admin\n\nHow to configure static IP on Move VMIn case you deleted the Move NIC or want to give a static IP again to the Move VM, follow the procedure of Assigning a Static IP Address to Nutanix Move. \n\nFirewall Ports requirements for MoveVerify the ports between Move VM and ESX and Target AHV is open. If the environment has different subnets, refer to Port Requirements of Move. \n\nTesting connectivity between Move and ESXi/vCenter/Hyper-V\n\nMove must be able to communicate with vCenter Server on port 443, ESXi hosts on 902 and 443, and AHV on 9440. 
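The connectivity checks referenced below are essentially generic TCP tests; an illustrative sketch is shown here (IP addresses are placeholders, and port 902 has a caveat covered in the next sub-section because container traffic on the appliance is routed through iptables):\n\nadmin@move on ~ $ nc -zv <vCenter_IP> 443\nadmin@move on ~ $ nc -zv <ESXi_Host_IP> 443\nadmin@move on ~ $ nc -zv <Prism_VIP> 9440\n\n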
You can run the following commands from Move VM to verify the connectivity:\n\nUsername : admin\nTesting connectivity on port 902 between Move and ESXi hostThe port connectivity for port 902 could not be tested by nc -zv <ESXi Host IP> 902.Reason: Starting from move 3.0 onwards, iptables is handling the routing for the docker containers residing in the move appliance hence, one needs to run the test through a docker container, please find the steps below:\n\nList all running docker images :\n\tUsername : admin\n\tExample:\n\n\tUsername : admin\n\tTest connectivity from within the docker container named srcagent using the following command :\n\tUsername : admin\n\tExample:\n\n\tUsername : admin\n\t\nCommon user permissions required on the Windows OS if using the domain administrator or local administrator accounts Any user account on a Windows VM must possess three properties to be used as a login account by the Move:\nThe user must have \"Logon as Batch Job\" enabled.UAC must be disabled (for local administrator, it is usually disabled).The users must have \"Restore files and directories\" rights.\n\nMove is stuck at 0%, while \"Seeding Data\"Possible Causes\n\nThe customer uses third-party solutions that rely on snapshots.Third-party snapshots may cause stale snapshots to reside in the VM folder on the VMware datastore.Old snapshots that have been forgotten.\nPossible Solutions\n\nLeverage the Consolidate option in VMware.\n\tThis can be done through the Web or vSphere client by right-clicking the VM and selecting \"Consolidate\".This may fail to work, and the snapshot files may still appear in the VM folder on the datastore.The Consolidate option may be greyed out and not available.\n\tFor Hyper-V, delete checkpoints Create a new snapshot, and then delete it.Storage vMotion the VM from one datastore to another:\n\tThe VM must move from one VMware datastore to a different VMware datastore to ensure consolidation. Verify the absence of stale snapshots in the new datastore location.If Migrate option is greyed out:\n\t\tTry powering off the VM to see if the option becomes available. There are many reasons a Storage vMotion may not be possible while the VM is running.Check for registration issues. Follow VMware KB 2044369. Remove the VM from the inventory and re-add it. Ensure you know which datastore the .vmx file lives in before removing it. In some cases, you may need to power the VM off to remove it from inventory.If none of these options works and you need to leverage this KB, it should only be done with the assistance of VMware, as it involves modifying the DB.\n\t\t\n\t\n \n\nMove VM migration failure due to missing snapshotMove migration may fail due to missing snapshots. Consolidate snapshots from the VMware Snapshot Manager to resolve.\n\nA common signature would be found in Move mgmtserver.log:\n\nUsername : admin\n\nCheck the Source VM Snapshot Manager, and you will be unable to find the snapshot the logs refer to.\n\n        \n\nConsolidate all snapshots of the source VM from VMware Snapshot Manager. Once finished, recreate a new job in the Move for migration.\n\nWhat happens if Move upgrade is stuck?If the Move upgrade is stuck, check the updater.log on the Move VM. 
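For example, the log can be followed live from the Move VM while the upgrade runs (assuming updater.log is written under the same /opt/xtract-vm/logs path as the other service logs listed earlier):\n\nadmin@move on ~ $ tail -f /opt/xtract-vm/logs/updater.log\n\n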
If you cannot figure out the exact reason, collect the logs, and share them with Nutanix Support.\n\nProceed with verification of the status for all the services (the expected state is UP and not crashing) by running the command svcchk.\n\nTo roll back the Move appliance to a previous version, run the script:\n\nUsername : admin\n\nAre there any logs created on the VM by Move? Yes, logs are created on the VMs when Move triggers VirtIO installation and IP address retention scripts.On Windows, the logs can be found in:\n\nUsername : admin\n\nOn Linux, they can be found in:\n\nUsername : admin\n\nIf there is a backup schedule, will it affect Move?Yes, it will be difficult to take the number of changes into account on the disk when there is a backup job in progress. So, it would be best to perform the migration outside of backup hours or disable the backup until the migration is completed.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Move: Update of a storage container may fail if this container has a container filesystem whitelist configured. ### Assistant: Update of a storage container may fail after 5 minutes with \"stream timeout\" or \"Http Request to endpoint 127.0.0.1:9080 failed with error.Response status:1\" if this container has a container filesystem whitelist configured.. Update of a storage container that has container level filesystem (NFS) whitelist may fail with one of the following messages appearing after 5 minutes:\n\nOops - Server error\n\nOr:\n\nOops - Server error\n\nErrors observed\n\nIn Prism Element, when updating the container, an error “stream timeout” or “Http Request to endpoint 127.0.0.1:9080 failed with error. Response status:1” appears.\n\t\n\n\tOr:\n\n\t\n\tnCLI command takes several minutes to execute, followed by the error:\n\tOops - Server error\n\tNutanix Move appliance may fail to configure AOS filesystem whitelist with the following error:\n\tOops - Server error\n\tThe following errors can also be found in ~/data/logs/prism_gateway.log on the Prism leader:\n\tTo identify the Prism leader CVM, see KB 1841.\n\n\tOops - Server error\n\t\nWorkflow of modifying \"Filesystem Whitelists\" of a container\n\nPrism Gateway makes an API call to genesis to update the whitelists of the container when the container \"Filesystem Whitelists\" is configured to add Move IP aa.bb.cc.dd. The request will be logged to prism_gateway.log. The following example is to add Move IP aa.bb.cc.dd to \"Filesystem Whitelists\" of a container.\n\nOops - Server error\n\nGenesis internally makes a call to modify Firewall. Events are logged to genesis.out\n\nOops - Server error\n . Nutanix Engineering is aware of the issue and is working on a fix in a future release.\n\nWorkaround 1To work around the issue, you need to delete a per-container filesystem whitelist.\n\nNote: If you have active external clients that still need access to the container over NFS/SMB (e.g. Move appliance), make sure that you update Global Filesystem with all IP addresses from the container filesystem whitelist before proceeding with the workaround. When a container filesystem whitelist is removed, a global AOS filesystem whitelist will be used and some clients may lose access to the container if they are not whitelisted on the cluster level. 
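Before applying the workaround, one way to confirm which CVM is logging the failure is a generic search for the error strings quoted above across the cluster (a sketch only; the exact message text can differ between AOS versions):\n\nnutanix@cvm$ allssh 'grep -iE \"stream timeout|127.0.0.1:9080\" ~/data/logs/prism_gateway.log | tail -5'\n\n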
Refer to the documentation for more information on Filesystem Whitelists.\n\nIn Prism Element, select Storage from the pull-down main menu (upper left of screen) and then select the Table and Storage Container tabs.Select the storage container in question and then click the Update button.Find Filesystem Whitelist in the Advanced Settings of the container.Empty Filesystem Whitelist text box and click the Save button.Once the task completes, you can retry the operation that was failing before.\n\nIn case the steps described in this article do not resolve the issue, consider engaging Nutanix Support for further investigation.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about HDD or SSD disk troubleshooting. ### Assistant: This article describes how to troubleshoot a failed disk and identify which component is causing the failure. If you have received an alert for a failed disk, you can troubleshoot a specific node instead of examining an entire cluster.. When a drive is experiencing recoverable errors, warnings or a complete failure, the Stargate service marks the disk as offline. If the disk is detected to be offline 3 times within the hour, it is removed from the cluster automatically and an alert is generated (KB 4158 or KB 6287).\n\nIf an alert is generated in Prism, the disk must be replaced. Troubleshooting steps do not need to be performed.\n\nIf a failed disk is encountered in a Nutanix Clusters on AWS, once the disk is confirmed to have failed proceed to condemn the respective node. Condemning the affected node will replace it with a new bare metal instance of the same type.. Once the disk is replaced, an NCC health check should be performed to ensure optimal cluster health. However, if an alert is not generated or further analysis is required, the steps below can be used to troubleshoot further.Before you begin troubleshooting, verify the type of HBA controller.Caution: Using the SAS3IRCU command against an LSI 3408 or higher HBA can cause NMI events that could lead to storage unavailability. Confirm the HBA controller before using these commands. To determine what type of HBA is used, look for the controller name located in /etc/nutanix/hardware_config.json on the CVM.Example of the output when SAS3008 is used. In this case, the command SAS3IRCU is the correct command to use. Note the \"led_address\": \"sas3ircu:0,1:0\" line:\n\n\"node\": {\n\nExample of the output when SAS3400/3800 (or newer) is used. In this case, using SAS3IRCU would be ill-advised. Use the storcli command instead. For information on StorCLI refer to KB-10951. Note \"led_address\": \"storcli:0\" line.\n\n\"node\": {\n\n \n\nIdentify the problematic disks\n\n\tCheck the Prism Web console for the failed disk. In the Diagram view, you can see red or grey for the missing disk.Check the alerts in the Prism Web console for the disk alerts, or use the following command to check for disks that generate the failure messages.\n\t\t\"node\": {\n\t\tCheck if any nodes are missing mounted disks. 
The two outputs should match numerically.\n\t\tCheck the disks that are mounted on the CVM (Controller VM).\n\t\t\t\"node\": {\n\t\t\tCheck the disks that are physical in the CVM.\n\t\t\t\"node\": {\n\t\t\tCheck if the status of the disks is all Online and indicated as Normal.\n\t\t\t\"node\": {\n\t\t\t\n\t\tValidate the expected number of disks in the cluster.\n\t\t\"node\": {\n\n\t\tThe output of the command above should be the sum of the outputs of steps 1c.i and 1c.ii.\n\n\t\tThere are instances where the number can be higher or lower than expected. So, it is an important metric that can be compared to the disks listed in step 1b.\n\t\tLook for extra or missing disks.\n\t\t\"node\": {\n\t\tCheck that all disks are indicated as mounted rw (read-write) and not ro (read-only).\n\t\t\"node\": {\n\t\t\n\tIdentify the problems with the disks nodes\n\tOrphaned disk ID\n\t\tThis is a disk ID that the systems no longer use but was not properly removed. Symptoms include seeing an extra disk ID listed in the output of ncli disk ls.\n\n\t\tTo fix the orphaned disk ID:\n\n\t\t\"node\": {\n\n\t\tEnsure that you validate the disk serial number and that the device is not in the system. Also, ensure that all the disks are populating using lsscsi, mount, df -h, and counting the disks for the full-disk population.\n\t\tFailed disk and/or missing disk\n\t\tA disk becomes failed and/or is no longer visible to the system for several reasons. There are two approaches to solving a disk problem. The top-down approach is the software representation of the disk down to the hardware/physical layer. The bottom-down approach documents the investigation of a disk from a bottom-up perspective since the 'top' layer can be relative to the problem reported by the software layer (Prism, Hades, Stargate, etc.).\n\n\t\tCheck if the disk is visible to the controller as it is the device whose bus the disk resides on.\n\n\t\tlspci - displays the PCI devices seen by the CVM.\n\n\t\t\tNVME device - Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01).SAS3008 controller - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02) - LSI.SAS2008 controller - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03).SAS2308 controller (Dell) - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05).MegaRaid LSI 3108 (Dell) - RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 [Invader] (rev 02).LSI SAS3108 (UCS) - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS3108 PCI-Express Fusion-MPT SAS-3 (rev 02).\n\t\t\tlsiutil - displays the HBA (Host Bus Adapter) cards perspective of the ports and if the ports are in an UP state. If a port is not up, either the device on it does not respond, or the port or connection to the device is bad. The most likely scenario is the device (disk).\n\t\t\t\"node\": {\n\t\t\tlsscsi - lists the SCSI bus devices seen that include any HDD or SSD (except NVME, which does not pass through the SATA controller).sas3ircu - reports slot position and disk state. It is useful for missing disks or verifying that disks are in the correct slot. (Do NOT run the following command on Lenovo HX hardware as it may lead to HBA lockups and resets)  \n\t\t\t\"node\": {\n\t\t\tstorcli - reports slot position and disk state. Commonly found on Lenovo Chassis but will be included on NX G8 platforms. 
\n\t\t\tSlot # (should match the physical slot)\n\n\t\t\tState is the state of the disk. New disks will be listed as Available (AVL). After bootup and SCSI inquiry healthy disks will show as Ready (RDY).\t\t\tIn the example below, a disk was detected but was never marked as Ready (RDY). It remains in Available (AVL) status due to a bad disk. In the next section, we see the related mpt3sas dmesg output showing the SCSI inquiries constantly polling this disk.  \t\t\t(Note that the Serial No has been truncated and replaced by the Firmware Revision LC72 repeated for this platform. This was consistent for the systems in this particular cluster. This does not indicate a problem, simply a unique behaviour.)\n\n\t\t\t\"node\": {\n\t\t\tCheck the CVM's dmesg for LSI mpt3sas messages. We should typically see one entry for each physical slot. (The below example shows SAS address \"0x5000c5007286a3f5\" is repeatedly checked due to a bad/failed disk. Note how the other addresses are detected once, and the suspect is repeatedly being polled.)\n\t\t\t\"node\": {\n\t\t\tsmartctl - if Hades indicate that a disk is checked by smartctl 3 times in an hour, it is automatically failed.\n\t\t\t\"node\": {\n\n\t\t\tSee KB 8094 for troubleshooting with smartctl.\n\t\t\tCheck for offline disks using NCC check disk_online_check.\n\t\t\t\"node\": {\n\n\t\t\tSee KB 1536 for further troubleshooting offline disks.\n\t\t\tCheck if NCC has any errors indicated for disks, mount paths being offline, or failed disks in the system.Confirm if disks are seen from LSI Config Utility:\n\t\t\tEnable LSI HBA OPROMGo to “Advanced-->PCIe/PCI/PnP Configuration”, change CPU1 SLOT2 PCI-E 3.0 x8 OPRM to “Enabled” as below showing,Press \"F4\"Enter LSI Config Utility on bootup:\n\t\t\t\t\n\t\t\t\tPress <Enter> twice to confirm if disks are seen or not:\n\t\t\t\t\n\n\t\t\t\t\n\t\t\t\t\n\t\t\tThere can be a case where the disk is DOWN in lsiutil, usually after a replacement or an upgrade of the disks. When all the above checks are carried out, and the disk is still not visible, compare the old and new disk \"disk caddy or tray\". Ensure the type is the same. There can be cases where an incorrect disk type is dispatched, and it does not seat properly in the disk bay hence not being detected by the controller.\n\t\t\n\t\n\n \n\nIdentify the node type or the problematic node\n\n\tRun ncli host ls and find the matching node ID.\n\n\tSpecific node slot location, node serial, and node type is important information to document in case of recurring issues. It also helps to track the field issues with the HBA's, node locations, and node types.\n\tIdentify the failure occurrence\n\tCheck the Stargate log.\n\t\tThe stargate.INFO log for the corresponding period indicates if Stargate saw an issue with a disk and sent it to the Disk Manager (Hades) to be checked or had other errors accessing the disk. Use the disk ID number and serial number to grep for in the Stargate log on the corresponding node the disk is in.\n\t\tThe Hades log contains information about the disks it sees and the health of the disks. It also checks which disk is metadata or Curator disk and selects one if one did not already exist in the system or was removed/disappeared from the system. 
Check the Hades log.Check df -h in /home/nutanix/data/logs/sysstats/df.INFO to see when the disk was last seen as mounted.Check /home/nutanix/data/logs/sysstats/iostat.INFO to see when the device was last seen.Check /home/log/messages for errors on the device, specifically using the device name, for example, sda or sdc.Check dmesg for errors on the controller or device. Run dmesg | less for the current messages in the ring, or look at the logged dmesg output in /var/log.\n\tIdentify the reasons for disk failure\n\tCheck when the CVM was last started if the disk's last usage data were not available. Again, reference the Stargate and the Hades logs.Check the Stargate log around the time of disk failure. Stargate sends a disk to Hades to be checked if it does not respond within a given time and ops time out against that disk. Different errors and versions represent it differently, so always search by disk ID and disk serial.\n\tCheck the count of disk failures\n\tIf a drive failed more than once in this slot and the disk was replaced, it would indicate a potential chassis issue at that point.\n\tCheck if lsiutil is showing errors.\n\tIf lsiutil shows errors evenly on multiple slots, it indicates a bad controller.\n\t\n\nSample command output for reference:\n\n\"node\": {\n\nOn Nutanix Clusters on AWS:\n\n\"node\": {\n\nTo run the commands in a Hyper-V environment, use winsh on the CVM:\n\n\"node\": {\n\n\"node\": {\n\nNOTE: For ID: 191, G-Sense_Error_Rate for Seagate HDDs can be safely ignored unless there is performance degradation. The G-Sense_Error_Rate value only indicates the HDD adapting to shock or vibration detection. Seagate recommends not to trust these values as this counter dynamically changes the threshold during runtime.\n\n\"node\": {. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Pre-upgrade Check: AOS Upgrade (test_sha_mac_ssh_config). ### Assistant: This pre-upgrade check runs during an AOS upgrade. It confirms that no hosts on your cluster are configured to only accept deprecated SSH ciphers, and flags any such hosts.. test_sha_mac_ssh_config is a pre-upgrade check that runs during AOS upgrades. It checks whether your hypervisor is configured to use the proper SSH ciphers. Starting with AOS 5.15.2 and 5.17.x, sha2 ciphers are enforced for SSH, and if your host is not configured to accept these ciphers, the upgrade will stall on the affected host. You need to correct this problem before starting the upgrade.Error messages generated on the UI by this check:\nHost(s) : [u'x.x.x.x'] configured to only accept deprecated SSH ciphers, Please refer KB 9730\nOn any CVM, you can run the following command to confirm which SSH ciphers are being accepted by your hypervisor:\n\nnutanix@cvm$ hostssh 'grep \"^MACs\" /etc/ssh/sshd_config'\nThe expected value currently for hosts with CVMs running AOS 5.15.2+ and 5.17.1+ is MACs hmac-sha2-256,hmac-sha2-512,hmac-sha1. Manually updating the sshd config:\n\nOn each host which is reported by the pre-check, do the following:\n\nSSH to the hypervisorEdit the /etc/ssh/sshd_config file with vi or other text editorFind the line which starts with \"MACs\" such as this \n\tMACs hmac-sha1,hmac-sha1-96\n\tChange the line to reflect the newer SSH ciphers:\n\tMACs hmac-sha1,hmac-sha1-96\n\t\n\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about How to obtain the SEL output from the IPMI or CLI. ### Assistant: The SEL (System Event Log) output is an indispensable asset in countless scenarios, serving as a vital resource for pinpointing the underlying causes of issues, devising effective solutions, and conducting thorough hardware diagnostics. This article will provide comprehensive guidance on retrieving this information through the user-friendly web GUI and the command line interface.. Frequently, when troubleshooting hardware issues it becomes necessary to obtain the SEL or System Event Log output. Nutanix support has the option to retrieve this information from Insights. However, there are instances where Pulse is not enabled or the command fails to run successfully. This article aims to provide customers with a set of instructions to obtain the SEL output either through the webUI (IPMI) or the command line (CLI).. Navigating to the IPMI from Prism Element: \n\nPrism Element > Hardware > Diagram > select the Node for the IPMI you want to access > IPMI should be hyperlinked\n\nYou can also obtain the IPMI address from the CLI with the following command: \n\nnutanix@cvm:~$ panacea_cli show_all_ips\n\nNote: If you need to update or change your password to the IPMI, click here for instructions how.\n\nOption 1: Downloading the SEL output from the WebUI\n\nIn G5, G6 and G7 IPMI, you will find the SEL output under Server Health > Health Event Log\n\nTo download the output as a .csv file you will click 'Save' highlighted in red. \n\nIn G8 IPMIs, the SEL output can be found by expanding System > Health Event Log\n\nTo download the output as an excel file you will click 'Export to Excel' highlighted in red\n\n \n\nOption 2: Gathering the SEL output from the CLI\n\nAHV\n\nnutanix@cvm:~$ panacea_cli show_all_ips\nESXi\n\nnutanix@cvm:~$ panacea_cli show_all_ips\nHyper-V\n\nnutanix@cvm:~$ panacea_cli show_all_ips. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Pre-check: test_is_nos_upgrade_supported_on_esx. ### Assistant: The pre-check test_is_nos_upgrade_supported_on_esx verifies if the version of ESXi hosts on the cluster is supported on the target AOS version.. test_is_nos_upgrade_supported_on_esx check is a pre-upgrade check that verifies if the version of ESXi hosts on the cluster is supported on the target AOS version.\n\nThe pre-check may fail if the ESXi version is lower than 6.7.0 and the target AOS version for the upgrade is 5.18 or above for STS or 5.15.3 or above for LTS, with the failure message below:\n\nHost x.x.x.x is running on ESXi version 6.5.0 which is lower than supported version 6.7.0. Upgrade host, Please refer KB 9646. Make sure the ESXi version is 6.7.0 or above before upgrading AOS to 5.18 or above for STS or 5.15.3 or above for LTS.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: ahv_remote_log_forwarding_check. ### Assistant: NCC 4.1.0 ahv_remote_log_forwarding_check detects issues with AHV host log forwarding to the configured remote server.. 
The NCC health check ahv_remote_log_forwarding_check detects whether log forwarding to remote server is operating normally.This check runs on only AHV, but also only if the cluster has had additional STIG DoDIN APL enablement performed on supported AOS/AHV versions. By default, and unless specific steps have been taken to enable this functionality on-demand, this check will return a PASS result.When remote log forwarding is needed to forward the AHV host logs to a remote machine, the host itself records the failure or success of sending these records to the remote machine (configured by the user by following the Portal Guide; Acropolis Advanced Administration Guide \\ Configuring the Remote Syslog Server Settings). This status is maintained in an rsyslog stats file (/var/log/rsyslog-stats) on each AHV host. The records of success/failure is updated regularly and this health check parses the recent entries in that file to recognise any failures and trigger any alert workflows configured on the cluster such as email, syslog, SNMP trap, Prism Alert notifications.It can be run as part of the complete NCC check by running:\nnutanix@cvm$ ncc health_checks run_all\n or individually as:\n\nnutanix@cvm$ ncc health_checks run_all\n You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.This check runs on each CVM to assess its AHV host. It does not run on Prism Central.This check is scheduled to run every 5 minutes and produces a WARN alert after 3 consecutive failures.Sample Output:Check Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\nCheck Status: WARN\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID111088DescriptionCheck log forwarding to remote server.Causes of failureForwarding logs to remote server is failing.ResolutionsCheck failure causes for forwarding the logs to remote server. Refer KB10811.ImpactRemote server will not be receiving log output from cluster.Alert IDA111088Alert TitleForwarding logs to remote server is failing.Alert MessageForwarding logs to remote server is failing on host {host_ip}.\n . 
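Note: as a quick, illustrative sanity check (this assumes the standard ncli rsyslog-config entity, which may vary between AOS releases), the remote syslog server configured on the cluster can be listed from any CVM:\n\nnutanix@cvm$ ncli rsyslog-config ls-servers\n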
A warning produced by this check could indicate one or more of the following possible conditions:\nNetwork connectivity issue between the AHV host and the remote rsyslog serverAHV host uplink or local network issueWAN/VPN/inter-site routing issue between Nutanix AHV cluster and remote rsyslog serverFirewall/ACL blocking AHV host communication to the remote rsyslog serverRemote rsyslog server offline/powered off, services failed, storage fullRemote rsyslog server unable to handle incoming volume of messages from one or more AHV hosts, or other rsyslog clients in addition to the AHV hosts.\nFrom the affected AHV host(s), consider using the following commands/utils to help isolate the issue:\n\nPing the configured remote rsyslog server to expose any basic switching/routing issues.Use NCAT to test whether a protocol/port-specific connection can be opened (or is being blocked by a firewall/ACL) \n\n[root@AHV ~]# nc -zvu -w3 <remote_syslog_server_IP_FQDN> <remote_port_usually_514>\n\nInvestigate the contents of the /var/log/rsyslog-stats log file for any entries around the timestamp reported by this NCC check/alert.\n\n[root@AHV ~]# nc -zvu -w3 <remote_syslog_server_IP_FQDN> <remote_port_usually_514>\nIn case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/. To speed up the resolution time and minimize possible impact, collect and attach the following information to the support case:\n\nA complete NCC report:\n\n[root@AHV ~]# nc -zvu -w3 <remote_syslog_server_IP_FQDN> <remote_port_usually_514>\n\nA log bundle generated from the cluster. This can be collected through Prism Web Console's health page. Select Actions > Collect Logs. Logs can also be collected through command line using logbay (KB 6691 - NCC - Logbay Quickstart Guide):\n\n[root@AHV ~]# nc -zvu -w3 <remote_syslog_server_IP_FQDN> <remote_port_usually_514>\n\nA copy of the /var/log/rsyslog-stats log file from any affected AHV host.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about CVM services won't start after the Hardware replacement for VMware NSX Enabled ESXi servers. ### Assistant: The customer has VMware infrastructure and uses the SDN (Software Defined Networking) solution NSX-V for DFW (Distributed Firewall) or Guest Introspection features. When hardware such as a DIMM or SATADOM is replaced, the VMware administrator removes the ESXi host from the NSX solution, which generally removes the CVM from the exclusion list in the NSX solution. After the hardware part is replaced, the communication between the CVM and ESXi via the 192.168.x.x network is broken and services cannot start, with SSH key based messages in the genesis.out log file.. CVM services failed to start with the below symptoms: The /home/nutanix/data/logs/genesis.out will show the following:\nnutanix@cvm:~$ tail -f genesis.out\nRunning fix_host_ssh fails to fix this issue:\nnutanix@cvm:~$ tail -f genesis.out\nThe CVM fails to communicate with the ESXi host via the eth1 network interface. \nnutanix@cvm:~$ tail -f genesis.out\nThe steps below do not resolve the issue:\nRe-create the vmk1 and check the communicationRe-create vSwitchNutanix, but communication still failedInject the static route via the arp commandReboot the CVM and enable/disable NIC2 for the CVM - a network services restart was also triedReboot the ESXi host. 
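As an additional, illustrative verification (assuming the default internal network addressing, where the ESXi host exposes its vmk1 interface as 192.168.5.1), the internal CVM-to-host path can be tested from the affected CVM:\nnutanix@cvm:~$ ping -c 3 192.168.5.1\n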
Confirm that the CVM is part of the exclusion list. Procedure:\n\nNavigate to Exclusion List settings.\n\tIn NSX 6.4.1 and later, navigate to Networking & Security > Security > Firewall Settings > Exclusion List.In NSX 6.4.0, navigate to Networking & Security > Security > Firewall > Exclusion List.\n\tClick Add.Move the VMs (the CVM with the services issue) you want to exclude to Selected Objects.Click OK.\nReference. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Windows Server 2008 R2 VM on non-AHV hypervisor boots into recovery mode when VirtIO is uninstalled. ### Assistant: Uninstalling VirtIO 1.1.6 or higher from a Windows Server 2008 R2 VM on a non-AHV hypervisor makes Windows boot into recovery mode.\r\nAlso, migrating a Windows Server 2008 R2 VM with Nutanix Move 4.2.1 or higher from ESXi/Hyper-V makes the source VM boot into recovery mode.. Uninstalling VirtIO 1.1.6 or higher from a Windows Server 2008 R2 VM on a non-AHV hypervisor makes Windows boot into recovery mode. Also, migrating a Windows Server 2008 R2 VM with Nutanix Move 4.2.1 or higher from ESXi/Hyper-V makes the source VM boot into recovery mode.. Perform the following steps to recover the VM:\nPress the F8 key while the VM is powering up to show Windows advanced boot options.Select \"Disable Driver Signature Enforcement\".\tWindows boots up normally.Start Registry Editor. Click on the Start button and search for regedit.exe.Delete the \"HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\services\\vioscsi\" branch.Reboot Windows and confirm Windows boots up normally.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about On mixed-hypervisor cluster with AHV storage-only nodes, VM management fails with error \"Error occurred while getting network list\". ### Assistant: VM management in a mixed hypervisor cluster fails due to a missing entry in /etc/nutanix/hardware_config.json on AHV storage-only nodes. VM create or update operations on mixed ESXi+AHV clusters may fail with the following error:\n\nError occurred while getting network list\n\nThis issue can typically occur after cluster expansion and adding AHV storage-only nodes.\n\nThe following error can be found in the /home/nutanix/data/logs/uhura.out log:\n\nError occurred while getting network list\n\nThe minimal_compute_node field is missing in the /etc/nutanix/hardware_config.json file on the storage-only CVMs:\n\nError occurred while getting network list\n\nTo check all nodes, run the following command:\n\nError occurred while getting network list. Starting from AOS 6.1, it is possible to select the HCI or Storage-only node type while expanding a cluster. HCI to Storage-only or Storage-only to HCI node conversion is taken care of by the Expand cluster operation itself. Remove affected nodes from the cluster and re-add them while selecting the correct node type.In case removing the node is not possible due to space constraints, consider engaging Nutanix Support. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A130348 - VmInPausedState. 
### Assistant: NCC 4.2: This Nutanix article provides the information required for troubleshooting the alert \"VM is in Paused state\" for your Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert \"VM is in Paused state\" for your Nutanix cluster.Alert OverviewThe \"VM is in Paused state\" alert is generated when the Acropolis service detects a VM in Paused state. This is an unexpected VM state which should not be seen during daily operations.Sample Alert\nWarning: VM found in paused state.\nPotential ImpactVM is not operational.Output Messaging \nCheck ID130348DescriptionVM found in paused state.Cause of FailureFailed migration or some hypervisor issue.ResolutionsPower cycle the VM to get it out of this state.ImpactVM in paused state most likely means downtime and it's in an irrecoverable state.Alert Smart TitleVM {vm_name} with ID {vm_uuid} has been found in paused stateAlert TitleVM is in Paused stateAlert MessageVM {vm_name} with ID {vm_uuid} has been found in paused state. TroubleshootingPower Off the affected VM and then power it On to recover. Contact Nutanix Support at https://portal.nutanix.com as soon as possible to fully troubleshoot the problem. Collect additional information and attach them to the support case.\n\nCollecting Additional Information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB-2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB-6691.\n\nnutanix@cvm$ logbay collect --aggregate=true\n\nIf the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command:\n\nnutanix@cvm$ logbay collect --aggregate=true\nAttaching Files to the CaseAttach the files at the bottom of the support case on the support portal.If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB-1294.Requesting AssistanceIf you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers. You can also click the Escalate button in the case and explain the urgency in the comment, and Nutanix Support will be in contact.Closing the CaseIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Pre-upgrade check: Hypervisor Upgrade (test_host_upgrade_versions_compatible). ### Assistant: This pre-upgrade check runs during hypervisor upgrades. It checks if the base and target ESX versions are upgrade compatible.. test_host_upgrade_versions_compatible is an ESXi pre-upgrade check that runs during hypervisor upgrade. It checks whether the base and target ESXi versions are upgrades compatible. 
The reason for this check is that ESXi introduced a new certificate for VIB signing into the trust store in 2018 to begin the transition from the old certificate (which expired on 31st December 2019).\n\nError messages generated on UI by this check:\n\n\"Current host version %s and the target host version %s are not compatible for upgrade\". VMware recommends upgrading ESXi hosts to a minimum of the builds (listed below), which were released after March 2018 and have the newer signing certificate in the trust store. Doing so prevents future installation and upgrade failures due to the untrusted signing certificate:\n6.0 ESXi600-201807001 (Build: 9239799) or later 6.5 U2 (Build: 8294253) or later6.7 GA (Build: 8169922) or later.\n\nThe upgrade should be retried using a compatible ESXi release (above). Once upgraded to a compatible build, you may upgrade to the target release version. For all ESXi build numbers and versions, refer to VMware KB 2143832.VMware has addressed this issue in VMware KB 76555.For any queries consider engaging Nutanix Support.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Nutanix Files - File and folder permissions. ### Assistant: Configure Nutanix Files Admin and default permissions for a home and general purpose shares.. It may be difficult to determine user permissions without access to Active Directory (AD). If there are issues with share access or domain joining because of permissions, and you are certain that there are no issues with connectivity, follow the steps below to verify if the logged-in user has AD user privileges.. Execute the following command on Windows SMB Client to retrieve a list of groups of which the logged-in user is a member. Any user that is a member of the BUILTIN\\Administrators group (most of the time, Domain Admins) should be able to access the share.\n\nPS C:\\Windows\\system32> whoami /groups /FO list\n\nSample output:\n\nPS C:\\Windows\\system32> whoami /groups /FO list\n\nDefault Permissions for Nutanix Files SharesThere are two share types: distributed and standard. In the example output below, nufs-sec-smb1 is a distributed share while nufs-sec-smb2 is a standard share.\n\nPS C:\\Windows\\system32> whoami /groups /FO list\n\nLook at the default permissions of these shares (from the client machine). In the example output below, the named directory UserJohn is the TLD (top-level directory) at the root of the distributed share.\n\nPS C:\\Windows\\system32> whoami /groups /FO list\n\nPS C:\\Windows\\system32> whoami /groups /FO list\n\nPS C:\\Windows\\system32> whoami /groups /FO list\n\nThe same permissions can be checked using icacls or Explorer.\n\nIn Nutanix Files 3.0 and later, a user or a group can be added as a File Server Admin/Backup Admin in Files user interface. Alternatively, use the ncli fs add-admin-user command.\n\n \n\nGet files/directories authorization information.From Nutanix Files version 3.8 onwards there is a new CLI introduced to query files and directories authorization information. \n\nPS C:\\Windows\\system32> whoami /groups /FO list\nIn the following example, a user named \"Admin\" is unable to access the folder \"users\" in the share \"Eng\". Use the following steps in order to validate the permissions or authorization for a user named \"Admin\" in the folder \"users\".1. SSH into any FSVM2. List current ACL set on a share and folder. 
In this example, it is share \"Eng\" and folder \"users\". In the output below, AD user \"Admin\" has set explicit DENY ACL in the \"users\" folder.\n\nPS C:\\Windows\\system32> whoami /groups /FO list. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Migration of resource heavy VMs might get stuck on AHV clusters. ### Assistant: On AHV clusters, migration of large VMs might fail. In the Prism web console, you can see the migration progress up to 99%, and then the migration stops indefinitely.. VM live migration can be manually initiated or be a result of putting the host into maintenance mode (to perform firmware or hypervisor upgrade, for example). Live migration of the VM that has a high rate of change to its memory contents may take a significant amount of time or look \"stuck\" and the process eventually fails. This can happen on a cluster where 1G and 10G uplinks are configured. In both cases, this happens when the change rate of the VM memory exceeds the network bandwidth.\n\nPrism will generate an alert when the migration task fails. Furthermore, sample errors can be found in the acropolis.out log on the Acropolis leader. Follow the steps from KB 2305 to find the Controller VM (CVM) with the Acropolis leader role.\n\nThe following error messages are commonly seen in the acropolis.out log on the Acropolis leader on clusters configured with 1G uplinks:\n\n2020-05-08 14:24:03 ERROR enter_maintenance_mode_task.py:887 Failed to evacuate 18657db7-1f76-48e3-a7b9-ea1311423d28: HypervisorError: Src:[operation failed: migration job: Not converging after 50 attempts at 99%.]; Dest:[internal error: qemu unexpectedly closed the monitor: 2020-05-08T20:24:02.501332Z qemu-kvm: load of migration failed: Input/output error\n\n2020-05-08 14:24:03 ERROR enter_maintenance_mode_task.py:887 Failed to evacuate 18657db7-1f76-48e3-a7b9-ea1311423d28: HypervisorError: Src:[operation failed: migration job: Not converging after 50 attempts at 99%.]; Dest:[internal error: qemu unexpectedly closed the monitor: 2020-05-08T20:24:02.501332Z qemu-kvm: load of migration failed: Input/output error\n\nThe following error messages are commonly seen in the acropolis.out log on the Acropolis leader on clusters configured with 10G uplinks:\n\n2020-05-08 14:24:03 ERROR enter_maintenance_mode_task.py:887 Failed to evacuate 18657db7-1f76-48e3-a7b9-ea1311423d28: HypervisorError: Src:[operation failed: migration job: Not converging after 50 attempts at 99%.]; Dest:[internal error: qemu unexpectedly closed the monitor: 2020-05-08T20:24:02.501332Z qemu-kvm: load of migration failed: Input/output error. Perform the following steps to investigate the issue:\n\nCheck maximum available NIC bandwidth and load balancing mode:\n\tRun the following commands on CVM:\n\t\tnutanix@cvm:~$ manage_ovs show_uplinks\n\t\tIn the output of commands mentioned above, check which NICs are assigned to br0 bridge and their speed:\n\t\tnutanix@cvm:~$ manage_ovs show_uplinks\n\t\t\n\tCheck VM migration progress by running the following command multiple times with a delay of 1-2 minutes on the source AHV host:\n\tnutanix@cvm:~$ manage_ovs show_uplinks\n\tTo identify the network bandwidth utilization, you can check the sar stats for the AHV source host.\n\tFor example, where VM migration failed on 1st Feb at 08:43 AM, the following command can be run on the sar file. 
The command filters for occurrences where the physical NIC is transferring more than 8 Gb of data per second.\n\n\tnutanix@cvm:~$ manage_ovs show_uplinks\n\n\tStarting from AHV 20201105.2159, in /var/log/libvirt/qemu/<vm_uuid>.log on the source host, you can also see the memory dirty rate of the migrated VM, which is helpful when troubleshooting failures:\n\n\tnutanix@cvm:~$ manage_ovs show_uplinks\n\t\n\nFor further troubleshooting, consider checking /home/nutanix/data/logs/acropolis.out on the Acropolis leader. KB 7949 describes the normal VM Live Migration Process and can be used as a reference.\n\nWorkarounds to consider:\n\nPower off memory-intensive VMs and use acli to start them on the target AHV host instead of live migrating them. Run the following command on any CVM in the cluster to start a VM on a specific AHV host:\n\tnutanix@cvm:~$ manage_ovs show_uplinks\n\tSchedule the VM migration or maintenance at a time when the VM is as idle as possible.Use faster uplinks if available.Verify that the network uplinks are not degraded.\n\nNote: Use of LACP load balancing mode may not improve the situation as a single TCP connection can only use the bandwidth of a single NIC.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Lazan service restarts unexpectedly due to non-ASCII characters in VM disk names. ### Assistant: Lazan service restarts frequently due to non-ASCII characters being present in the VM name/VMDK files on the cluster after upgrading to AOS 6.5.x or 6.0.x. Lazan service restarts frequently due to non-ASCII characters in the VM name/VMDK files on the cluster after upgrading to AOS 6.5.x or 6.0.x. The following alert is generated:\nOne or more cluster services have restarted within 15 minutes in the PCVM/CVM, the total restarts being >= 10. The following are the service(s) that have restarted : ['lazan'].\nThe following CRITICAL error signature can be seen in the /home/nutanix/data/logs/lazan.out logs:\n\nOne or more cluster services have restarted within 15 minutes in the PCVM/CVM, the total restarts being >= 10. The following are the service(s) that have restarted : ['lazan'].. This issue is resolved in:\nAOS 6.5.X family (LTS): AOS 6.5.3\nPlease upgrade AOS to the versions specified above or newer.Engage Nutanix Support at https://portal.nutanix.com/ to recover the cluster.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about LCM Log Collection Utility. ### Assistant: Overview and instructions on using LCM Log Collection Utility..  PurposeLCM (Life Cycle Manager) Log Collection Utility provides an independent script designed to collect all the important log files necessary to debug any LCM issue.\n\nThe NCC log_collector bundle is significant in size and contains multiple files that are irrelevant to LCM. In contrast, the Log Collection Utility collects only the files that are relevant to the LCM workflow, thereby reducing the log bundle size and speeding up the process of log collection.The utility is available in LCM version 2.2 or later by default. 
It could also be used on earlier versions of LCM.Functionality\n\nCollects LCM log files.Detects if the node is stuck in Phoenix and collects kernel logs.Collects logs for the services with which LCM interacts (Example: Foundation, Prism, etc.).The utility state is independent of such of the LCM hence it is capable of collecting logs even if the framework is down.Parses LCM configurations from Zookeeper nodes.\n\nCaveatsThe script cannot pull logs from the node booted into Phoenix when the node's IP address is not reachable over the network. In that case, apply the respective CVM (Controller VM) IP address to the Phoenix instance using IPMI following KB 5346 before running the script. Once the network is restored, /tmp/phoenix.log should also be collected from phoenix for troubleshooting purposes.Files and information collected\n\nNode level information\n\tLCM leader\n\tLogs from CVM\n\tLCM leader\n\tCommand output from Phoenix\n\tLCM leader\n\t. For LCM 2.3.1 or later (automatic log collection)\n\nLCM 2.3.1 automatically triggers the LCM log collector utility when any LCM operation fails.Log bundle will be created in /home/nutanix/data/log_collector/.For LCM 2.3.1.1 and later, LCM prints Log location along with the failure message as below. Log into the CVM IP to fetch the LCM log bundle and upload it to the Support case for review.\n\nOperation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). \n\nFor LCM 2.2.x or later (manual log collection)\n\nIt can be used to trigger a new log collection.Run the following command from a CVM to begin log collection.\n\nOperation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). \n\nSometimes you might  see below error message :\n\nOperation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). \n\nIf you get the above error, please look for the file python lcm_log_collector.pyc instead and run the command shown below:\n\n\tOperation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). \n\t\n\nLog bundle will be created in /home/nutanix directory on the node from which the script is executed, with the name in the following format:\n\nOperation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, ip: [xx.xx.x.xx] due to Multistage upgrade encountered an error: ('The read operation timed out',). \n\nFor LCM 2.2.x, log bundle will be created in /home/nutanix/data/logs/lcm_logs/.\n\nFor LCM 2.3 and later, the log bundle will be created in /home/nutanix/data/log_collector/. \n\nFor Uploading the logsOnce the logs are collected refer to KB-1294 for different methods to upload the LCM logs to the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: hwclock_check. ### Assistant: The NCC health check hwclock_check verifies hardware clock functionality and any time differences.. The NCC Health Check hwclock_check checks if the hardware clock is healthy. 
It checks the date from the real-time clock (hardware clock) on the hypervisor against the date of the Controller VM (CVM).Time differences in the hardware clock and system clock can cause problems. The hardware clock operational check ensures that the hardware clock is functional.\n\nRunning the NCC CheckYou can run this check as part of the complete NCC Health Checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr you can run this check separately:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 10 minutes, by default.\n\nThis check will generate an alert after 5 consecutive failures across scheduled intervals.\n\nSample output\nFor Status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\nFor Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nDescriptionCheck if the Hardware clock has failedCauses of failureThe RTC clock on the host has failed, or the RTC battery has died.ResolutionsReplace the node. Refer to the Nutanix documentation for instructions.Alert TitleHardware Clock FailureAlert MessageHardware clock in host host_ip has failed.\n\nNote: This hardware-related check executes on the below hardware\n\nNutanix NXDell XCHPELenovo HXNutanix SXFujitsu XFLenovo SRPoweredgeIntelIBMHyveVoyager TDC. This NCC check or alert for the hardware clock can be raised for the following reasons:\nNetwork failure between the NTP server and the host.The system clock and hardware clock on the host are incorrect.Incorrect time on the NTP server.The RTC is powered by the same battery as the CMOS. This battery may have expired, or the RTC clock on the host has failed.\n\nIf this check fails, consider engaging Nutanix Support. Additionally, gather the output of ncc health_checks run_all and attach it to the support case. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about PC Cluster Runway/Scenarios not working on NCC 3.7.1/3.7.1.x. ### Assistant: Due to ENG-225071, cluster runway or new scenarios may stop working when you upgrade to NCC 3.7.1 or 3.7.1.x prior to upgrading PC. You can work around the issue by reinstalling NCC.. Scenario#1 Due to a known issue, Prism Central or Cluster Runway scenarios may stop working. This happens when a customer upgrades to NCC 3.7.1 or NCC 3.7.1.1 in advance before upgrading Prism Central to PC 5.10.3.x. You may notice the following symptoms:\n\nCluster Runway is showing N/a instead of data.Cluster Runway shows, \"We are currently collecting usage data. Cluster usage data will be displayed here in 24 hours.\" for an extended period (longer than 24 hours).\"RPC error kTransportError raised\" is seen when creating a new scenario.\nScenario#2\n\nThe same issue is reported on pc.2022.1, where the Neuron server is continuously crashing, because of which the Capacity planning feature does not work.As per recent instances, the issue may get resolved by upgrading PC to latest version i.e 2022.1.0.2 and a fix for this is also tracked in above mentioned tickets.. Scenario#1:\n\nLong-term fixes for this issue are included in AOS 5.10.4, AOS 5.11, and NCC 3.7.2. Upgrade to AOS 5.10.4, 5.11 or later, and to NCC 3.7.2 or later.A workaround for this issue is to reinstall NCC manually on Prism Central and then wait for 24 hours for data to populate. 
Instructions for this are available on the portal; for example, this link for 5.10 https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v5_18:ncc-ncc-install-t.html.\n\nScenario#2:\n\nThe issue is resolved in pc.2022.9, pc.2022.6.0.1, pc. 2023.1.0.1. Please upgrade to those releases for the fix.\n\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Support during a critical upcoming upgrade/maintenance. ### Assistant: Support during a critical upcoming upgrade/maintenance. Can Nutanix Support join a customer in a call to guide them and provide coverage during critical upcoming activities? Nutanix SREs are often expected to engage in customer remote sessions for installation, upgrades or configuration activities during their maintenance window.\n\nWhile Nutanix Support would always like to be available for our customers, we are not equipped to handle requests related to installation, upgrades or configuration. If we remain on a remote session for the duration of such an activity, then we become unavailable to address critical break/fix issues.\n\nNutanix Support's primary responsibility is addressing and troubleshooting customer break/fix issues, not handling other maintenance activities.. Nutanix prides itself on providing robust tools and documentation to facilitate maintenance activities such as installation, upgrades, etc. We highly recommend leveraging these resources. The Acropolis Upgrade Guide outlines the steps and resources needed for such activities.\n\nUse the Life Cycle Manager (LCM) to upgrade all or some of the software and firmware components. In the event LCM cannot be used, use the Legacy 1-click method from the Web Console.\n\nEnsure that all the prerequisites of maintenance activity are met. It is possible that during the maintenance, you exhaust all the documentation (both KB articles and guides) and still require assistance from a Nutanix SRE. These instances may be, but are not limited to, the following situations:\n\nAn installation/upgrade is stalledAn installation/upgrade completes but one or more nodes do not come upIntermittent service crashesInter/intracluster connectivity issuesNetwork/storage unavailability or instability\n\nIn the event assistance is still needed, the best way to get support is to raise a case via the Support Portal.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A111066 - Failed to send alert Emails. ### Assistant: This Nutanix article provides the information required for troubleshooting the alert: \"Alert E-mail Failure\" for your Nutanix cluster.. Alert Overview\n\nThe Alert E-mail Failure alert can be generated due to:\n\nErrors in email-alerts scriptError while executing email-alerts scriptErrors while creating an email file for the alert\n\nCheck ID111066DescriptionFailed to send alert emails.Causes of failureAlert is raised if a Controller VM is unable to send an alert email.ResolutionsEngage Nutanix Support.ImpactAlerts notification emails will not be sent.Alert TitleAlert E-mail FailureAlert MessagesUnable to send alert E-mail from Controller VM {service_vm_external_ip} due to “E-mail client error”. Capture alert manager service logs, email-alerts.log, and send-email.log from all the CVMs in the cluster. 
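For example (illustrative only; the paths used are the default log locations listed below), recent send attempts can be reviewed quickly across all CVMs before collecting the full bundle:\nnutanix@cvm$ allssh 'tail -n 20 ~/data/logs/send-email.log'\n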
Open case with Nutanix Support and attach the log files to the case.Log files location:\n~/data/logs/alert_manager.out~/data/logs/email-alerts.log~/data/logs/send-email.log \nTo collect these logs using the Logbay tool and upload them to the support case:\n\nlogbay collect -t alerts,email --dst=ftp://nutanix -c <case number>\nFor more information about the log collection tool Logbay, please refer to KB-6691.Note: If the alert persists in Prism UI. Please try disabling and re-enabling in the Health page in Prism.In Prism select Health from menu. To find the alert select the Checks tab on right hand side of page. Then search for \"Alert E-mail Failure\". Click 'Turn Check Off,' then 'Turn Check On' to clear its status.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Frozen I/O State in Microsoft SQL Server During application-consistent snapshots. ### Assistant: MSSQL databases can remain frozen indefinitely during application-consistent snapshots.. \n\n\nDuring the creation of application-consistent snapshots, there is a possibility of encountering a problem that can cause Microsoft SQL Server (MSSQL) databases to become frozen. This issue specifically occurs during VSS-based backups, where the MSSQL databases can remain frozen indefinitely. \n\n\n\nIn MSSQL Logs, the error logs will show as below (The logs can be obtained by following this Microsoft article)\n\n2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.\n\nObtain the Microsoft SQL version the UVM is running to confirm if this KB applies.Method 1:Connect to the instance of SQL Server, and then run the following query:\n\n2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.\n\nThe result of running the query above would be the following:\n\n2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.\n\nMethod 2:Another way of checking the version is by connecting to the instance of SQL Server, and then run the following query in SQL Server Management Studio (SSMS):\n\n2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.\n\nThe result of running the query above would be the following:\n\n2023-05-18 23:48:52.79 spid77 I/O is frozen on database [DATABASE_NAME]. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.. This is a known issue (reported in MS KB-4523102) detected by Microsoft on SQL databases that contains memory-optimized filegroup and VSS based backups. Microsoft has released the following cumulative updates for SQL Server:\n\nCumulative Update 2 for SQL Server 2019 (Build version: 15.0.4013.40)Cumulative Update 19 for SQL Server 2017 (Build version: 14.0.3281.6)Cumulative Update 10 for SQL Server 2016 SP2 (Build version: 13.0.5492.2). ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NDB might get an alert about Nutanix cluster connectivity during a cluster upgrade. ### Assistant: When a software or firmware upgrade is performed on the Nutanix cluster, NDB can generate an alert about the cluster connectivity.. During a software or firmware upgrade on the Nutanix cluster, NDB can generate an alert about the cluster connectivity status:\nHealth check failed for cluster (name:<Cluster_Name>), Please make sure the cluster is in valid state\nThe era_server.log on the NDB server will have the following entry:\nHealth check failed for cluster (name:<Cluster_Name>), Please make sure the cluster is in valid state\n . If the alert is generated during an upgrade activity, it is expected and can be safely ignored as long as the cluster connectivity status goes back to normal when the upgrade is completed.To check the cluster connectivity status: NDB -> Administration -> Nutanix clusters.The reason why it happens is because during most of the upgrades, the CVMs reboot in a rolling manner and the virtual IP address gets re-hosted from the Prism leader CVM to a different CVM. That can take a few seconds, so the virtual IP can be briefly unavailable, which causes the API checks coming from the NDB to fail and it is enough to generate the alert in NDB.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: stale_recovery_points_check. ### Assistant: The NCC health check stale_recovery_points_check checks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted.. The NCC health check stale_recovery_points_check checks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted.It can be run as part of the complete NCC check by running \nnutanix@cvm:~$ ncc health_checks run_all\nor individually as:\n\nnutanix@cvm:~$ ncc health_checks run_all\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every 12 hours, by default.This check will generate an alert after 2 failure.\n\nSample outputFor status: PASS\n\nnutanix@cvm:~$ ncc health_checks run_all\nFor Status: ERROR\n\nnutanix@cvm:~$ ncc health_checks run_all\n \n\nOutput messaging\n\nCheck ID110405DescriptionChecks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted.Causes of failureThe snapshot was deleted without removing the associated stale entries in the Insights database.ResolutionsCheck the running status of the Polaris service.ImpactUpcoming replications might be impacted by the stale remnant entries in the Insights database.\n . If this NCC check fails, engage Nutanix Support.Additionally, gather the following command output and attach it to the support case:  \nnutanix@cvm:~$ ncc health_checks run_all\n . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Lose local attach vDisks after adding a NIC. ### Assistant: After adding the second NIC to Windows Server, non-bootable local attach vDisks disappear from within guest OS. 
After adding the second NIC to Windows Server, non-bootable local attach vDisks disappear from within the guest OS. Only drive C is present. Migrating or rebooting the VM does not solve the issue. . Open Disk Management in Windows and check the disks; they should be marked as Offline.To bring the disks online, right-click each vDisk and click \"Online\". Now you should be able to see and use the vDisks within the guest OS.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about OVA UI may take longer to load due to unreachable Prism Element cluster registered to Prism Central. ### Assistant: When connected clusters are unreachable, the OVA UI may take longer to load. The OVA UI may take longer to load if one or more registered Prism Element clusters are unreachable.\nThe /home/nutanix/data/logs/aplos.out log file on Prism Central will contain the following error:\n\n2023-01-06 02:18:40,257Z ERROR resource.py:231 Traceback (most recent call last):\nYou may also notice the following RPC timeouts in the logs: \n2023-01-06 02:18:40,257Z ERROR resource.py:231 Traceback (most recent call last):\n\n . Prism Central tries connecting to every connected Prism Element cluster to check compatibility. To find the unreachable Prism Element UUID, you can refer to the below logs:\n2023-01-06 01:45:22,964Z WARNING fanout_protobuf_rpc.py:230 Error # 5 sending RPC [500] to http://127.0.0.1:9444/v3/fanout_proxy?remote_cluster_uuid=0005eea4-b61d-36d5-408d-7cc255078170&url_path=remote_rpc_requ\nIf the Prism Element cluster is deleted without properly unregistering, refer to KB 4944 to clean the Prism Element cluster details.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Enabling Jumbo MTU on AHV for UVMs. ### Assistant: This article describes how to enable jumbo MTU on AHV for UVMs.. Nutanix Controller VMs (CVMs) use the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance and stability. Nutanix does not support configuring the MTU on a CVM's network interfaces to higher values.You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of AHV, ESXi, or Hyper-V hosts and user VMs if the applications on your user VMs require them. If you use jumbo frames on hypervisor hosts, enable them end to end in the desired network and consider both the physical and virtual network infrastructure impacted by the change.. PreparationMake sure that the following changes are implemented before making any other changes:\n\nThe physical L2 switches must be configured to support a 9000-byte Ethernet payload and any additional network overhead such as VLAN tags and VXLAN encapsulation. This means the switching infrastructure is often configured to support a maximum MTU of 9216 bytes, larger than the endpoint MTU.Any routed L3 interfaces between networks must also be configured to support the larger MTU.\nConsult the documentation of your network vendor for this configuration. 
Only after the entire physical network is ready to handle jumbo frames between all endpoints can we configure our Nutanix AHV hosts.\n\nCheck the current network configurationTo check the current host network configuration, connect to the CVM you will be making changes on and run the following command:\n\nnutanix@cvm$ manage_ovs show_uplinks\n\nSample output:\n\nnutanix@cvm$ manage_ovs show_uplinks\n\nOne of the main items we want to check at this step is the bond_mode. If balance-slb or balance-tcp (LACP) modes are used, restore this configuration after making the MTU changes.\n\nNext, connect to the AHV host and verify the current MTU value of the physical interfaces. These typically show up as ethX, but may vary depending on the hardware in use.\n\nnutanix@cvm$ manage_ovs show_uplinks\n\n \n\nChange MTU (AOS 5.19 and newer)\n\nRefer to the Prism Web Console Guide for your appropriate AOS version for instructions on how to update your Cluster Virtual Switch's MTU.\n\nLogin Prism > Settings > Network Configuration > Virtual Switch > Edit Virtual Switch\n\t\n\tSet the MTU and Save the Update (MTU on Nutanix Host must be set to a value in 1500 ~ 9000 inclusive range)\n\t\n\t\nChange MTU (AOS 5.18 and older)Perform the following steps on every host in the cluster that needs these changes.\nConnect to the host using the IPMI console and log in to the AHV host as root.Log on to the Controller VM via SSH to the management address.\n\tnutanix@cvm$ manage_ovs show_uplinks\n\tPut the node and CVM in maintenance mode. Refer to Putting a node into maintenance mode chapter from AHV Administration Guide for more details.Run the following command to enable jumbo frames on the interfaces.\n\tnutanix@cvm$ manage_ovs show_uplinks\n\twhere <bond mode> equals one of the following modes: active-backup, balance-slb or balance-tcp. The current bond_mode can be found in the output of the manage_ovs show_uplinks command. If --bond_mode parameter is not specified, then the bond mode will be reset to active-passive, which may affect network connectivity.\tNote: In the above command the syntax \"--mtu 9000\" or \"--mtu=9000\" are both valid.If LACP is used then a few more additional parameters should be added:\n\tnutanix@cvm$ manage_ovs show_uplinks\n\n\tExample:\n\n\tnutanix@cvm$ manage_ovs show_uplinks\n\n\tIf a bridge other than br0 is used for UVMs, the configuration should be similar to the following example:\n\n\tnutanix@cvm$ manage_ovs show_uplinks\n\n\tThe above example applies to configuration with and without backplane network segmentation, as it only changes MTU of physical NICs without touching interface br0, or interface br1-backplane if configured.\t\tNote: --require_link=false is only required if any physical NIC is down while configuring, which you can check with manage_ovs show_interfaces.\n\tWARNINGS:\n\n\tOn clusters running any AOS release between 5.10 and 5.10.3.2 DO NOT run \"update_uplinks\" with a single interface specified as this may impact cluster availability. If you have a node with a single interface in bridge, upgrade AOS to 5.10.4 or newer, or engage Nutanix Support if you want to change the bond configuration.Updating uplinks using manage_ovs will delete and recreate the bond with the default configuration.The use of the allssh manage_ovs update_uplinks command may lead to a cluster outage. 
Only use it if a cluster is not in production and has no user VMs running.\n\tIf balance-slb or balance-tcp (LACP) modes were used, then refer to the following KB articles to restore the configuration. Otherwise, if LACP is not enabled, go directly to step 7:\n\tbalance-slb: KB 5135balance-tcp (LACP): KB 3263\n\tGracefully shut down the Controller VM on this host:\n\tnutanix@cvm$ manage_ovs show_uplinks\n\tReboot the AHV host.\n\tnutanix@cvm$ manage_ovs show_uplinks\n\tOnce the configuration is completed, make sure both host and CVM are accessible via network.If all tests are successfully completed, remove CVM and node from maintenance mode. Refer to Exiting a node from the maintenance mode chapter from AHV Administration Guide for more details.Create a VM network.\n\tnutanix@cvm$ manage_ovs show_uplinks\n\tEnable jumbo MTU inside UVM guest OS.Test the reachability of the jumbo frame between UVMs.\tLinux:\n\tnutanix@cvm$ manage_ovs show_uplinks\n\tWindows:\n\n\tnutanix@cvm$ manage_ovs show_uplinks\n\n\tNote: \"-M\" and \"-f\" are to set df-bit (don't fragment), and 8972 is ICMP data size, which equals 9000B (MTU size) - 20B (IP header) - 8B (ICMP header). There are another 14 bytes/18 bytes for Ethernet header (6B source MAC + 6B destination MAC + 2B ethertype + 4B VLAN tag (VLAN tag is optional)), which are not part of MTU size. Therefore, the maximum frame size allowed after the above configuration is 9014B/9018B (with VLAN tag).\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Open Source Support. ### Assistant: This article describes how Nutanix supports open source repositories.. This article describes how Nutanix supports open source repositories.. OverviewNutanix is supportive of open-source technologies and leverages a community-supported model. Nutanix welcomes pull requests and responds to issues on a best-effort basis. Nutanix maintains public GitHub repositories for initiatives that help customers integrate the Nutanix platform with other third-party products. Support for these initiatives is handled directly via the GitHub repository. Issues and enhancement requests can be submitted in the Issues tab of each repository. Search for and review existing open issues before submitting a new issue.\n\nRepositoriesThe Github repositories locations:\n\nhttps://github.com/nutanixhttps://github.com/nutanix-cloud-native\n\nRepositories include:\n\nAnsible. Nutanix ansible collection nutanix.ncp is the official Nutanix ansible collection to automate Nutanix Cloud Platform (NCP). Location: https://github.com/nutanix/nutanix.ansibleCluster-API provider for Nutanix AHV. The Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration and management. Cluster API Provider for Nutanix Cloud Infrastructure is a concrete implementation of Cluster API for Nutanix Cloud Infrastructure. Location: https://github.com/nutanix-cloud-native/cluster-api-provider-nutanixKubeflow. The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Location: https://www.kubeflow.org/docs/distributions/nutanix/install-kubeflowTerraform. An infrastructure-as-code tool for building, changing and versioning infrastructure safely and efficiently. Location: https://github.com/nutanix/terraform-provider-nutanixPacker Plugin for Nutanix AHV. 
The Nutanix Packer builder is able to create Nutanix AHV virtual machine images and export them. Location: https://github.com/nutanix-cloud-native/packer-plugin-nutanix\n\nThe complete list of repositories:\n\nhttps://github.com/orgs/nutanix/repositorieshttps://github.com/orgs/nutanix-cloud-native/repositories\n\nReporting an IssueTo report an issue:\n\nNavigate to the specific GitHub repository and click on Issues. For example:\n\n\t\n\tClick New issue.\n\t\n\tSelect Bug report or Feature request, then click the corresponding Get started button to the right.\n\t\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: check_cvm_ssh_security. ### Assistant: The NCC health check check_cvm_ssh_security checks whether the cluster uses Nutanix SSH security settings, such as lockdown and SSH security level.. The NCC health check check_cvm_ssh_security checks whether the cluster uses Nutanix SSH security settings, such as lockdown and SSH security level. This check was introduced in NCC 4.6.4.Running the NCC checkYou can run the check as part of the full NCC health check, as shown below.\nnutanix@cvm$ ncc health_checks run_all\nYou can also run this check independently, as shown below.\n\nnutanix@cvm$ ncc health_checks run_all\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check runs on Controller VMs only. It does not run on Prism Central.This check is scheduled to run every 24 hours, by default.This check will raise an NCC alert when a cluster does not use our SSH security settings, such as lockdown and SSH security level.\nSample Pass output:For Status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\nIf the check result is a PASS, Password-based remote login is not enabled. Everything is good.For Status: INFO\n\nnutanix@cvm$ ncc health_checks run_all\nIf the check result is a INFO, Password-based login is enabled. It is recommended to not use password-based ssh login access to the cluster.Output messaging \nCheck ID6219DescriptionCheck the strength of SSH security of the CVM.Causes of failureThe SSH security of the CVM is weak.ResolutionsChange the SSH security setting of the CVM.ImpactThe CVM's security might get compromised.ScheduleThis check is scheduled to run every 24 hours by default.Alert IDA6219Alert TitleThe cluster is using password based ssh access.Alert Smart TitleThe cluster is using password based ssh access for the cvm {cvm_ip} Alert MessageThe cluster is using password based ssh access for the cvm {cvm_ip} . Nutanix recommends using key-based ssh access to the CVM instead of password-based ssh access for better security. Refer to the Controlling Cluster Access section in the AOS Security Guide to set up key-based SSH access.Please perform the following actions to clear the INFO message in NCC:\nClick the gear icon in the main menu and then select Cluster Lockdown in the Settings page.Disable remote login access by unchecking the Enable Remote Login with Password box. Refer Controlling Cluster Access.Click the New Public Key button and add Public Key for SSH access. Refer Controlling Cluster Access. Follow KB 1895 on how to set up password-less SSH Key pair.\nNutanix recommends avoiding ssh access to the CVM appliances as much as possible. 
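As an illustration only (this example is not part of the original article, and the key file name is arbitrary), an RSA key pair suitable for the Cluster Lockdown public-key workflow can be generated on the administrator workstation with standard OpenSSH tooling:\n\nuser@workstation$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/cvm_lockdown_key\n\nuser@workstation$ cat ~/.ssh/cvm_lockdown_key.pub\n\nThe contents of the .pub file are what get added via the New Public Key button mentioned above, while the private key stays on the workstation.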
Key-based SSH access is preferred instead of password-based access in the rare cases where one has to log in to the CVM.Adding an RSA/ECDSA public key will allow the admin and nutanix users to access CVMs and hosts using the ssh key pair.Note: Password-based SSH access will be deprecated in 2024.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Under what conditions do Curator scans run?. ### Assistant: Lists the scenarios in which Curator Scheduled and Triggered scans are run.. There are two main types of Curator scans.\nScheduled Scans\n\tCurator Full ScanCurator Partial Scan\n\tTriggered Scans. Scheduled Scans\nScheduled scans run periodically on the cluster for the general maintenance of the system. There are two types of scheduled scans.\n\n\n\tPartial Scans\n\n\tPartial scans are responsible for performing the following tasks:\n\n\tInformation Life Cycle Management (ILM)Snapshot chain severingDelete data that was marked to_remove but not dedupedCorrect block awarenessDelete or replicate over-replicated or under-replicated data\n\n\tBy default, partial scans run 1 hour (3600 seconds) after the last partial scan completes.\n\t\n\n\n\tFull Scans\n\n\tIn addition to all the Partial scan functions, Full scans also perform on-disk deduplication.\tAs only full scans perform on-disk deduplication, you might have to perform multiple scans to ensure that deduped data is deleted completely. Full scans run on their own timer and run 6 hours (21600 seconds) after the last Full scan completes. When this timer expires while another scan is running, Curator waits for that scan to complete before running another Full scan.\n\t\n\nTriggered Scans\nTriggered scans respond to a situation in the cluster where Curator is urgently required.The most common triggered scans are as follows:\n\nILM - If the hot tier gets full, an ILM scan is triggered to drain some data to the cold tier.Disk Failure - If a disk fails, a scan is triggered to replicate the data that has fewer replicas.Node Failure - This scan is similar to a disk failure scan, but runs to replicate data from all disks that had replicas on the failed node.User - Manually triggered by the SRE team in certain situations. This can be a Full or Partial scan.\n These scans are all partial scans (unless the user manually triggers a full scan). They reset the clock on the next periodic partial scan (or full scan if it is a user-triggered scan).. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A1120 - CuratorJobRunningTooLong. ### Assistant: Investigating CuratorJobRunningTooLong issues on a Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert CuratorJobRunningTooLong for your Nutanix cluster.\nAlert Overview\nThe Curator Job Running Too Long alert A1120 is generated if the active Curator jobs have been running for too long. An alert is raised if a Curator scan runs for more than 21600 seconds (6 hours), and again after every additional 12600 seconds (3.5 hours).
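As a quick reference (this example is not part of the original article, and the page layout varies by AOS version), the Curator leader page can be opened from any CVM with the links text browser to review recent scan types and how long they ran:\n\nnutanix@CVM:~$ links http://0:2010\n\nThe page served on port 2010 lists running and completed Curator MapReduce jobs and can help confirm whether a scan has genuinely been running for more than 6 hours.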
\nSample Alert\n\nFor status: INFO\n\nBlock Serial Number: 19SMXXXXXXXX \n\nOutput messaging\n\nCheck ID130015DescriptionThe Curator job has been running for too long.Causes of failureVariousResolutionsContact Nutanix support if this issue persists.ImpactBackground cluster maintenance tasks might get affected in the future. The root cause should be addressed as soon as possible.Alert IDA1120Alert TitleCurator Job Running Too LongAlert MessageCurator job name with id execution_id has been running for a long time i.e. elapsed_time_secs seconds.. Troubleshooting\n\nCurator scan times depend on the cluster size (number of nodes), global metadata size, and metadata scan speed in the Map phase of the Map/Reduce job, among other things.For more information on local vs global metadata, see the Scalable metadata chapter in the Nutanix Bible.\n\nIn order to check metadata size, run the following command: \n\nnutanix@CVM:~$ nodetool -h 0 ring\n\nExample output:\n\nnutanix@CVM:~$ nodetool -h 0 ring\n\nOne possible reason for high metadata usage can be using the Deduplication feature for workloads not suitable for it. For more information on Deduplication Best Practices, see the corresponding chapter in Prism Web Console Guide.Another reason for long-running Curator Jobs, or failing Curator scans, might be the RPO settings on clusters with dense nodes.Check both the frequency and retention policy for the snapshots, if data protection features are in use:\n\n1 hour vs 6 hours vs 24 hours RPO; for 1 hour RPO, if supported, the 12/1/1/1 retention should not be exceeded. \nIf the cluster contains dense nodes, run the NCC dense_node_configuration_checks and refer to KB 7196 for more details.\n\nResolving the issue\n\nIf you need assistance or the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support.Collect additional information and attach it to the support case.\n\nCollecting Additional Information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691. \n\tnutanix@CVM:~$ nodetool -h 0 ring\n\t\n\nAttaching Files to the Case\n\nWhen viewing the support case on the support portal, use the Reply option and upload the files from there.If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Upload of images larger than 16 GB on AOS 5.20.2 or newer failing with \"Image upload stream was interrupted\" error. ### Assistant: Upload of images larger than 16 GB on AOS 5.20.2 or newer failing with \"Image upload stream was interrupted\" error. Image uploads using Prism Element may fail with an \"Image upload stream was interrupted\" error if all of the following conditions apply:\nCluster is running AOS 5.20.2 or newer.The image file size is larger than 16 GB.\nThe following stack trace may be found in /home/nutanix/data/logs/prism_gateway.log:\n\nERROR 2021-12-09 15:53:53,207Z http-nio-127.0.0.1-9081-exec-7 [] commands.acropolis.UploadImage.doExecute:405 java.net.SocketTimeoutException\n . 
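To confirm this signature (an illustrative check, not from the original article), the exception can be counted across the CVMs using the log path shown above:\n\nnutanix@cvm$ allssh 'grep -c SocketTimeoutException /home/nutanix/data/logs/prism_gateway.log'\n\nA non-zero count on the CVM serving Prism, together with a failed upload of an image larger than 16 GB, matches the condition described in this article.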
This issue is resolved in:\nAOS 5.20.X family (LTS): AOS 5.20.4AOS 6.5.X family (LTS): AOS 6.5.1\nPlease upgrade AOS to versions specified above or newer.WorkaroundUse From URL method.\nPrepare a web server and share the image file via HTTP protocol.Select \"From URL\" as \"Image Source\" of \"Image Configuration\" and specify the URL to a disk file.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Configuration Steps for Okta Integration with Prism Central. ### Assistant: Okta is a supported Identity Provider that can be integrated with Prism Central for user authentication. This KB documents the steps required to configure Okta integration from both the PC and Okta Administration side.. Nutanix supports several different Identity Providers for integration with Prism Central (PC) including:\n\nActive Directory Federated Services (ADFS)Azure ADFSPingOneShibbolethKeycloakOkta\n\nThis document details specifically what needs to be configured on both the Prism Central and Okta Administration side in order to successfully integrate both for user authentication.. From the Okta Administration side:\n\nCreate a new SAML 2.0 application\n\t\n\tConfigure the following SAML settings:\n\tSingle Sign On URL: https://<PC_IP>:9440/api/iam/authn/callback\tAudience URI (SP Entity ID): https://<PC_IP>:9440/api/iam/authn\t\tNOTE:\tWhether to use the FQDN or PC IP address for above configuration, can be decided by checking the metadata config file from following Prism Page\t\t\tCheck the following entityID and Location fields in metadata xml from PC, highlighted in following screenshot.\tAudience URI should match with entityID URI and Single Sign On URL should match with Location URL in okta settings.\t\t\t\t\tAttribute Statements:\n\n\t\n\n\tNote: If you intend to assign Groups to the PC Application, then configure the Group Attribute Statement as follows:\n\n\t\n\tAssign Users and/or Groups to the PC Application in Okta\n\t\n\tView the Identity Provider metadata from the PC SAML application and save it to your local machine as an .xml file.\n\t\t\t\t\n\t\n\nFrom the Prism Central side:\n\nThe Microservices Infrastructure (CMSP) needs to be enabled from the Prism Central Settings -> Manage Prism Central -> Prism Central on Microservices Infrastructure\n\t\n\n\tMore detailed information on Microservices enablement can be found on the Nutanix Support Portal at the following link: https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-cmsp-enable-pc-t.html.\n\tThe Okta Identity Provider can be added from Prism Central Settings > Authentication > Identity Providers\n\tNote: If you intend to perform Group Role Mapping in PC, then configure the Group Attribute Name as follows:\n\n\t\t\n\tAssign individual user role mappings from Prism Central Settings > Role Mapping or group role mappings from Administration > Roles\n\tNote: The format of the username must match Okta's expected \"Application username format\"\n\n\t\n\t\n\nThe user can now authenticate to Prism Central with Okta credentials.\n\nUsing Okta Native SAML group:We can also use okta SAML group and assign the required users to this group to enable okta authentication for them.For this we need to do following additional configuration both on Okta and Nutanix side.Configuration from Okta side:1. 
In the Okta Admin dashboard, go to Directory → GroupsClick on \"Add Group\" to create a new group and name it.2. Assign people to that groupAfter assigning required people click on \"Done\"3. Then go the \"PC application\" on okta side and assign that application to this group.4. Under the Sign On Section of the app, following values need to be given for group attributes.And save the changes for the application settings.Configuration from PC side:1. Modify IDP configuration in PC to specify the 'Group' attribute2. Navigate on PC to Administration → RolesYou can create a custom role or just use one of the built-in ones.3. Then go to \"manage assignment\" option of that role and create select Okta user_group(SAML) from drop down option.NOTE: Make sure you give exact name for the group as given on okta side, the SAML user/group names does not get auto-populated as PC does not have Okta credentials to query and search user/group from Okta.4. Then you should see the okta group listed under \"user_group\" in this role information.After completing all the above configuration any user assigned to the group on Okta side should be able to login to PC using Okta.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Register for my.nutanix.com and Support Portal access. ### Assistant: Registering for my.nutanix.com Portal account and how to gain access to the Support Portal. The following article describes how to register for the my.nutanix.com Portal and access the Support Portal.. Please perform the following procedure to register for the my.nutanix.com Portal. Once successful you can then perform the additional steps to access the Support Portal. \nIn the address bar of a web browser, type: https://my.nutanix.com and press EnterClick Sign up nowType the values of First Name, Last Name, Email, Password in the fields and click Submit\tNote: Follow the specified password complexity requirements when you are creating the password\tA confirmation page will be displayed and you will receive an email from mynutanix@nutanix.com after you have successfully completed the registration process. The following is an example of the email:\n\nHi First Name,\n\nClick the link provided in the email to confirm the registration process. A message briefly appears in the browser confirming your email address and the my.nutanix.com Portal opens in the browser.Type the Email and Password that you used to register and click the arrow in the Password field. The my.nutanix.com Portal Welcome Back page appears.\nFor access to the Support Portal, continue with the following steps:\n\nSelect Support Portal and click the arrow in the Support Portal dialog box. Note: You must first activate the account before you can browse the Support Portal and create support casesIn the Activation Required dialog box that appears, enter the Business Email and a valid Nutanix Serial Number or Service Tag and click Activate. The Activation Complete! screen appears after you have successfully completed the activationClick the box next to I have read and agreed to the above terms and conditions to accept the terms. 
The Support Portal page will appear\nIf you are still experiencing access issues after following the steps above please contact portal-accounts@nutanix.com.Note: For access to the Partner Portal, there are 2 ways.\n\nif your account does not have any assets or SW registration ID\n\nGo to https://www.nutanix.com/partner-signup Fill out the Request Portal Access form and submit\n\nIf your account  has assets or SW registration ID\n\nClick Activate on the Partner Portal tileFill out the Request Portal Access form and submit\nRefer to the Partner team for any assistance with Partner Portal access. For help please send an email to partnerhelp@nutanix.com. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: ilo_bootdrive_raid_check. ### Assistant: NCC 3.10.0. The NCC health check ilo_bootdrive_raid_check checks the iLO boot drive RAID status.. The NCC health check ilo_bootdrive_raid_check checks the iLO boot drive RAID status.\n\nThis check does the following:\n\nCheck the logical drive status.If the status is not okay, check available number of physical drives.If only one physical drive is available, generate a WARN alert with the working drive's information. The non-working drive can then be easily found and replaced.If both physical drives are available, check their health. If health is not okay, generate a WARN alert with the failed drive's information. The failed drive can then be replaced.If the drives are healthy, this check returns PASS.\n\nThis check runs only on the following:\n\niLO-based nodesAOS 5.11.1.5, 5.11.2, 5.16 or higherWith RAID boot drive\n\nRunning the NCC checkIt can be run as part of the complete NCC check by running:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr individually as:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is scheduled to run every hour by default.\n\nThis check will generate an alert after 1 failure.\n\nSample output\n\nFor status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\nFor status: ERR\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID106066DescriptionChecks the status of HPE Integrated Lights-Out(iLO) boot drive.Causes of failureHypervisor RAID Boot Device is not reachable or is faultyResolutionsCheck the status of the RAID device. Refer KB-9064 to resolve this issue. If the problem persists, please engage Nutanix Support.ImpactCluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert Smart TitleHypervisor RAID Boot Device failed on ip_address serial number SerialNumberAlert TitleHypervisor boot drive RAID status check.Alert MessageWorking Boot Drive Info: LogicalDriveNumber: ldrive, SerialNumber: serialnumber in Bay: drive_bay, Model: model on Host IP x.x.x.x\t\t\tFailed Boot Drive Info: LogicalDriveNumber: ldrive, SerialNumber: serialnumber in Bay: drive_bay, Model: model on Host IP x.x.x.x\nNote: This hardware-related check executes on the below hardware: HPE. . 
Refer to the Hypervisor Boot Drive Replacement procedure (given below) and follow the instructions mentioned there.\n\nDX4200 Gen10 24LFF - Hypervisor Boot Drive Replacement (Failed Boot Device) for DX4200 Gen10 24LFF with Mirrored RAID Hypervisor Boot DrivesDL380 Gen10 - Hypervisor Boot Drive Replacement (Failed Boot Device) for DL380 Gen10 with Mirrored RAID Hypervisor Boot DrivesDX NS204i - Hypervisor Boot Device Replacement (Failed Boot Device) for HPE DX NS204i Boot Device\n\nReplacement drives can be obtained from HPE.HPE NS204i-p NVMe OS Boot Device may intermittently report a Warning when the fan speed is low. In this case, increasing the fan speed to cool down the NVMe temperature resolves the WARN.HPE ProLiant DL560 Gen10, HPE ProLiant DL580 Gen10, HPE ProLiant DL360 Gen10, HPE ProLiant DL380 Gen10, and HPE ProLiant DX380 Gen10 Plus - Intermittent Server Shutdown Due to Thermal Issues with HPE NS204i-p NVMe OS Boot DeviceThe status ERR for hpe_bootdrive_raid_check is a false positive and is resolved after upgrading NCC to 4.2.0 or later.When the status ERR for ilo_bootdrive_raid_check \"ILO boot drive RAID info Not avaliable\" happens, please reset the iLO on the node and try the check again. . ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about \"Cluster Memory Usage\" widget may report inconsistent values on AHV cluster. ### Assistant: On the AHV cluster running AOS 5.15.1 or newer, \"Cluster Memory Usage\" widget may report inconsistent values.. On the AHV cluster running AOS version 5.15.1 or newer, \"Cluster Memory Usage\" widget may report inconsistent values.\n\nWithout an actual change in memory usage on a cluster, you may see the \"Cluster Memory Usage\" widget showing inconsistent values:\n\n. This issue is resolved in:\n\nAOS 5.15.X family (LTS): AOS 5.15.4AOS 5.19.X family (STS): AOS 5.19.1\n\nUpgrade AOS to the version specified above or newer.\n\nIf upgrading to the previously mentioned versions doesn't resolve your issue, consider opening a case with Nutanix Support. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A160180 - PC is not AZ paired. ### Assistant: Troubleshooting and resolving alert \"Remote file server of data protection policy is registered to a Prism Central which is not in an Availability Zone pair\".. This Nutanix article provides the information required for troubleshooting the alert PC is not AZ paired for your Nutanix Files cluster.\nAlert overview\n\nThe PC-is-not-AZ-paired alert is generated when a newly registered file server participates in an existing data protection policy whose remote file server is registered to a Prism Central that is not in an Availability Zone pair with the local Prism Central.\n\nSample alert\n\nBlock Serial Number: 23SMXXXXXXXX\nOutput messaging\n\nCheck ID160180DescriptionNewly Registered File Server is a Participant in an Existing Data Protection Policy with a Remote File Server Registered to a Prism Central, which isn't in an Availability Zone Pair with the Local Prism CentralCause of failureThe newly registered file server is part of an existing data protection policy with a remote file server registered to Prism Central, which isn't in an Availability Zone Pair with the Local Prism CentralResolutionsAdd an Availability Zone pair between local Prism Central and Prism Central of the remote file server.
Alternatively, you can unregister the newly registered Prism Element and register it to another Prism Central in an Availability Zone pair with the Prism Central of the Remote File Server. If you have any issues, please refer to KB article 14865.ImpactThe remote file server is not visible on the local Prism Central. This means that the data protection policy cannot be managed appropriately. Also, DR workflow cannot be accomplished.Alert IDA160180 Alert TitleRemote file server of data protection policy is registered to a Prism Central which is not in an Availability Zone pairAlert MessageRemote file server {remote_fs_name} for data protection policy {policy_uuid} with file server {fs_name} is registered to Prism Central {remote_pc_name} ({remote_pc_uuid}), which is not in an Availability Zone pair with the local Prism Central\n . Resolving the issueThis alert ensures that Availability Zones are maintained through File Server Migrations. If a File Server, or a File Server Remote DR site, moves to a new Prism Central instance, and the new Prism Central is not included in the Availability Zones, this alert will trigger. Add an Availability Zone pair between local Prism Central and Prism Central of the remote file server. Alternatively, you can unregister the newly registered Prism Element and register it to another Prism Central in an Availability Zone pair with the Prism Central of the Remote File Server.  For information about Availability Zones and their management, refer to the Nutanix Disaster Recovery Guide.\n\nIf you need assistance or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support. Collect additional information and attach them to the support case.\n\nCollecting additional information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094.\n\tCVM logs stored in ~/data/logs/minerva_cvm*NVM logs stored within the NVM at ~/data/logs/minerva*To collect the file server logs, run the following command from the CVM, ideally run this on the Minerva leader as there were issues seen otherwise, to get the Minerva leader IP, on the AOS cluster, run\n\t\tnutanix@cvm:~$ afs info.get_leader\n\t\t\t\tOnce you are on the Minerva leader CVM, run:\n\t\tnutanix@cvm:~$ afs info.get_leader\n\n\t\tFor example:\n\n\t\tnutanix@cvm:~$ afs info.get_leader\n\t\t\n\t\n\nAttaching files to the caseTo attach files to the case, follow KB 1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A160171 - FileServerTieredFileRestoreFailed. ### Assistant: Investigating FileServerTieredFileRestoreFailed issues on a Nutanix cluster.. This Nutanix article provides the information required for troubleshooting the alert FileServerTieredFileRestoreFailed for your Nutanix cluster.\n\nAfter deleting a file server or a share, there could be files tiered referring to objects in the object-store. 
These objects could become stale if there are no references to them.\n\nNutanix recommends expiring these objects after the retention period configured for the object store profile.\n\nAlert Overview\n\nThe FileServerTieredFileRestoreFailed alert is generated on share delete and FS delete operation.\n\nSample Alert\n\nBlock Serial Number: 16SMXXXXXXXX\n\nOutput messaging\n\nCheck ID160171DescriptionTiered File Restore Failed.Causes of failureTiered file restore failed unexpectedly.ResolutionsRefer to KB article 14507. Contact Nutanix support if issue still persists or assistance needed.ImpactTiered file is in inconsistent state.Alert IDA160171Alert TitleTiered File Restore FailedAlert Message{message]. Troubleshooting\n\nThe message from the alert will provide one of two reasons for failing to restore the file.\n\nScenario 1: \"No object store configuration was found for profile.\" This means the profile was somehow deleted or not accessible.Scenario 2: When object was not found because it was deleted manually from object store or it was removed from object store as part of garbage cleaning.\n\nResolving the issue\n\nScenario 1: Contact Nutanix Support for assistance. See \"Collecting Additional Information\" below.Scenario 2: Attempt to undo the share-restore and try again with a newer snapshot.\n\nCollecting additional information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094.\n\tCVM logs stored in ~/data/logs/minerva_cvm*NVM logs stored within the NVM at ~/data/logs/minerva*To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader as there were issues seen otherwise. To get the Minerva leader IP on the AOS cluster, run:\n\t\tnutanix@CVM$ afs info.get_leader\n\t\tOnce you are on the Minerva leader CVM, run:\n\n\t\tnutanix@CVM$ afs info.get_leader\n\n\t\tFor example:\n\n\t\tnutanix@CVM$ afs info.get_leader\n\t\t\n\t\n\nAttaching files to the case\nTo attach files to the case, follow KB 1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A160170 - FileServerCAShareConfigCheck. ### Assistant: Investigating FileServerCAShareConfigCheck issues on a Nutanix cluster.. 
This Nutanix article provides the information required for troubleshooting the alert FileServerCAShareConfigCheck for your Nutanix Files cluster.\n\nAlert overview\n\nThe FileServerCAShareConfigCheck alert is generated if Continuous Availability (CA) is enabled for SMB standard share or nested share.\n\nSample alert\n\nBlock Serial Number: 16SMXXXXXXXX\n\nOutput messaging\n\nCheck ID160170DescriptionCheck if CA is configured on SMB standard or nested share on a File Server.Cause of failureContinous Availability is enabled for SMB standard share or nested shareResolutionsDisable Continous Availability feature on SMB standard or nested sharesImpactEnabling Continous Availability on an SMB standard / nested share could result in performance issues.Alert IDA160170Alert TitleFile Server CA Share Config CheckAlert MessageMisconfig of Continous Availability detected on a File Server SMB standard share / nested share.. Use of Continuous Availability (CA) should be limited to distributed shares. If it is enabled on Standard/General shares, performance might be impacted.\n\nTroubleshootingCheck to see if any shares have CA enabled and if they are Standard (General) shares.\n\nnutanix@FSVM$ afs share.list|grep 'Continuous\\|Share type\\|Share path\\|Share name'\n\nResolving the issue\n\nIf you have any Standard shares with CA enabled, disable CA using:\n\nnutanix@FSVM$ afs share.list|grep 'Continuous\\|Share type\\|Share path\\|Share name'\n\nIf there are any concerns, or additional assistance is needed, contact Nutanix Support.\n\nCollecting additional information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094.\n\tCVM logs stored in ~/data/logs/minerva_cvm*NVM logs stored within the NVM at ~/data/logs/minerva*To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader as there were issues seen otherwise. To get the Minerva leader IP on the AOS cluster, run:\n\t\tnutanix@FSVM$ afs share.list|grep 'Continuous\\|Share type\\|Share path\\|Share name'\n\t\tOnce you are on the Minerva leader CVM, run:\n\n\t\tnutanix@FSVM$ afs share.list|grep 'Continuous\\|Share type\\|Share path\\|Share name'\n\n\t\tFor example:\n\n\t\tnutanix@FSVM$ afs share.list|grep 'Continuous\\|Share type\\|Share path\\|Share name'\n\t\t\n\t\n\nAttaching files to the case\n\nTo attach files to the case, follow KB 1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.\n\nRequesting assistanceIf you need assistance from Nutanix Support, add a comment in the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers. You can also press the Escalate button in the case and explain the urgency in the comment, and then Nutanix Support will be in contact.\n\nClosing the caseIf this KB resolves your issue and you want to close the case, click the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. 
In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: check_cvm_panics. ### Assistant: The NCC health check check_cvm_panics checks the CVM serial console log for panics.. The NCC health check check_cvm_panics checks the CVM (Controller VM) serial console log for panics and warns the user if the CVM has rebooted due to a panic recently. If a CVM panics, the panic output from the kernel is captured in the serial console log in the host in the NutanixCvmConsole event log.\n\nRunning the NCC check\n\nThis check will run as part of the full NCC health check:\n\nnutanix@cvm$ ncc health_checks run_all\n\nIt can also be run as a standalone check:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check is not scheduled to run on an interval.\n\nThis check does not generate an alert.\n\nSample output\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID106469DescriptionCheck if any CVMs have panicked.Causes of failureAt least one CVM panicked recently.ResolutionsFollow the instructions in KB 4379.ImpactStorage may have been unavailable while the CVM was restarting.. Determine current cluster state.\n\nnutanix@CVM:~$ cs | grep -v UP\n\nSSH to cluster and run confirm uptime.\n\nnutanix@CVM:~$ cs | grep -v UP\n\nSSH to the CVM which generated the alert and run the command last reboot to find the precise time of the panic.This alert references a brief event, rather than an ongoing state.  After this alert is triggered, the cluster usually returns to a healthy state.If the check fails, and the cluster is NOT in a healthy state (i.e. CVM down, services down, etc), contact Nutanix Support for additional diagnosis.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A160176 - ConfigChangeNotifyFailed. ### Assistant: Investigating ConfigChangeNotifyFailed issues on a Nutanix cluster. This Nutanix article provides the information required for troubleshooting the alert ConfigChangeNotifyFailed for your Nutanix Files cluster.\"\nAlert Overview\n\nThe ConfigChangeNotifyFailed alert is generated when there is a discrepancy between the File Servers and Prism Central configuration. \n\nSample Alert\n\nBlock Serial Number: 23SMXXXXXXXX\nOutput messaging\n\nCheck ID160176DescriptionFailed To Notify Config Change To File ServerCause of failureFile Server is down or not reachable while config change happens on PC/PEResolutionsCheck if File server is up and all the services are running. If File server is up, check network connectivity between File Server and PC/PE controller VM. If the failure persists, then refer to KB article 14745 to push the config change using afs cli.ImpactNAAlert IDA160176Alert TitleFailed To Notify Config Change To File ServerAlert MessageFailed to generate PCConfigChangeNotifyFailure alert due to 'reason'.. Troubleshooting\nCheck if the Files Server is up.\n\nnutanix@NTNX-A-CVM:~$ ncli fs ls | grep \"File server status\"\n\nCheck if all service are up and running on the FSVMs.\n\nnutanix@NTNX-A-CVM:~$ ncli fs ls | grep \"File server status\"\n\nCheck the connectivity between Prism Central and Prism Element, including the various required ports.  
This information can be found on the Nutanix Portal.Run NCC on Prism Element to confirm that no \"Prism Central is unreachable\" alerts are raised. \n\nResolving the Issue\n\nIf Prism Central is reachable you can push the Files configuration to Prism Central to resolve this issue.For syncing config changes from Prism Element to Prism Central\n\nnutanix@NTNX-A-CVM:~$ ncli fs ls | grep \"File server status\"\nFor syncing changes from Prism Element to Files\n\nnutanix@NTNX-A-CVM:~$ ncli fs ls | grep \"File server status\"\n \n\nCollecting Additional Information\n\nAttaching Files to the Case\n\nIf this alert automatically generates a support case, add the following sections:\n\n\"If you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers. You can also click the Escalate button in the case and explain the urgency in the comment, and Nutanix Support will be in contact.\"\n\n\"If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.\"\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094.\n\tCVM logs stored in ~/data/logs/minerva_cvm*\n\t\tNVM logs stored within the NVM at ~/data/logs/minerva*\n\t\t\n\t\tTo collect the file server logs, run the following command from the CVM, ideally run this on the Minerva leader as there were issues seen otherwise, to get the Minerva leader IP, on the AOS cluster, run\n\n\t\tnutanix@NTNX-A-CVM:~$ ncli fs ls | grep \"File server status\"\n\n\t\tOnce you are on the Minerva leader CVM, run:\n\t\t\tnutanix@NTNX-A-CVM:~$ ncli fs ls | grep \"File server status\"\n\n\t\t\tFor example:\n\n\t\t\tnutanix@NTNX-A-CVM:~$ ncli fs ls | grep \"File server status\"\n\t\t\t\n\t\t\n\tTo attach files to the case, follow KB 1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.\n\nRequesting Assistance\n\nIf you need assistance from Nutanix Support, add a comment in the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers. You can also press the Escalate button in the case and explain the urgency in the comment, and then Nutanix Support will be in contact.\n\nClosing the Case\n\nIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case and Nutanix Support will close the case.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about cpupower frequency-info loads acpi-cpufreq driver instead of intel_pstate. 
### Assistant: This article describes an issue where the command cpupower frequency-info loads acpi-cpufreq driver instead of intel_pstate.. This issue affects the following combination of components:\n\nHardware: Intel Sapphire Rapids-based (SPR) nodes, including NX-G9; AHV: 5.10.170-2.el7.nutanix.20220304.420.x86_64; AOS: 6.5.3.1\n\nSymptoms\nIn BIOS, the recommended setting for Hardware P-States is Disable by default on G9 SPR nodes.\n\n\n\nIn this case, cpupower frequency-info would use the acpi-cpufreq driver instead of the expected intel_pstate.\n\n[root@host]# cpupower frequency-info\n\nThe turbostat command will continue to work when explicitly run.\n\nThis issue is caused by the intel_pstate driver being unable to retrieve CPU frequency values on SPR nodes. Resolving it requires support for the SPR CPU set in AHV.. cpupower frequency-info using the acpi-cpufreq driver is a known behavior on SPR nodes.\n\nNutanix Engineering is aware of this behavior and is working on a fix to use the intel_pstate driver in a future release.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Objects - Manual steps to configure emails for Objects Alerts. ### Assistant: Below are the steps to configure email notifications for Objects Alerts. Currently, Prism Central does not show Objects Alerts and one can use the following workaround until Objects Alerts are integrated with Prism Central.. Customers using Nutanix Objects no longer receive notifications for Objects Alerts on Prism Central. They have to launch the Objects page from Prism Central and navigate to the Alerts page to view and take appropriate action based on alert severity. Below are steps to enable email notifications for Objects Alerts until they are integrated with Prism Central in future releases.Note: Review the steps thoroughly before applying the workaround. If you have questions or concerns, contact Nutanix Support for assistance.. To set up email notifications, update the Alertmanager configuration presented to the Alertmanager Pod(s) as a Kubernetes secret on the MSP cluster. Refer to KB-8170 for steps to access the Objects cluster.Overall steps to configure email notifications:\nUpdate the Alertmanager configuration with an appropriate template.Base64 encode the updated Alertmanager configuration.Update the YAML template containing the Alertmanager secret using the Base64 encoded value of the new configuration.Apply the secret on the MSP cluster.\n\nUpdate the Alertmanager configuration with an appropriate template\tRefer to the below YAML file and change the highlighted parameters as desired. The highlighted parameters will vary based on your environment. Save it as alertmanager.yaml.\n\n\tglobal:\n\tBase64 encode the alertmanager.yaml\n\tglobal:\n\tGenerate secret from the YAML. Refer to the below YAML file and replace the highlighted parameter with the base64 encoded string.
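The original command for this step is not reproduced in this article, but as a sketch (assuming alertmanager.yaml is in the current working directory of a Linux workstation), the encoded string can be produced with:\n\nuser@workstation$ base64 -w 0 alertmanager.yaml\n\nThe -w 0 option disables line wrapping so the output can be pasted as a single value into the secret template.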
Save it as alertmanager-secret.yaml.\n\tglobal:\n\tUpdate the secret.\tCopy over the alertmanager-secret.yaml to the MSP master and apply the configuration using the kubectl command:\n\tglobal:\n\tVerify if template is applied successfully to alertmanager container:\n\tglobal:\n\n\tglobal:\n\t\tAfter applying the secret successfully, you should start receiving emails for Objects Alerts based on template configuration.\t\t\t         Fig 1: Email notification for Objects Alert\t Check if the emails can be sent: \tReview alertmanager logs to see if there are any issues sending emails:\n\tglobal:\n\t\tIf there are issues similar to the above, test sending email to the SMTP server manually using telnet or netcat as described in KB 1638.\t\tYou can compare sending emails from PC or PE cluster if they are registered with the same SMTP server.\nTemplate examplesBelow are a few examples of templates that can be used to configure email notifications based on your requirements. Refer Prometheus documentation for more details.\n\nReceive an email for custom Alerts: \n\tglobal:\n\tReceive customized email with formatting:\n\tglobal:\n\tReceive all email for all except few Alerts:\n\tglobal:\n\tReceive email for all Alerts:\n\tglobal:\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Files upgrades and FSM requirements. ### Assistant: Files requires upgrades of FSM to minimum versions.. Nutanix Files requires a component in AOS known as Files Server Module (FSM). The version of FSM required varies depending on the desired version of Files. Normally, this requirement is enforced during a Nutanix Files upgrade via Life Cycle Manager (LCM) or 1-click. This requirement can be circumvented if Nutanix Files 4.0.0.2 or higher is downloaded from the Nutanix Portal, and then uploaded manually to the Controller VMs (CVMs) for upgrade.. Nutanix Files 4.0.0.2 requires a minimum FSM version 2.1.1. Upgrade FSM to version 2.1.1 and above prior to upgrading.\n\nNutanix encourages the use of LCM for Files upgrades, which will enforce the proper FSM requirement.\n\nTo determine the version of FSM currently running, check LCM in Prism Element.\n\n\n\nShould FSM show a version of 2.1.0 or earlier, then LCM is required to update FSM.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about Alert - A160185 - File Server Found Stale Reverse Data Protection Policy During Failover. ### Assistant: Alert for handling Stale Replication policies. This Nutanix article provides the information required for troubleshooting the alert Stale File Server DB entries found for your Nutanix Files cluster.Alert Overview\n\nIf entries in the File Server database are found to be stale, it raises an alert to have the entries verified and cleaned. \n\nSample Alert\n\nBlock Serial Number: 23SMXXXXXXXX\n\nOutput messaging\n\nCheck ID160185DescriptionChecks if stale DB File server entry is present or notCauses of failureFile Server DB entries are found to be stale.ResolutionsFor additional detail, please refer to KB-15033.ImpactFile Server share access could be potentially impactedAlert IDA160185Alert TitleStale File Server DB entries foundAlert Smart TitleFile Server {file_server_name} has stale DB entriesAlert Message{alert_msg}. 
Troubleshooting    Should this alert trigger, a stale File Server Database entry has been detected.  There is no additional troubleshooting to be done.  Please proceed with the resolution.  \n\nResolving the Issue\n\nPerform failback:  From the target PC (Prism Central to which the target File Server of the protection policy is registered), click on Files ->Data Protection -> Protected File Servers -> FailbackCheck whether or not the stale reverse replication policy (whose UUID is given in the alert) is still listed in the Prism Central UI (Files -> Data Protection -> Policies)If it is still present, delete itPerform failover again\n\nShould the repeated failover have any issues, or if you have any additional concerns, please contact Nutanix Support\n\nCollecting Additional Information\n\nBefore collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871.Run a complete NCC health_check on the cluster. See KB 2871.Collect Files related logs. For more information on Logbay, see KB-3094.\n\tCVM logs stored in ~/data/logs/minerva_cvm*\n\t\tNVM logs stored within the NVM at ~/data/logs/minerva*\n\t\t\n\t\tTo collect the file server logs, run the following command from the CVM, ideally run this on the Minerva leader as there were issues seen otherwise, to get the Minerva leader IP, on the AOS cluster, run\n\n\t\tnutanix@cvm:~$ afs info.get_leader\n\n\t\tOnce you are on the Minerva leader CVM, run:\n\t\t\tnutanix@cvm:~$ afs info.get_leader\n\n\t\t\tFor example:\n\n\t\t\tnutanix@cvm:~$ afs info.get_leader\n\t\t\t\n\t\t\n\t\n\nAttaching Files to the Case\n\nTo attach files to the case, follow KB 1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: notifications_dropped_check. ### Assistant: The NCC Health Check notifications_dropped_check identifies if any alert notifications from any system components have been dropped by Cluster Health and/or Alert Manager due to various possible downstream service or configuration issues. The check aims to proactively expose when alerts can not be delivered successfully and thus a genuine issue on the cluster that requires attention may otherwise go unnoticed.. The NCC Health Check notifications_dropped_check identifies if any alert notifications from any system components have been dropped by Cluster Health and/or Alert Manager due to various possible downstream service or configuration issues within the last 24 hours. The check aims to proactively expose when alerts can not be delivered successfully and thus, a genuine problem on the cluster that requires attention may otherwise go unnoticed.When the queue of alert notifications waiting to be delivered becomes >= 1000, it may suggest that services such as cluster_health and alert_manager cannot process and deliver the alerts, preventing emails, SNMP traps, and Rsyslog messages from being sent. Notifications that arrive when the queue depth is already at 1000 are dropped, and an email notification is sent to the cluster's configured alert email address(es) to inform that alerts and notifications were dropped.\n\nTwo reasons why notifications might be dropping : \n\nAlert Manager service is down.The RPC queue is full.  
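If the first of these is suspected (an illustrative check, reusing the pattern shown elsewhere in this document rather than a command from this article), services that are not running can be spotted from any CVM by filtering the cluster status output:\n\nnutanix@CVM:~$ cluster status | grep -v UP\n\nAny CVM or service appearing in that output, other than the header lines, is not in the UP state and should be investigated before re-running the check.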
\n\nRunning the NCC CheckRun this check as part of the complete NCC Health Checks.\n\nnutanix@CVM:~$ ncc health_checks run_all\n\nOr run this check separately:\n\nnutanix@CVM:~$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nThis check does not generate an alert.\n\nSample outputFor Status: PASS\n\nnutanix@CVM:~$ ncc health_checks run_all\n\nFor Status: FAIL\n\nnutanix@CVM:~$ ncc health_checks run_all\n\nFor Status: INFO\n\nnutanix@CVM:~$ ncc health_checks run_all\n\nOr:\n\nnutanix@CVM:~$ ncc health_checks run_all\n\nExample email:\n\nnutanix@CVM:~$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID111052DescriptionSome alert notifications were droppedCauses of failureOne or more cluster services are not healthy.ResolutionsContact Nutanix Support.ScheduleThis check is scheduled to run every day by default.\nNote: This NCC check has multiple enhancements in NCC v4.1.0 - Please upgrade your cluster to NCC v4.1.0 or above and re-run the check.. If this alert is seen and/or emails are received, ensure the NCC is up to date and running the latest version (at least NCC-3.9.2.1 or higher). Due to an upstream change in how alert retries are handled and logged, this health check is over-sensitive in NCC 3.9.0/3.9.1 and may be triggered unnecessarily. After the upgrade, re-run the check and confirm the PASS result. If you continue to see a non-PASS result, you may have a genuine issue, or the now-stale trigger in the JSON state file needs to be cleaned up (per step #5 below.) Use the following steps to confirm and resolve.Check the Alert page in the Prism web console to verify what alert events are accounted for.\n\tIf you see there are no events/notifications, but the check still fails, check :\n\n\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\n\tIf we look at the .json file, we can see that timestamps for dropped notifications increment even though there are no alerts in Prism. This is because we keep retrying to send stuck notifications:\n\n\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\tInvestigate why notifications are being dropped.\n\tOne of the reasons could be that you might be using \"admin\" account to access some APIs. You should create a separate user with admin privileges instead. Refer to 5.11 Security Guide - Creating A User Account​​​Confirm all CVMs and Cluster Services are up and stable:\n\t\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\t\tValidate that the configured SMTP server is reachable and can relay the alert notification emails from the cluster's configured source email address to the destination email address(es).Validate the configured SMTP server security settings.\n\tCheck older health_server.logs also to make sure the error has not rolled off:\n\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\tThe notifications_dropped_check health check might still complain about the same issue based on data on a CVM which was previously but not currently the Cluster Health service leader, as it is looking for any indication of issues by way of a non-zero value in the JSON state file, despite current Cluster Health leadership. 
Drops might increment only on the leader, so we have to clear the stats on the old leader where the problem may have previously occurred.\tSCP the file from the CVM where the values are 0 and replace it. To replace this for all CVMs, you can use this script:\n\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\tExample:\n\n\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\tRun the NCC health check again and ensure the check results in a PASS status.\n\nIf the above solution does not help you or none of the CVMs notifications_dropped_state.json file has the value of 0 to scp, collect the following information, and consider engaging Nutanix Support.\n\n\n\tAn output of the following NCC health check:\n\n\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\t\n\n\n\tList of alerts:\n\n\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\t\n\n\n\tAn output of the following command from the PE cluster or the PC VM depending on where the alert was received:\n\n\tnutanix@CVM:~$ allssh ls -l /home/nutanix/data/serviceability/notifications_dropped_state.json\n\t. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."} {"text":"### Human: Can you tell about NCC Health Check: host_nic_error_check. ### Assistant: The NCC health check host_nic_error_check checks whether the NICs on the host are experiencing errors.. The NCC health check host_nic_error_check checks whether the NICs on the host are experiencing errors. It checks if each NIC has fewer than 1% errors during a span of execution of 1 hour (3600 seconds).\n\nRunning the NCC Check\n\nYou can run this check as part of the complete NCC health checks:\n\nnutanix@cvm$ ncc health_checks run_all\n\nOr you can run this check individually:\n\nnutanix@cvm$ ncc health_checks run_all\n\nYou can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.\n\nSample Output\n\nFor status: PASS\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: FAIL\n\nnutanix@cvm$ ncc health_checks run_all\n\nnutanix@cvm$ ncc health_checks run_all\n\nFor status: WARN (changed in NCC-4.6.2)\n\nnutanix@cvm$ ncc health_checks run_all\n\nOutput messaging\n\nCheck ID6011DescriptionChecks whether the NICs on the hypervisor are experiencing errors.Causes of failureNIC may be failing.ResolutionsCheck NIC health; if required, replace the NIC as soon as possible. Refer to the Nutanix documentation for instructions.ImpactCluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.Alert TitleNIC packet receive an error rate highAlert Message\n\t\t\tnic_errors packet receive errors on NIC nic_name out of nic_packets in the last {period} seconds\n\t\t\tScheduleThis check is scheduled to run every hour by default.Number of failures to alertThis check will generate an alert after 1 failure.\nNote: Check_id 6011 does not generate an alert when the check fails prior to NCC-4.6.2 \nCheck ID6013DescriptionChecks whether the NICs on the hypervisor are experiencing rx_crc errors.Causes of failureNIC or its peripherals may be failing.ResolutionsReplace the NIC, cable, SFP+ adapter, or switch port. 
Refer to the Nutanix documentation for instructions.\nImpact: Cluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.\nAlert Title: NIC RX_CRC Error Rate High\nAlert Message: NIC nic_name in host host_ip has encountered many error_type.\nSchedule: This check is scheduled to run every hour by default.\nNumber of failures to alert: This check will generate an alert after 1 failure.\n\nCheck ID: 6014\nDescription: Checks whether the NICs on the hypervisor are experiencing rx_missed errors.\nCauses of failure: RX buffer size or transient condition.\nResolutions: RX buffer size might be too small to absorb current inbound traffic. This can be a transient condition. If the alert reappears frequently, contact Nutanix support.\nImpact: Cluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests.\nAlert Title: NIC RX_Missed Error Rate High\nAlert Message: NIC nic_name in host host_ip has encountered many error_type.\nSchedule: This check is scheduled to run every hour by default.\nNumber of failures to alert: This check will generate an alert after 1 failure.\n\nThis hardware-related check executes on the following hardware:\n\nNutanix NX, Dell XC, HPE, Lenovo HX, Nutanix SX, Fujitsu XF, Lenovo SR, PowerEdge, Intel, IBM, Hyve, Voyager TDC. Ensure that you are running the latest version of NCC when investigating NIC errors, whether you have found them through NCC or Health alerts in Prism. For information on upgrading NCC, refer to KB-2871.\n\nNote: ixgben version 1.7.1 is the minimum version required. If needed, the driver can be upgraded to ixgben version 1.7.17, the latest qualified driver.\n\nWhat is a CRC error?\nThe sending host computes a cyclic redundancy check (CRC) of the entire Ethernet frame and puts this value in the Ethernet frame's FCS (frame check sequence) section after the user payload. The intermediate switch and the destination host check this computed value against the value they compute to determine if the frame has been corrupted in transit.\n\nPossible causes of NIC errors:\n\nLink flap (interface continually goes up and down)\nCable disconnect/connect\nFaulty external switch port\nMisconfiguration of the external switch port\nFaulty NIC port\nFaulty cable\nFaulty SFP+ module\n\nFaulty cables and/or SFP+ modules are the most common cause of these errors. If the problem occurs regularly on a particular interface, perform controlled testing during a change window, swapping cables, modules, and switch ports as needed to isolate the faulty component.\n\nHow do I determine if the Nutanix node is experiencing NIC errors?\n\nLog in to the hypervisor and use the ethtool command to retrieve the RX counts of the NIC interface reported in the alert. Continue checking the counters to determine if they are increasing.\n\nESXi\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\tOr:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tDepending on the driver version, the output of \"ethtool -S\" may differ. Some old versions do not have details like the above. In this case, you can try the command below (ESXi only):\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tCRC errors should be avoided, as they can significantly degrade performance; a simple way to watch these counters over time is sketched below.
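As a convenience, the counters can be sampled in a short loop to confirm whether they are actively incrementing rather than reflecting old history. This is an illustrative sketch, not a command from this KB: vmnic0 is a placeholder interface name, and the 60-second interval and five iterations are arbitrary choices.

	# Sample the error counters every 60 seconds for roughly 5 minutes;
	# steadily increasing values point to an ongoing problem.
	[root@esxi]# for i in 1 2 3 4 5; do date; ethtool -S vmnic0 | egrep "rx_errors|rx_crc_errors|rx_missed_errors"; sleep 60; done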
Collect all errors on all node interfaces and from the upstream switch.\n\n\tFor each node, run the following command:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tFor the upstream switch, the command depends on the vendor. For most, the command \"show interface <interface>\" should work from a privileged console.\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tLook for the following output:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tNote: For Mellanox, use:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\tAHV\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tDepending on the driver version, the output of \"ethtool -S\" may differ. Some old versions do not have details like the above. In this case, you can try this command:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\tTo extend the ethtool output, you can include additional counters that help detect network issues:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\tThis command retrieves counters related to network errors, faults, checksums, and frame integrity, which can help identify issues that need further investigation or troubleshooting.\n\tHyper-V\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\t\n\nThe two big error counters are rx_crc_errors and rx_over_errors (and, in conjunction, rx_missed_errors/rx_fifo_errors).\n\nrx_crc_errors are caused either by faults in layer 1 (in the past, failed Twinax cables and incorrect types of fibre have been seen) or by issues with jumbo frames on the network. In an environment with 10 Gig switches that use cut-through forwarding (for example, Cisco Nexus and Arista; Cisco devices running IOS default to store-and-forward switching), any packet that comes into the switch is forwarded out the destination interface as soon as the switch has read the destination MAC address. If the frame is larger than the MTU configured on the interface, the switch truncates it at the configured MTU, and the server receives a malformed packet, which produces a CRC error.\n\nrx_over_errors are caused when the hardware receive buffer on the physical NIC is full, and some of the received packets have to be dropped at the physical NIC layer. In most cases, the values reported by this counter will equal rx_missed_errors and rx_fifo_errors. The packet drops can happen during high bursts of traffic.\nNote: rx_crc_errors_phy is the counter used on AHV for some Mellanox cards to count CRC errors. From NCC-4.3.0 onwards, the counter is added to the check. For NCC versions prior to 4.3.0, this counter should be checked manually using the steps mentioned in the solution section of this KB article.\n\nShould I be concerned that my NIC port shows any NIC errors?\nAs indicated above, NIC errors can be triggered by several scenarios and can usually be ignored. However, continuously increasing NIC errors typically point to a failing physical layer component. The Nutanix alert is triggered when the error rate is considered excessive; a rough way to estimate the error rate by hand is sketched below.
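To relate the raw counters to the check's threshold (fewer than 1% errors over an hour), the deltas between two samples can be compared by hand. The sketch below is only an illustration, shown for an AHV host: eth2 is a placeholder interface, the counter names (rx_crc_errors, rx_packets) vary by NIC driver, and the exact arithmetic NCC uses internally may differ.

	# First sample of the error and packet counters (assumes the driver exposes
	# rx_crc_errors and rx_packets in its ethtool statistics).
	[root@ahv]# E1=$(ethtool -S eth2 | awk '/ rx_crc_errors:/ {print $2}'); P1=$(ethtool -S eth2 | awk '/ rx_packets:/ {print $2}')

	# Second sample after the measurement window (3600 seconds here).
	[root@ahv]# sleep 3600; E2=$(ethtool -S eth2 | awk '/ rx_crc_errors:/ {print $2}'); P2=$(ethtool -S eth2 | awk '/ rx_packets:/ {print $2}')

	# Error percentage over the window; values well below 1% are within the threshold.
	[root@ahv]# awk -v e=$((E2-E1)) -v p=$((P2-P1)) 'BEGIN { if (p > 0) printf "%.4f%% errors\n", 100 * e / p }'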
This alert should be investigated whenever it is raised, and the source should be corrected.\n\nHow does packet error handling differ between cut-through and traditional store-and-forward switches?\nTraditional store-and-forward switches store each Ethernet frame and perform checks such as frame length validation and checksum verification. If the frame is valid, it is switched to the destination port. In the example setup below, if the non-Nutanix node generates a corrupted Ethernet frame, switch port 3 drops it. This behavior differs with cut-through (also known as low-latency) switches, which do not store the entire frame before forwarding it. The switch usually only needs to look at the first 12 bytes of a frame to make its forwarding decision. The downside of cut-through processing is that a corrupted frame is forwarded to the destination. Cut-through forwarding relies on the destination (host NIC) to perform the error check and discard the frame if necessary.\n\nExample setup for scenarios 1 and 2 below:\n\nNutanix node A ------- NIC port 0 -------- Switchport 1\nNutanix node B ------- NIC port 0 -------- Switchport 2\nNon-Nutanix node --- NIC port 0 -------- Switchport 3\n\nThe following two scenarios are the most common for NIC errors.\n\nScenario 1. NIC errors are increasing only on node A. Nutanix node B is not experiencing increasing NIC errors.\n\nPerform basic physical layer checks between Nutanix node A and switch port 1.\n\tMake sure the cables are properly connected.\n\tCheck cables for physical damage, pinched fibre, and bend radius issues.\n\tIf the cable is Twinax, make sure that it is supported. Nutanix ships Twinax cables in 1-, 3-, and 5-meter lengths. 7-meter cables might work, but the signal can be marginal and varies between switch vendors. Use a 5-meter cable if in doubt.\n\tIf the cable is optical, ensure the proper optics are used. Currently, only multimode short-reach optics (850 nm) are supported; 1310 nm optics are not.\n\tDetermine if the switch port causes the NIC errors.\tIf the NIC error is incrementing on Nutanix node A, check switch port 1 for tx output or similar transmission errors. Transmit output errors are usually a good indication that the switch port is the source of the NIC errors.\n\tFurther isolation of the NIC errors.\tIf the external switch port does not show any NIC TX errors, then the physical layer media (switch port transceiver, cable) could be causing the NIC RX errors. Proceed if the NIC RX errors continue to increase.\n\tIf using Twinax 10GE cabling, replace the cable. Proceed to step 3d if the NIC errors continue to increase.\n\tIf using optical links, start by replacing the multimode cable. Proceed to the next step if the RX CRC errors continue to increase.\n\tIf using optical links, replace the optical transceiver associated with the switch port. Proceed to the next step if the RX CRC errors continue to increase.\n\tReboot the host.\n\tReplace the host NIC.\n\t\n\nScenario 2. NIC errors are increasing on all Nutanix nodes at the same time.\n\nThis scenario usually indicates that a cut-through Ethernet switch is in use. These switches are unable to perform frame discarding at the switch input port. The first step in troubleshooting this scenario is to confirm that a cut-through switch is being used.\tCheck the external cut-through switch for incrementing error counters. Typically, a single switch port logs RX input errors, and the remaining ports report TX output errors.
In this case, the port reporting RX input errors is attached to the node generating the illegal (CRC, length) frames.\nCollect a show tech from the switch, or run show interface for all the enabled interfaces on the switch, and look for any interfaces that are flagging RX errors.\nRun ethtool every minute for 5 minutes and note the rate of increase of rx_crc errors on the Nutanix interfaces.\nNote the Nutanix nodes that are not flagging these errors, to determine which switch ports the unaffected nodes are connected to. A network diagram would be helpful.\n\nGeneral Troubleshooting\n\nRemediation Steps for CRC Errors:\n\nThere are a few different ways of looking at this.\nIf you have a layer 1 issue, you will see rx_crc_errors on some nodes, but not all.\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tThis points to the node sending out malformed packets, and a related CRC error should be seen on its upstream switch port. This is because the packets coming out of this host are never sent back to the port from which they were sent.\n\tIf you see many rx_crc_errors on all nodes, verify that there is no \"noisy neighbor\" on the network. For example, a host may be broadcasting or multicasting traffic at a larger MTU size. This requires broader network troubleshooting that may fall outside the scope of Nutanix Support.\n\nAs a general rule, rx_over_errors should be at most 1 in every 5,000 rx_packets. Use the following command on an ESXi host to determine the total number of packets and the number of rx_over_errors:\n\n[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\nRemediation Steps for Missed Errors:\n\nIf you do not see any performance impact, check the current NCC version and make sure it is upgraded to the latest compatible version. Previous versions of NCC were aggressive in reporting the NIC error rates.\nIf flow control is needed, enabling the Ethernet flow control feature allows the physical NIC to signal the switch to slow down the packet transmission rate (a minimal sketch for inspecting the current pause-frame settings with ethtool is shown below).\nMore information about this feature can be found in Wiki - Ethernet flow control.\n\nFlow control should only be enabled if it falls within the SOP for your network environment and if a considerable amount of congestion is encountered on the upstream switch. The impact on the network is as follows:\n\nFlow control operates at a lower layer than TCP or IP and thus is independent of them. In other words, flow control can be used regardless of the higher-level protocols. An important side effect is that neither TCP nor IP knows what Ethernet's flow control is doing. They operate assuming that there is no flow control other than what they may or may not provide themselves.\nFlow control functions between two directly connected network devices, and flow control frames are never forwarded between links. Thus, two computers connected via a switch will never send pause frames to each other but could send pause frames to the switch itself (and vice versa: the switch can send pause frames to the two computers).\nPause frames have a limited duration; they automatically \"expire\" after a certain time. The expiration time is set by the device that transmits the pause frame.\nA paused link does not discriminate between protocols; it prevents any data other than more pause frames from passing across the link.\n\nAnother option is to enable load balancing at the network layer so that both interfaces are used and additional buffer space is available.
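For reference, the current pause-frame (Ethernet flow control) settings on an AHV host can be inspected, and changed if your network SOP calls for it, with ethtool. This is a generic sketch rather than a Nutanix-prescribed procedure: eth2 is a placeholder interface, not every NIC driver supports these options, and the change may not persist across host reboots unless made persistent separately.

	# Show whether RX/TX pause (Ethernet flow control) is currently enabled.
	[root@ahv]# ethtool -a eth2

	# Enable RX and TX pause frames only if this matches your network SOP and the
	# upstream switch port is configured for flow control as well.
	[root@ahv]# ethtool -A eth2 rx on tx on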
See KB-5135 for details on how to configure load balancing on AHV.\n\nReceive Length Error\n\nWhat is rx_length_error?\nEach error captured by the driver is explained in the Intel 82599 datasheet:\n\nNumber of packets with receive length errors. A length error occurs if an incoming packet length field in the MAC header doesn't match the packet length. To enable the receive length error count, the HLREG.RXLNGTHERREN bit needs to be set to 1b. This register counts all packets received, regardless of L2 filtering and receiving enablement.\n\nIn most cases, rx_length_error is caused by the switch port sending BPDU packets whose length field is not set correctly.\nUse the two options below to overcome the error:\n\nPortfast\n\tIOS - set STP portfast:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tNXOS - set port type edge:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\t\n\tEnable BPDU guard and BPDU filter (on Cisco/Arista switches).\n\n\tA global BPDU Filter/Guard is highly recommended.\n\n\tIOS:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tNXOS:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tEnable BPDU Filter/Guard on a port:\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tHP switches (verify the HP switch model before running the commands below):\n\n\t[root@esxi]# ethtool -S <vmnic> | egrep \"rx_errors|rx_crc_errors|rx_missed_errors\"\n\n\tAlso, consider disabling jumbo frames on the switch ports the Nutanix nodes are connected to (HP switches can come with jumbo frames enabled by default). According to the AHV Networking Best Practice Guide, enabling jumbo frames on AHV is not recommended.\n\t\n\nNote: If you see this alert for a vUSB device, upgrade NCC to the latest version to resolve the false positive.\nIf the steps mentioned above do not resolve the issue, consider engaging Nutanix Support for further investigation.. ### Human: Now explain this as a Nutanix support agent summarizing the answer step by step. In case, you don't know the answer say - I don't know."}