https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/data-types.rst.txt
```
.. _neuron-data-types:

Data Types
==========

.. contents:: Table of contents
   :local:
   :depth: 2

Introduction
------------

:ref:`Inferentia <neurondevice_inferentia>` and :ref:`Trainium <neurondevice_trainium>` NeuronDevices include different NeuronCore versions, which support different data types. This section describes the data types supported by each NeuronCore version. For details about NeuronCore versions, see :ref:`neuron_hw_arch`.

NeuronCore v1 Data Types
------------------------

Neuron Data Types
^^^^^^^^^^^^^^^^^

Neuron enables developers to choose from multiple data types; the supported data types are FP32, FP16, and BF16. Developers can train their models on their platform of choice (e.g. EC2 P3 instances), and then easily move the trained models to EC2 Inf1 for execution.

.. raw:: html

   <style type="text/css">table, td, th { border: 1px solid black; padding: 5px; } </style>
   <table style="table-layout: fixed; width: 50%; border-spacing:0px;">
     <tbody>
       <tr>
         <th width="20%">Data Type</th>
         <th width="10%">S</th>
         <th colspan="8">Range</th>
         <th colspan="23">Precision</th>
       </tr>
       <tr>
         <td>FP32</td>
         <td bgcolor="#ad3bff">1</td>
         <td bgcolor="#AFEFA9" colspan="8">8 bits</td>
         <td bgcolor="#FAC49E" colspan="23">23 bits</td>
       </tr>
       <tr>
         <td>BF16</td>
         <td bgcolor="#ad3bff">1</td>
         <td bgcolor="#AFEFA9" colspan="8">8 bits</td>
         <td style="border-right: 0px" colspan="13" />
         <td colspan="3" />
         <td bgcolor="#FAC49E" colspan="7">7 bits</td>
       </tr>
       <tr>
         <td>FP16</td>
         <td bgcolor="#ad3bff">1</td>
         <td colspan="3" />
         <td bgcolor="#AFEFA9" colspan="5">5 bits</td>
         <td colspan="13" />
         <td bgcolor="#FAC49E" colspan="10">10 bits</td>
       </tr>
     </tbody>
   </table>
   <p/>

FP16/BF16 models
~~~~~~~~~~~~~~~~

Models natively trained in FP16/BF16 are executed in their trained data types. This makes for a straightforward migration from the training platform to Inf1.

FP32 models
~~~~~~~~~~~

The Neuron SDK supports **automatic model conversion** from FP32 to BF16 by default.
This capability allows developers to train their models using FP32 for the highest accuracy, and to achieve performance benefits without having to worry about low-precision training (e.g. no need for loss scaling during training). ML models are typically robust to FP32-to-BF16 conversion, with minimal to no impact on accuracy. Conversion accuracy is model dependent; therefore, users are encouraged to benchmark the accuracy of the auto-converted model against the original FP32 trained model.

When the compiler is supplied with an unmodified FP32 model, it automatically compiles the model to run as BF16 on Inferentia. During inference, the FP32 input data is auto-converted internally by Inferentia to BF16, and the output is converted back to FP32. For explicit FP16 inference, either use an FP16-trained model, or use an external tool (like AMP) to perform the explicit conversions.

.. _neuron-data-types-v2:

NeuronCore v2 Data Types
------------------------

NeuronCore v2 supports the following data types:

* 32-bit and 16-bit floating point (FP32 / FP16)
* TensorFloat-32 (TF32)
* Brain floating point (BFloat16)
* 8-bit floating point with configurable range and precision (cFP8)
* Unsigned 8-bit integer (UINT8)

.. note::

   Neuron Compiler support for cFP8 and UINT8 is planned for a future Neuron SDK release. For INT8, see `Neuron Compiler: Enable Neuron INT8 support <https://github.com/aws/aws-neuron-sdk/issues/36>`_ for details.

The layout of these data types is as follows:

.. raw:: html

   <style type="text/css">table, td, th { border: 1px solid black; padding: 5px; } </style>
   <table style="table-layout: fixed; width: 50%; border-spacing:0px;">
     <tbody>
       <tr>
         <th width="20%">Data Type</th>
         <th width="10%">S</th>
         <th colspan="8">Range</th>
         <th colspan="23">Precision</th>
       </tr>
       <tr>
         <td>FP32</td>
         <td bgcolor="#ad3bff">1</td>
         <td bgcolor="#AFEFA9" colspan="8">8 bits</td>
         <td bgcolor="#FAC49E" colspan="23">23 bits</td>
       </tr>
       <tr>
         <td>TF32</td>
         <td bgcolor="#ad3bff">1</td>
         <td bgcolor="#AFEFA9" colspan="8">8 bits</td>
         <td colspan="13" />
         <td bgcolor="#FAC49E" colspan="10">10 bits</td>
       </tr>
       <tr>
         <td>BF16</td>
         <td bgcolor="#ad3bff">1</td>
         <td bgcolor="#AFEFA9" colspan="8">8 bits</td>
         <td style="border-right: 0px" colspan="13" />
         <td colspan="3" />
         <td bgcolor="#FAC49E" colspan="7">7 bits</td>
       </tr>
       <tr>
         <td>FP16</td>
         <td bgcolor="#ad3bff">1</td>
         <td colspan="3" />
         <td bgcolor="#AFEFA9" colspan="5">5 bits</td>
         <td colspan="13" />
         <td bgcolor="#FAC49E" colspan="10">10 bits</td>
       </tr>
       <tr>
         <td>FP8_e5m2</td>
         <td bgcolor="#ad3bff">1</td>
         <td colspan="3" />
         <td bgcolor="#AFEFA9" colspan="5">5 bits</td>
         <td style="border-right: 0px" colspan="18" />
         <td colspan="3" />
         <td bgcolor="#FAC49E" colspan="2">2 bits</td>
       </tr>
       <tr>
         <td>FP8_e4m3</td>
         <td bgcolor="#ad3bff">1</td>
         <td style="border-right: 0px" colspan="3" />
         <td colspan="1" />
         <td bgcolor="#AFEFA9" colspan="4">4 bits</td>
         <td style="border-right: 0px" colspan="20" />
         <td bgcolor="#FAC49E" colspan="3">3 bits</td>
       </tr>
       <tr>
         <td>FP8_e3m4</td>
         <td bgcolor="#ad3bff">1</td>
         <td style="border-right: 0px" colspan="4" />
         <td colspan="1" />
         <td bgcolor="#AFEFA9" colspan="3">3 bits</td>
         <td style="border-right: 0px" colspan="19" />
         <td bgcolor="#FAC49E" colspan="4">4 bits</td>
       </tr>
       <tr>
         <td>UINT8</td>
         <td colspan="1" />
         <td bgcolor="#AFEFA9" colspan="8">8 bits</td>
         <td colspan="23" />
       </tr>
     </tbody>
   </table>
   <p/>

Model Type Conversion
^^^^^^^^^^^^^^^^^^^^^

The Neuron SDK supports automatic model conversion from FP32 to BF16 by default. This capability allows developers to train their models using FP32 for the highest accuracy, and then achieve run-time performance benefits without having to worry about low-precision training (e.g. no need for loss scaling during training). ML models are typically robust to FP32-to-BF16 conversion, with minimal to no impact on accuracy. Since conversion accuracy is model dependent, users are encouraged to benchmark the accuracy of the auto-converted model against the original FP32 trained model.

See :ref:`Mixed Precision and Performance-accuracy Tuning for Training <neuronx-cc-training-mixed-precision>` for more details on the supported data types and their properties.

The Neuron compiler offers the :option:`--auto-cast` and :option:`--auto-cast-type` options to specify automatic casting of FP32 tensors to other data types, addressing performance/accuracy tradeoffs. See the :ref:`Neuron Compiler CLI Reference Guide <neuron-compiler-cli-reference-guide>` for a description of these options.

NeuronCore v2 Rounding Modes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Because floating point values are represented by a finite number of bits, they cannot represent all real numbers accurately; calculations that exceed the precision of their data type are rounded. By default, NeuronCore v2 uses a round-to-nearest-even (RNE) algorithm. It also provides a new Stochastic Rounding mode: when enabled, the hardware rounds a floating point value up or down with a probability proportional to its distance from the two nearest representable values. This can lead to improved model convergence. Use the ``NEURON_RT_STOCHASTIC_ROUNDING_EN`` environment variable to select the rounding mode.
```
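The FP32-to-BF16 conversion and the two rounding modes described above can be modeled in a few lines of plain Python. This is an illustrative sketch of the arithmetic, not Neuron SDK code, and the function names are hypothetical: BF16 keeps the sign bit, all 8 exponent bits, and the top 7 mantissa bits of an FP32 value, and the 16 discarded bits are handled either by round-to-nearest-even or stochastically.

```python
import random
import struct

def fp32_to_bf16_rne(x: float) -> float:
    """Round an FP32 value to BF16 (1 sign, 8 exponent, 7 mantissa bits)
    using round-to-nearest, ties-to-even, then widen back to FP32."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    lsb = (bits >> 16) & 1                       # last bit BF16 keeps
    bits = (bits + 0x7FFF + lsb) & 0xFFFFFFFF    # add bias so ties go to even
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def fp32_to_bf16_stochastic(x: float) -> float:
    """Stochastic rounding: round up with probability proportional to the
    discarded fraction, so the rounding error is zero in expectation."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    if random.randrange(1 << 16) < (bits & 0xFFFF):
        bits += 1 << 16
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(fp32_to_bf16_rne(1.2345678))  # → 1.234375 (only 7 mantissa bits survive)
```

Averaged over many samples, the stochastic variant converges to the original FP32 value, which is why it can help training convergence; on NeuronCore v2 the actual mode is selected at runtime via the ``NEURON_RT_STOCHASTIC_ROUNDING_EN`` environment variable mentioned above.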
2023-09-29T20:55:15.530Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/devflows/inference/byoc-hosting-devflow.rst.txt
```
.. _byoc-hosting-devflow:

Bring Your Own Neuron Container to SageMaker Hosting (inf1)
============================================================

.. contents:: Table of Contents
   :local:
   :depth: 2

Description
-----------

|image|

.. |image| image:: /images/byoc-then-hosting-dev-flow.png
   :width: 850
   :alt: Neuron developer flow on SageMaker Neo
   :align: middle

You can use a SageMaker Notebook or an EC2 instance to compile models and build your own containers for deployment on SageMaker Hosting using ml.inf1 instances. In this developer flow, you provision a SageMaker Notebook or an EC2 instance to train and compile your model for Inferentia. You then deploy your model to SageMaker Hosting using the SageMaker Python SDK. Follow the steps below to set up your environment. Once your environment is set up, you'll be able to follow the :ref:`BYOC HuggingFace pretrained BERT container to SageMaker Tutorial </src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb>`.

.. _byoc-hosting-setenv:

Setup Environment
-----------------

1. Create a Compilation Instance:

   If using an **EC2 instance for compilation**, you can use an Inf1 instance to compile and test a model. Follow these steps to launch an Inf1 instance:

   .. include:: /general/setup/install-templates/inf1/launch-inf1-ami.rst

   If using a **SageMaker Notebook for compilation**, follow the instructions in `Get Started with Notebook Instances <https://docs.aws.amazon.com/sagemaker/latest/dg/gs-setup-working-env.html>`_ to provision the environment. It is recommended that you start with an ml.c5.4xlarge instance for the compilation. Also, increase the volume size of your SageMaker Notebook instance to accommodate the models and containers built locally; a volume of 10GB is sufficient.

   .. note::

      To compile the model in the SageMaker Notebook instance, you'll need to update the conda environments to include the Neuron Compiler and Neuron Framework Extensions. Follow the installation guide in the section :ref:`how-to-update-to-latest-Neuron-Conda-Env` to update the environments.

2. Set up the environment to compile a model, build your own container, and deploy:

   To compile your model on EC2 or a SageMaker Notebook, follow the *Set up a development environment* section in the EC2 :ref:`ec2-then-ec2-setenv` documentation.

   Refer to the `Adapting Your Own Inference Container <https://docs.aws.amazon.com/sagemaker/latest/dg/adapt-inference-container.html>`_ documentation for information on how to bring your own containers to SageMaker Hosting. Make sure to attach the **AmazonEC2ContainerRegistryPowerUser** policy to your IAM role, so you're able to build and push containers from your SageMaker Notebook instance.

   .. note::

      The container image can be created using :ref:`how-to-build-neuron-container`.
```
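As a small illustration of the container workflow above: the image you build is pushed to Amazon ECR and then referenced by a URI of the form ``<account>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>``, which is what you hand to the SageMaker Python SDK when creating the model. A minimal sketch (the helper function, account ID, and repository name are hypothetical, for illustration only):

```python
def ecr_image_uri(account_id: str, region: str, repository: str, tag: str = "latest") -> str:
    """Build the ECR image URI that SageMaker Hosting expects for a BYOC deployment."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

# Placeholder account and repository, for illustration:
print(ecr_image_uri("123456789012", "us-west-2", "neuron-bert-inference"))
# → 123456789012.dkr.ecr.us-west-2.amazonaws.com/neuron-bert-inference:latest
```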
2023-09-29T20:55:15.649Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/devflows/training/parallelcluster/parallelcluster-training.rst.txt
```
.. _parallelcluster-training:

Train your model on ParallelCluster
===================================

.. contents:: Table of Contents
   :local:
   :depth: 3

Description
-----------

This document explains how to use AWS ParallelCluster to build an HPC compute environment that uses Trn1 compute nodes to run your distributed ML training job. Once the nodes are launched, we will run a training task to confirm that the nodes are working, and use slurm commands to check the job status. In this tutorial, we will use the AWS `pcluster` command with a yaml configuration file to generate the cluster. As an example, we are going to launch multiple trn1.32xlarge nodes in our cluster.

We are going to set up our ParallelCluster infrastructure as below:

.. image:: ../../../../images/vpc-setup.png

As shown in the figure above, inside a VPC there are two subnets, one public and one private. The head node resides in the public subnet, while the compute fleet (in this case, Trn1 instances) is in the private subnet. A Network Address Translation (NAT) gateway is also needed in order for nodes in the private subnet to connect to clients outside the VPC. In the next section, we describe how to set up all the necessary infrastructure for a Trn1 ParallelCluster.

Setup environment
-----------------

1. Install prerequisite infrastructure:

   Follow `these setup <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/examples/general/network/vpc-subnet-setup.md>`_ instructions to install the VPC and all the necessary components for ParallelCluster.

2. Create and launch ParallelCluster:

   Follow `these creating cluster <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/examples/cluster-configs/trn1-16-nodes-pcluster.md>`_ instructions to launch ParallelCluster in the VPC.

3. Launch training job:

   Follow `these running training <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/examples/jobs/dp-bert-launch-job.md>`_ instructions to submit a model training script as a slurm job.
```
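For orientation, a ParallelCluster configuration for this topology generally has the shape sketched below. This is an illustrative fragment only, not the file from the linked instructions: the subnet IDs, key name, and node counts are placeholders, and the authoritative configuration lives in the `trn1-16-nodes-pcluster.md` guide linked above.

```yaml
Region: us-west-2
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.4xlarge
  Networking:
    SubnetId: subnet-PUBLIC-PLACEHOLDER    # head node sits in the public subnet
  Ssh:
    KeyName: my-keypair                    # placeholder EC2 key pair name
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: trn1
          InstanceType: trn1.32xlarge
          MinCount: 0
          MaxCount: 16
      Networking:
        SubnetIds:
          - subnet-PRIVATE-PLACEHOLDER     # compute fleet sits in the private subnet
```

With ParallelCluster 3.x, a file like this is passed to ``pcluster create-cluster --cluster-name <name> --cluster-configuration cluster.yaml``.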
2023-09-29T20:55:15.714Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/index.rst.txt
```
.. _neuron-architecture-index:

Neuron Architecture
===================

The Neuron Architecture pages provide insights into the system, software, and chip capabilities of Neuron-enabled instances. The EC2 Trn and Inf instance architecture pages give an overview of the EC2 instances powered by the Inferentia and Trainium chips (NeuronDevices), and the corresponding system features like inbox and network connectivity, memory hierarchy, and NeuronCore versions and capabilities. The Neuron model architecture fit page describes the best match between deep-learning model architectures and each NeuronCore version.

.. contents:: Table of contents
   :local:
   :depth: 1

Trn and Inf instances
---------------------

.. grid:: 2

   .. card:: EC2 Trn1/Trn1n Architecture
      :link: aws-trn1-arch
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: EC2 Inf2 Architecture
      :link: aws-inf2-arch
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: EC2 Inf1 Architecture
      :link: aws-inf1-arch
      :link-type: ref
      :class-body: sphinx-design-class-title-small

Trainium and Inferentia devices
-------------------------------

.. grid:: 2

   .. card:: AWS Trainium Architecture
      :link: trainium-arch
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: AWS Inferentia2 Architecture
      :link: inferentia2-arch
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: AWS Inferentia Architecture
      :link: inferentia-arch
      :link-type: ref
      :class-body: sphinx-design-class-title-small

NeuronCores
-----------

.. grid:: 2

   .. card:: NeuronCore-v1
      :link: neuroncores-v1-arch
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: NeuronCore-v2
      :link: neuroncores-v2-arch
      :link-type: ref
      :class-body: sphinx-design-class-title-small

Neuron Model Architecture
-------------------------

.. grid:: 2

   .. card:: Neuron Model Architecture Fit Guidelines
      :link: model_architecture_fit
      :link-type: ref
      :class-body: sphinx-design-class-title-small

Other
-----

.. grid:: 2

   .. card:: Neuron Glossary
      :link: neuron_hw_glossary
      :link-type: ref
      :class-body: sphinx-design-class-title-small

.. toctree::
   :maxdepth: 1
   :hidden:

   /general/arch/neuron-hardware/inf1-arch
   /general/arch/neuron-hardware/trn1-arch
   /general/arch/neuron-hardware/inf2-arch
   /general/arch/neuron-hardware/inferentia
   /general/arch/neuron-hardware/inferentia2
   /general/arch/neuron-hardware/trainium
   /general/arch/neuron-hardware/neuroncores-arch
   Neuron Model Architecture Fit Guidelines <model-architecture-fit>
   Neuron Glossary </general/arch/glossary>
```
2023-09-29T20:55:15.754Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-hardware/inf2-arch.rst.txt
```
.. _aws-inf2-arch:

AWS Inf2 Architecture
=====================

This page provides an architectural overview of the AWS Inf2 instances and the Inferentia2 NeuronDevices that power them (Inferentia2 devices from here on).

Inf2 Architecture
-----------------

The EC2 Inf2 instance is powered by up to 12 :ref:`Inferentia2 devices <inferentia2-arch>`, and allows customers to choose between four instance sizes:

.. list-table::
   :widths: auto
   :header-rows: 1
   :stub-columns: 1
   :align: left

   * - Instance size
     - # of Inferentia2 devices
     - vCPUs
     - Host Memory (GiB)
     - FP8/FP16/BF16/TF32 TFLOPS
     - FP32 TFLOPS
     - Device Memory (GiB)
     - Instance Memory Bandwidth (GiB/sec)
     - NeuronLink-v2 device-to-device (GiB/sec/device)
   * - inf2.xlarge
     - 1
     - 4
     - 16
     - 190
     - 47.5
     - 32
     - 820
     - N/A
   * - inf2.8xlarge
     - 1
     - 32
     - 128
     - 190
     - 47.5
     - 32
     - 820
     - N/A
   * - inf2.24xlarge
     - 6
     - 96
     - 384
     - 1140
     - 285
     - 192
     - 4920
     - 192
   * - inf2.48xlarge
     - 12
     - 192
     - 768
     - 2280
     - 570
     - 384
     - 9840
     - 192

Inf2 offers a low-latency and high-bandwidth chip-to-chip interconnect called NeuronLink-v2, which enables high-performance collective communication operations (e.g. AllReduce, AllGather). This allows sharding large models across Inferentia2 devices (e.g. via tensor parallelism), optimizing both latency and throughput. This capability is especially useful when deploying large generative models.

.. image:: /images/inf2-topology.jpg
```
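The per-instance figures in the table above scale linearly with the device count, which a short sketch makes explicit (the dictionary and function are illustrative helpers, not part of any Neuron API; per-device numbers are taken from the single-device rows of the table):

```python
# Per-device Inferentia2 figures, from the inf2.xlarge row of the table above.
PER_DEVICE = {
    "fp16_tflops": 190.0,
    "fp32_tflops": 47.5,
    "device_memory_gib": 32.0,
    "memory_bandwidth_gib_s": 820.0,
}

def instance_totals(num_devices: int) -> dict:
    """Aggregate per-device figures up to a full instance size."""
    return {key: value * num_devices for key, value in PER_DEVICE.items()}

print(instance_totals(12))  # inf2.48xlarge: 2280 TFLOPS FP16, 384 GiB device memory, ...
```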
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-hardware/trainium.rst.txt
```
.. _trainium-arch:

Trainium Architecture
---------------------

At the heart of the Trn1 instance are 16 x Trainium devices (each Trainium device includes 2 x :ref:`NeuronCore-v2 <neuroncores-v2-arch>` cores). Trainium is the second-generation purpose-built machine learning accelerator from AWS. The Trainium device architecture is depicted below:

.. image:: /images/trainium-neurondevice.png

Each Trainium device consists of:

- Compute:

  * 2x :ref:`NeuronCore-v2 <neuroncores-v2-arch>` cores, delivering 380 INT8 TOPS, 190 FP16/BF16/cFP8/TF32 TFLOPS, and 47.5 FP32 TFLOPS.

- Device Memory:

  * 32 GiB of device memory (for storing model state), with 820 GiB/sec of bandwidth.

- Data movement:

  * 1 TB/sec of DMA bandwidth, with inline memory compression/decompression.

- NeuronLink:

  * NeuronLink-v2 for device-to-device interconnect enables efficient scale-out training, as well as memory pooling between the different Trainium devices.

- Programmability:

  * Trainium supports dynamic shapes and control flow via ISA extensions of NeuronCore-v2. In addition, Trainium allows for user-programmable :ref:`rounding modes <neuron-rounding-modes>` (Round Nearest Even and Stochastic Rounding), and custom operators via the deeply embedded GPSIMD engines.

A more detailed description of all the hardware engines can be found at :ref:`NeuronCore-v2 <neuroncores-v2-arch>`.
```
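The two rounding modes mentioned above can be illustrated in plain Python. This is a software emulation for intuition only: on Trainium, rounding is performed in hardware (see :ref:`neuron-rounding-modes`), and `stochastic_round` below is an illustrative helper, not a Neuron API:

```python
import random

def stochastic_round(x: float, step: float = 1.0) -> float:
    """Round x to a multiple of `step`, rounding up with probability
    proportional to the distance from the lower multiple. In expectation
    this preserves the value, which reduces the bias that accumulates
    when many low-precision additions are chained (e.g. during training)."""
    lower = (x // step) * step
    frac = (x - lower) / step
    return lower + step if random.random() < frac else lower

random.seed(0)
# Round-to-nearest-even would map 0.25 to 0.0 every time; stochastic
# rounding preserves it on average across many trials:
mean = sum(stochastic_round(0.25) for _ in range(100_000)) / 100_000
print(round(mean, 2))  # close to 0.25
```

This is why stochastic rounding matters for low-precision (BF16/FP16) gradient accumulation: the rounding error averages out instead of compounding in one direction.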
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-hardware/inferentia.rst.txt
```
.. _inferentia-arch:

Inferentia Architecture
-----------------------

At the heart of the Inf1 instance are 16 x Inferentia devices (each Inferentia device includes 4 x :ref:`NeuronCore-v1 <neuroncores-v1-arch>` cores), as depicted below:

.. image:: /images/inferentia-neurondevice.png

Each Inferentia device consists of:

- Compute:

  * 4x :ref:`NeuronCore-v1 <neuroncores-v1-arch>` cores, delivering 128 INT8 TOPS and 64 FP16/BF16 TFLOPS.

- Device Memory:

  * 8 GiB of device DRAM memory (for storing parameters and intermediate state), with 50 GiB/sec of bandwidth.

- NeuronLink:

  * Enables co-optimization of latency and throughput via the :ref:`Neuron Core Pipeline <neuroncore-pipeline>` technology.
```
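The device-memory figures above put a hard lower bound on inference latency for memory-bound models, since the weights must be streamed from DRAM. A rough, illustrative estimate (the helper and the 1B-parameter model are hypothetical, not from the source):

```python
def min_weight_read_time_ms(num_params: float, bytes_per_param: int,
                            bandwidth_gib_s: float) -> float:
    """Lower bound on the time to stream all weights from device memory
    once, ignoring on-chip caching and compute/IO overlap."""
    total_bytes = num_params * bytes_per_param
    return total_bytes / (bandwidth_gib_s * 2**30) * 1000.0

# A hypothetical 1B-parameter FP16 model (2 GB of weights, fitting in the
# 8 GiB device DRAM) on one Inferentia device at 50 GiB/sec:
t = min_weight_read_time_ms(1e9, 2, 50.0)
print(f"{t:.0f} ms per full weight sweep")
```

This back-of-the-envelope bound is one way to see why per-token decoder execution (discussed in the model-fit guidelines) is memory-bandwidth limited.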
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/index.rst.txt
```
.. _neuron-features-index:

Neuron Features
===============

Neuron features provide insights into Neuron capabilities that enable high performance and improve the usability of developing and deploying deep learning acceleration on top of Inferentia- and Trainium-based instances.

.. grid:: 2

   .. card:: Data Types
      :link: neuron-data-types
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: Neuron Rounding Modes
      :link: neuron-rounding-modes
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: Neuron Batching
      :link: neuron-batching
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: NeuronCore Pipeline
      :link: neuroncore-pipeline
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: Neuron Persistent Cache
      :link: neuron-caching
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: Neuron Collective Communication
      :link: feature_cccom
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: Neuron Control Flow
      :link: feature-control-flow
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: Neuron Custom C++ Operators
      :link: feature-custom-c++-operators
      :link-type: ref
      :class-body: sphinx-design-class-title-small

   .. card:: Neuron Dynamic Shapes
      :link: dynamic-shapes
      :link-type: ref
      :class-body: sphinx-design-class-title-small

.. toctree::
   :maxdepth: 1
   :hidden:

   Data Types <data-types>
   Rounding Modes <rounding-modes>
   Neuron Batching </general/arch/neuron-features/neuroncore-batching>
   NeuronCore Pipeline </general/arch/neuron-features/neuroncore-pipeline>
   Neuron Persistent Cache <neuron-caching>
   Collective Communication <collective-communication>
   control-flow
   custom-c++-operators
   dynamic-shapes
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-hardware/trn1-arch.rst.txt
```
.. _aws-trn1-arch:

AWS Trn1/Trn1n Architecture
===========================

This page provides an architectural overview of the AWS Trn1/Trn1n instances and the corresponding :ref:`Trainium <trainium-arch>` NeuronDevices that power them (Trainium devices from here on).

.. contents:: Table of contents
   :local:
   :depth: 2

.. _trn1-arch:

Trn1/Trn1n Architecture
-----------------------

The EC2 Trn1/Trn1n instance is powered by up to 16 :ref:`Trainium <trainium-arch>` devices.

.. list-table::
   :widths: auto
   :header-rows: 1
   :stub-columns: 1
   :align: left

   * - Instance size
     - # of Trainium devices
     - vCPUs
     - Host Memory (GiB)
     - FP8/FP16/BF16/TF32 TFLOPS
     - FP32 TFLOPS
     - Device Memory (GiB)
     - Device Memory Bandwidth (GiB/sec)
     - NeuronLink-v2 device-to-device (GiB/sec/device)
     - EFA bandwidth (Gbps)
   * - Trn1.2xlarge
     - 1
     - 8
     - 32
     - 190
     - 47.5
     - 32
     - 820
     - N/A
     - up to 25
   * - Trn1.32xlarge
     - 16
     - 128
     - 512
     - 3,040
     - 760
     - 512
     - 13,120
     - 384
     - 800
   * - Trn1n.32xlarge
     - 16
     - 128
     - 512
     - 3,040
     - 760
     - 512
     - 13,120
     - 768
     - 1,600

The Trn1.2xlarge instance size allows customers to train their models on a single Trainium device, which is useful for small model training as well as model experimentation.

The Trn1.32xlarge and Trn1n.32xlarge instance sizes come with a high-bandwidth and low-latency NeuronLink-v2 device-to-device interconnect, which utilizes a 4D-HyperCube topology. This is useful for collective communication between the Trainium devices during scale-out training, as well as for pooling the memory capacity of all Trainium devices, making it directly addressable from each one of the devices.

In a Trn1/Trn1n server, the Trainium devices are connected in a 2D Torus topology, as depicted below:

.. image:: /images/trn1-topology.png

The Trn1/Trn1n instances are also available in an EC2 UltraCluster, which enables customers to scale Trn1/Trn1n instances to over 30,000 Trainium devices and leverage the AWS-designed non-blocking petabit-scale EFA networking infrastructure.

.. image:: /images/ultracluster-1.png
```
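For scale-out training, the numbers above translate directly into data-parallel sizing: a Trn1.32xlarge exposes 16 Trainium devices x 2 NeuronCore-v2 = 32 workers, and the global batch size is the per-worker batch multiplied by the total worker count. A minimal sketch (the helper name is illustrative, not a Neuron API):

```python
CORES_PER_TRN1_32XLARGE = 32  # 16 Trainium devices x 2 NeuronCore-v2 each

def global_batch_size(per_core_batch: int, num_instances: int,
                      cores_per_instance: int = CORES_PER_TRN1_32XLARGE) -> int:
    """Global batch under pure data parallelism: one worker per NeuronCore."""
    return per_core_batch * num_instances * cores_per_instance

# 4 Trn1.32xlarge instances with a batch of 8 per core:
print(global_batch_size(8, 4))  # 1024
```

The same arithmetic is what makes an UltraCluster deployment (over 30,000 devices) a question of how large a global batch the optimizer can tolerate.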
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/model-architecture-fit.rst.txt
```
.. _model_architecture_fit:

Model Architecture Fit Guidelines
=================================

.. contents:: Table of contents
   :local:
   :depth: 2

Introduction
$$$$$$$$$$$$

The AWS Neuron SDK enables you to train and deploy a wide range of deep learning models on :ref:`EC2 Inf1 <aws-inf1-arch>`, :ref:`EC2 Inf2 <aws-inf2-arch>` and :ref:`EC2 Trn1/Trn1n <aws-trn1-arch>` instances, which are powered by :ref:`Inferentia <inferentia-arch>`, :ref:`Inferentia2 <inferentia2-arch>` and :ref:`Trainium <trainium-arch>` devices. The table below details the NeuronDevices and NeuronCores behind each instance:

.. list-table::
   :widths: auto
   :header-rows: 1
   :stub-columns: 1
   :align: left

   * - Instance
     - NeuronDevices
     - NeuronCores
     - # NeuronCores in a NeuronDevice
   * - :ref:`EC2 Trn1 <aws-trn1-arch>`
     - 16 x :ref:`Trainium <trainium-arch>`
     - 32 x :ref:`NeuronCore-v2 <neuroncores-v2-arch>`
     - 2
   * - :ref:`EC2 Trn1n <aws-trn1-arch>`
     - 16 x :ref:`Trainium <trainium-arch>`
     - 32 x :ref:`NeuronCore-v2 <neuroncores-v2-arch>`
     - 2
   * - :ref:`EC2 Inf2 <aws-inf2-arch>`
     - 12 x :ref:`Inferentia2 <inferentia2-arch>`
     - 24 x :ref:`NeuronCore-v2 <neuroncores-v2-arch>`
     - 2
   * - :ref:`EC2 Inf1 <aws-inf1-arch>`
     - 16 x :ref:`Inferentia <inferentia-arch>`
     - 64 x :ref:`NeuronCore-v1 <neuroncores-v1-arch>`
     - 4

This document describes which types of deep learning model architectures are a good fit for :ref:`Inferentia <inferentia-arch>`, :ref:`Inferentia2 <inferentia2-arch>` and :ref:`Trainium <trainium-arch>` powered instances.

Model Support Overview
$$$$$$$$$$$$$$$$$$$$$$

.. _model-architecture-fit-neuroncore-v2:

AWS Trainium and AWS Inferentia2 (NeuronCore-v2)
------------------------------------------------

*Last update* - 05/05/2023

.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left
   :class: table-smaller-font-size

   * - Model Family/ Neural Network Architecture
     - Category
     - Hardware Architecture Fit
     - Training with PyTorch Neuron (``torch-neuronx``)
     - Inference with PyTorch Neuron (``torch-neuronx``)
     - Inference with TensorFlow Neuron (``tensorflow-neuronx``)
   * - Transformer Encoders
     - NLP
     - Good Fit
     - Supported
     - Supported
     - Supported
   * - Transformer Decoders
     - NLP
     - Good Fit
     - Supported
     - Supported
     - :ref:`Roadmap Item <neuron_roadmap>`
   * - Transformer Encoder-Decoder (Sequence-to-sequence)
     - NLP
     - Good Fit
     - Supported
     - :ref:`Roadmap Item <neuron_roadmap>`
     - :ref:`Roadmap Item <neuron_roadmap>`
   * - LSTMs
     - NLP and Computer Vision
     - Good Fit
     - :ref:`Roadmap Item <neuron_roadmap>`
     - :ref:`Roadmap Item <neuron_roadmap>`
     - :ref:`Roadmap Item <neuron_roadmap>`
   * - Vision Transformer
     - Computer Vision
     - Good Fit
     - Supported
     - Supported
     - :ref:`Roadmap Item <neuron_roadmap>`
   * - Diffusion models
     - Computer Vision
     - Good Fit
     - :ref:`Roadmap Item <neuron_roadmap>`
     - Supported
     - :ref:`Roadmap Item <neuron_roadmap>`
   * - Convolutional Neural Network (CNN) models
     - Computer Vision
     - Good Fit
     - :ref:`Roadmap Item <neuron_roadmap>`
     - Supported
     - :ref:`Roadmap Item <neuron_roadmap>`
   * - R-CNNs
     - Computer Vision
     - Good Fit
     - :ref:`Roadmap Item <neuron_roadmap>`
     - :ref:`Roadmap Item <neuron_roadmap>`
     - :ref:`Roadmap Item <neuron_roadmap>`

.. note::

   Supported means that at least one model from the model family or neural-network architecture has already been enabled.

.. _model-architecture-fit-neuroncore-v1:

AWS Inferentia (NeuronCore v1)
------------------------------

*Last update* - 05/05/2023

.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left
   :class: table-smaller-font-size

   * - Model Family/ Neural Network Architecture
     - Category
     - Hardware Architecture Fit
     - PyTorch Neuron (``torch-neuron``)
     - TensorFlow Neuron (``tensorflow-neuron (TF 1.x)``)
     - TensorFlow Neuron (``tensorflow-neuron (TF 2.x)``)
   * - Transformer Encoders
     - NLP
     - Good Fit
     - Supported
     - Supported
     - Supported
   * - Transformer Decoders
     - NLP
     - Not a Good Fit
     - NA
     - NA
     - NA
   * - Transformer Encoder-Decoder (Sequence-to-sequence)
     - NLP
     - Not a Good Fit
     - NA
     - NA
     - NA
   * - LSTMs
     - NLP and Computer Vision
     - Good Fit
     - Supported
     - NA
     - NA
   * - Vision Transformer
     - Computer Vision
     - Good Fit
     - Supported
     - :ref:`Roadmap Item <neuron_roadmap>`
     - :ref:`Roadmap Item <neuron_roadmap>`
   * - Diffusion models
     - Computer Vision
     - Good Fit
     - :ref:`Roadmap Item <neuron_roadmap>`
     - NA
     - NA
   * - Convolutional Neural Network (CNN) models
     - Computer Vision
     - Good Fit
     - Supported
     - Supported
     - :ref:`Roadmap Item <neuron_roadmap>`
   * - R-CNNs
     - Computer Vision
     - Supported with limitations
     - Supported with limitations
     - NA
     - NA

.. note::

   Supported means that at least one model from the model family or neural-network architecture has already been enabled.

Clarifications on Inferentia (1st generation) Model Architecture
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

Natural Language Processing (NLP) Models with Transformer
----------------------------------------------------------

Transformer Encoders
~~~~~~~~~~~~~~~~~~~~~

Autoencoding models use only the encoder part of the Transformer architecture. Representatives of this family include models like **BERT, distilBERT, XLM-BERT, Roberta, BioBert**, etc. Since the encoding process in these models can be parallelized, you can expect these models to run well both on Inferentia and Trainium.

- **Architecture Fit** - Autoencoding models are a good fit for Inferentia.
- **Neuron Support** - The Neuron SDK supports running autoencoding models for inference on Inferentia. Please see the :ref:`benchmark results <appnote-performance-benchmark>` for these models. To get started with NLP models you can refer to the Neuron :ref:`PyTorch <pytorch-nlp>`, :ref:`TensorFlow <tensorflow-nlp>` and :ref:`MXNet <mxnet-nlp>` NLP tutorials.

Decoder models, or autoregressive models with Transformer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Autoregressive models keep only the decoder part of the Transformer architecture. Representatives of this family include models like **GPT-3, GPT-2**, etc.

- **Architecture Fit** - Autoregressive models are not a good fit for Inferentia. The decoder is usually the most significant performance bottleneck, since it must be executed once per output token, causing frequent accesses to memory. Because of this, these models typically achieve the best performance only when the decoder maximum sequence length is short (e.g., 128).
- **Neuron Support** - The Neuron SDK does not support inference of autoregressive models on Inferentia.

Encoder-decoder models, or sequence-to-sequence models with Transformer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sequence-to-sequence models use both the encoder and the decoder of the Transformer architecture. Representatives of this family include models like **T5, Bart, Marian MT**, etc.

- **Architecture Fit** - Sequence-to-sequence models are not a good fit for Inferentia. As with the decoder models above, the decoder is usually the most significant performance bottleneck, since it must be executed once per output token, causing frequent accesses to memory. Because of this, even when you enable these models to run on Inferentia by wrapping the decoder, they typically achieve the best performance only when the decoder maximum sequence length is short (e.g., 128).
- **Neuron Support** - The Neuron SDK does not support inference of sequence-to-sequence models on Inferentia out of the box. However, you can run a model by defining wrappers around its encoder and decoder portions. For an example, please refer to the :ref:`MarianMT tutorial </src/examples/pytorch/Transformer-marianmt.ipynb>` on Inferentia for more details.

Computer Vision Models
----------------------

Convolutional Neural Network (CNN) based models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CNN-based models are used for applications in image classification and object detection. Representatives of this family include models like **ResNet, ResNext, VGG, YOLO, SSD**, etc.

- **Architecture Fit** - CNN-based models are a good fit for Inferentia.
- **Neuron Support** - The Neuron SDK supports inference of CNN-based models on Inferentia. Please see the :ref:`benchmark results <appnote-performance-benchmark>` for these models. To get started with these models you can refer to the Neuron :ref:`PyTorch <pytorch-computervision>`, :ref:`TensorFlow <tensorflow-computervision>` and :ref:`MXNet <mxnet-computervision>` tutorials.

Region-based CNN (R-CNN) models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Region-based CNN (R-CNN) models are commonly used for object detection and image segmentation tasks. Popular variants of the R-CNN model include R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN.

.. _rcnn_limitations_inf1:

- **Architecture Fit** - R-CNN models can have a few limitations and considerations on Inferentia:

  **RoI Align operators**: At this time, RoI Align operators typically cannot run efficiently on NeuronCore-v1. As a result, RoI Align operators are mapped directly to CPU during compilation. R-CNN models that predict a low number of bounding boxes (<100) experience the best performance on Inferentia.
  **Large ResNet backbone**: R-CNNs that have a large ResNet backbone (such as ResNet-50 or ResNet-101) experience the greatest performance improvement on Inferentia, because a larger portion of the R-CNN compute is accelerated.

- **Neuron Support** - Torch models must be traceable using :func:`torch.jit.trace` for compilation on Inferentia. Most `Detectron2 <https://github.com/facebookresearch/detectron2>`_-based R-CNNs are not jit-traceable by default, so they cannot readily be compiled for optimized inference on Inferentia. The :ref:`torch-neuron-r-cnn-app-note` application note demonstrates how to compile and improve the performance of R-CNN models on Inferentia. It also provides an end-to-end example of running a Detectron2 R-CNN on Inferentia.

Models with Long Short-Term Memory (LSTM) networks
--------------------------------------------------

LSTMs use an internal state to process sequential data. LSTMs are commonly used to model temporal sequences of data in language processing and computer vision applications.

- **Architecture Fit** - Models with LSTM cells are a good fit for Inferentia.
- **Neuron Support** - Models with LSTM networks are supported on Inferentia; please see :ref:`torch_neuron_lstm_support`.

Diffusion Models
----------------

- **Architecture Fit** - Diffusion models are a good fit for Inferentia.
- **Neuron Support** - Diffusion models are not supported on Inferentia as of the latest Neuron release. Please track the :ref:`Neuron Roadmap <neuron_roadmap>` for details.

Known Issues on Inferentia (NeuronCore v1)
------------------------------------------

Support of large models (impacts ``torch-neuron`` and ``tensorflow-neuron`` (TF1.x))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. _2gb_protobuf_issue:

During compilation on Inferentia (NeuronCore v1), ``torch-neuron`` and ``tensorflow-neuron (TF1.x)`` export a protobuf that contains the model's graph structure and weights.
This causes an issue when the total size of the model's weights exceeds the 2 GB limitation of protobufs. As a result, customers who want to run large models such as **RegNet**, **Stable Diffusion**, and **t5-11b** might run into protobuf errors during compilation. This is a known issue related to the compilation process, not a hardware-dependent issue. Allowing large models like these to be compiled for inference on Inferentia (NeuronCore v1) is a feature that we intend to address in a future release. Please track the :ref:`Neuron Roadmap <neuron_roadmap>` for details.

.. note::

   Neuron release 2.5.0 added experimental support for tracing models larger than 2 GB in ``tensorflow-neuron (TF2.x)``; please see the ``extract-weights`` flag in :ref:`tensorflow-ref-neuron-tracing-api`.
```
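The 2 GB protobuf ceiling described above can be estimated before compilation by summing the serialized size of the model's weights. A framework-agnostic sketch; the helper is illustrative and not part of the Neuron SDK, and it assumes you can enumerate each weight tensor's shape and element width:

```python
PROTOBUF_LIMIT_BYTES = 2**31  # ~2 GB hard limit on a serialized protobuf

def exceeds_protobuf_limit(weight_shapes, bytes_per_element=4):
    """Return True if the total weight bytes exceed the 2 GB protobuf limit.

    weight_shapes: iterable of tensor shapes, e.g. [(1024, 1024), ...].
    bytes_per_element: 4 for FP32 weights, 2 for FP16/BF16.
    """
    total = 0
    for shape in weight_shapes:
        n = 1
        for dim in shape:
            n *= dim
        total += n * bytes_per_element
    return total > PROTOBUF_LIMIT_BYTES

# A BERT-base-sized model (~110M FP32 parameters) fits comfortably,
# while a 1B-parameter FP32 model would trip the limit:
print(exceeds_protobuf_limit([(110_000_000,)]))    # False
print(exceeds_protobuf_limit([(1_000_000_000,)]))  # True
```

A check like this makes it clear why **t5-11b**-scale models hit the error: the graph structure is small, and it is the weight payload that overflows the serialized message.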
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _model_architecture_fit: Model Architecture Fit Guidelines ================================= .. contents:: Table of contents :local: :depth: 2 Introduction $$$$$$$$$$$$ AWS Neuron SDK enables you to train and deploy a wide range of deep learning models on :ref:`EC2 Inf1 &lt;aws-inf1-arch&gt;`, :ref:`EC2 inf2 &lt;aws-inf2-arch&gt;` and :ref:`EC2 Trn1/Trn1n &lt;aws-trn1-arch&gt;` instances , which are powered by :ref:`Inferentia &lt;inferentia-arch&gt;`, :ref:`Inferentia2 &lt;inferentia2-arch&gt;` and :ref:`Trainium &lt;trainium-arch&gt;` devices. The below table provides details of the NeuronDevices and NeuronCores enabling each instance: .. list-table:: :widths: auto :header-rows: 1 :stub-columns: 1 :align: left * - Instance - NeuronDevices - NeuronCores - # NeuronCores in a NeuronDevice * - :ref:`EC2 Trn1 &lt;aws-trn1-arch&gt;` - 16 x :ref:`Trainium &lt;trainium-arch&gt;` - 32 x :ref:`NeuronCore-v2 &lt;neuroncores-v2-arch&gt;` - 2 * - :ref:`EC2 Trn1n &lt;aws-trn1-arch&gt;` - 16 x :ref:`Trainium &lt;trainium-arch&gt;` - 32 x :ref:`NeuronCore-v2 &lt;neuroncores-v2-arch&gt;` - 2 * - :ref:`EC2 inf2 &lt;aws-inf2-arch&gt;` - 12 x :ref:`Inferentia2 &lt;inferentia2-arch&gt;` - 24 x :ref:`NeuronCore-v2 &lt;neuroncores-v2-arch&gt;` - 2 * - :ref:`EC2 Inf1 &lt;aws-inf1-arch&gt;` - 16 x :ref:`Inferentia &lt;inferentia-arch&gt;` - 64 x :ref:`NeuronCore-v1 &lt;neuroncores-v1-arch&gt;` - 4 This document describes what types of deep learning model architectures are a good fit for :ref:`Inferentia &lt;inferentia-arch&gt;`, :ref:`Inferentia2 &lt;inferentia2-arch&gt;` and :ref:`Trainium &lt;trainium-arch&gt;` powered instances. Model Support Overview $$$$$$$$$$$$$$$$$$$$$$ .. _model-architecture-fit-neuroncore-v2: AWS Trainium and AWS Inferentia2 (NeuronCore-v2) ------------------------------------------------ *Last update* - 05/05/2023 .. 
list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - Model Family/ Neural Network Architecture - Category - Hardware Architecture - Training with PyTorch Neuron (``torch-neuronx``) - Inference with PyTorch Neuron (``torch-neuronx``) - Inference with TensorFlow Neuron (``tensorflow-neuronx``) * - Transformer Encoders - NLP - Good Fit - Supported - Supported - Supported * - Transformer Decoders - NLP - Good Fit - Supported - Supported - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` * - Transformer Encoder-Decoder (Sequence-to-sequence) - NLP - Good Fit - Supported - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` * - LSTMs - NLP and Computer Vision - Good Fit - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` * - Vision Transformer - Computer Vision - Good Fit - Supported - Supported - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` * - Diffusion models - Computer Vision - Good Fit - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - Supported - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` * - Convolutional Neural Network (CNN) models - Computer Vision - Good Fit - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - Supported - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` * - R-CNNs - Computer Vision - Good Fit - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` .. note:: Supported means that at least a single model of the model family or the neural-network architecture already enabled. .. _model-architecture-fit-neuroncore-v1: AWS Inferentia (NeuronCore v1) ------------------------------ *Last update* - 05/05/2023 .. 
list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - Model Family/ Neural Network Architecture - Category - Hardware Architecture - PyTorch Neuron (``torch-neuron``) - TensorFlow Neuron (``tensorflow-neuron (TF 1.x)``) - TensorFlow Neuron (``tensorflow-neuron (TF 2.x)``) * - Transformer Encoders - NLP - Good Fit - Supported - Supported - Supported * - Transformer Decoders - NLP - Not a Good Fit - NA - NA - NA * - Transformer Encoder-Decoder (Sequence-to-sequence) - NLP - Not a Good Fit - NA - NA - NA * - LSTMs - NLP and Computer Vision - Good Fit - Supported - NA - NA * - Vision Transformer - Computer Vision - Good Fit - Supported - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` * - Diffusion models - Computer Vision - Good Fit - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` - NA - NA * - Convolutional Neural Network (CNN) models - Computer Vision - Good Fit - Supported - Supported - :ref:`Roadmap Item &lt;neuron_roadmap&gt;` * - R-CNNs - Computer Vision - Supported with limitations - Supported with limitations - NA - NA .. note:: Supported means that at least a single model of the model family or the neural-network architecture already enabled. Clarifications on Inferentia (1st generation) Model Architecture $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ Natural Language Processing (NLP) Models with Transformer ---------------------------------------------------------- Transformer Encoders ~~~~~~~~~~~~~~~~~~~~~ Autoencoding models use only the encoder part of the Transformer architecture. Representatives of this family include models like **BERT, distilBERT, XLM-BERT, Roberta, BioBert**, etc. Since the encoding process in these models can be parallelized, you can expect these models to run well both on Inferentia and Trainium. - **Architecture Fit** - Autoencoding models are a good fit for Inferentia. 
- **Neuron Support** - The Neuron SDK supports running autoencoding models for inference on Inferentia. Please see the :ref:`benchmark results <appnote-performance-benchmark>` of these models. To get started with NLP models you can refer to the Neuron :ref:`PyTorch <pytorch-nlp>`, :ref:`TensorFlow <tensorflow-nlp>` and :ref:`MXNet <mxnet-nlp>` NLP tutorials. Decoder models, or autoregressive models with Transformer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Autoregressive models keep only the decoder part of the Transformer architecture. Representatives of this family include models like **GPT-3, GPT-2**, etc. - **Architecture Fit** - Autoregressive models are not a good fit for Inferentia. Usually the decoder part in these models is the most significant performance bottleneck, since it must be executed once per output token, causing frequent memory accesses. Due to this, these models typically experience the best performance only when the decoder maximum sequence length is short (e.g., 128). - **Neuron Support** - The Neuron SDK does not support autoregressive model inference on Inferentia. Encoder-decoder models, or sequence-to-sequence models with Transformer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Sequence-to-sequence models use both the encoder and decoder parts of the Transformer architecture. Representatives of this family include models like **T5, Bart, Marian MT**, etc. - **Architecture Fit** - Sequence-to-sequence models are not a good fit for Inferentia. As with the decoder models explained above, the decoder part of these sequence-to-sequence models is usually the most significant performance bottleneck, since it must be executed once per output token, causing frequent memory accesses. Due to this, even when you enable these models to run on Inferentia by wrapping the decoder part, they typically experience the best performance only when the decoder maximum sequence length is short (e.g., 128).
- **Neuron Support** - The Neuron SDK does not support sequence-to-sequence model inference on Inferentia out of the box. However, you can run such a model by defining wrappers around the encoder and decoder portions of it. For example, please refer to the :ref:`MarianMT tutorial </src/examples/pytorch/Transformer-marianmt.ipynb>` on Inferentia for more details. Computer Vision Models ---------------------- Convolutional Neural Network (CNN) based models ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ CNN based models are used for applications in image classification and object detection. Representatives of this family include models like **ResNet, ResNext, VGG, YOLO, SSD**, etc. - **Architecture Fit** - CNN based models are a good fit for Inferentia. - **Neuron Support** - The Neuron SDK supports CNN based model inference on Inferentia. Please see the :ref:`benchmark results <appnote-performance-benchmark>` of these models. To get started with these models you can refer to the Neuron :ref:`PyTorch <pytorch-computervision>`, :ref:`TensorFlow <tensorflow-computervision>` and :ref:`MXNet <mxnet-computervision>` tutorials. Region-based CNN (R-CNN) models ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Region-based CNN (R-CNN) models are commonly used for object detection and image segmentation tasks. Popular variants of the R-CNN model include R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN. .. _rcnn_limitations_inf1: - **Architecture Fit** - R-CNN models can have a few limitations and considerations on Inferentia: **RoI Align operators**: At this time, RoI Align operators typically cannot run efficiently on NeuronCore v1. As a result, RoI Align operators are mapped directly to CPU during compilation. R-CNN models that predict a low number of bounding boxes (<100) experience the best performance on Inferentia.
**Large ResNet backbone**: R-CNNs that have a large ResNet backbone (such as ResNet-50 or ResNet-101) experience the greatest performance improvement on Inferentia, because a larger portion of the R-CNN compute is accelerated. - **Neuron Support** - Torch models must be traceable using :func:`torch.jit.trace` for compilation on Inferentia. Most `Detectron2 <https://github.com/facebookresearch/detectron2>`_-based R-CNNs are not jit traceable by default, so they cannot readily be compiled for optimized inference on Inferentia. The :ref:`torch-neuron-r-cnn-app-note` application note demonstrates how to compile and improve the performance of R-CNN models on Inferentia. It also provides an end-to-end example of running a Detectron2 R-CNN on Inferentia. Models with Long Short-Term Memory (LSTM) networks -------------------------------------------------- LSTMs use an internal state to process sequential data. LSTMs are commonly used to model temporal sequences of data in language processing and computer vision applications. - **Architecture Fit** - Models with LSTM cells are a good fit for Inferentia. - **Neuron Support** - Models with LSTM networks are supported on Inferentia; please see :ref:`torch_neuron_lstm_support`. Diffusion Models ---------------- - **Architecture Fit** - Diffusion models are a good fit for Inferentia. - **Neuron Support** - Diffusion models are not supported on Inferentia as of the latest Neuron release. Please track the :ref:`Neuron Roadmap <neuron_roadmap>` for details. Known Issues on Inferentia (NeuronCore v1) ------------------------------------------ Support of large models (impacts ``torch-neuron`` and ``tensorflow-neuron`` (TF1.x)) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. _2gb_protobuf_issue: During compilation on Inferentia (NeuronCore v1), ``torch-neuron`` and ``tensorflow-neuron (TF1.x)`` export a protobuf that contains the model's graph structure and weights.
This causes an issue when the total size of the model's weights exceeds the 2GB limitation of protobufs. As a result, customers who want to run large models such as **RegNet**, **Stable Diffusion**, and **t5-11b** might run into protobuf errors during compilation. This is a known issue related to the compilation process, not a hardware-dependent issue. Allowing large models like this to be compiled for inference on Inferentia (NeuronCore v1) is a feature that we intend to address in a future release. Please track the :ref:`Neuron Roadmap <neuron_roadmap>` for details. .. note:: Neuron release 2.5.0 added experimental support for tracing models larger than 2GB in ``tensorflow-neuron (TF2.x)``; please see the ``extract-weights`` flag in :ref:`tensorflow-ref-neuron-tracing-api`.
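Since the 2GB limit applies to the serialized weights, a quick back-of-the-envelope check can anticipate whether a given model will hit it. The sketch below is illustrative only; the parameter counts used in the examples are rough public figures, not official Neuron numbers:

```python
# Rough sketch: estimate whether a model's serialized FP32 weights would
# exceed the 2GB protobuf limit. Parameter counts below are illustrative.
PROTOBUF_LIMIT_BYTES = 2 * 1024**3  # 2 GiB

def weight_bytes(num_params: int, dtype_size: int = 4) -> int:
    """Approximate size of the model's weights in bytes (dtype_size=4 for FP32)."""
    return num_params * dtype_size

def exceeds_protobuf_limit(num_params: int, dtype_size: int = 4) -> bool:
    return weight_bytes(num_params, dtype_size) > PROTOBUF_LIMIT_BYTES

print(exceeds_protobuf_limit(110_000_000))     # ~BERT-Base scale in FP32: False
print(exceeds_protobuf_limit(11_000_000_000))  # ~t5-11b scale in FP32: True
```

A model around 110M FP32 parameters serializes to roughly 440MB and fits comfortably, while a multi-billion-parameter model is far beyond the limit regardless of data type.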
2023-09-29T20:55:16.138Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/neuroncore-pipeline.rst.txt
``` .. _neuroncore-pipeline: NeuronCore Pipeline =================== NeuronCore Pipeline is a Neuron software feature that shards a compute-graph across multiple NeuronCores, caches the model parameters in each core’s on-chip memory (cache), and then streams inference requests across the cores in a pipelined manner. Based on the number of NeuronCores selected, the model might get seamlessly sharded across up to 16 Inferentia devices (i.e. 64 NeuronCores). This enables users to optimize for both throughput and latency, as it enables the NeuronCores to process neural-networks with locally cached data and avoid the cost of accessing external memory. |Image:| One benefit to this approach is that NeuronCore Pipeline can typically hit maximal hardware efficiency without the need for batching (e.g. BERT, ResNet50). For maximal performance, users should choose an instance size that can cache the entire model by using sufficient NeuronCores. Inf1 instance types have different numbers of Inferentia devices, each of which has 4 NeuronCores, as shown here: https://aws.amazon.com/ec2/instance-types/inf1/ To enable the NeuronCore Pipeline optimization, the compiler should be invoked with the following flag: ``--neuroncore-pipeline-cores N``. The number of NeuronCores is typically chosen to be the minimal number that can fit the entire model, which is currently done through a trial-and-error process (compiling to different numbers of cores and looking for the compilation success/failure message). This process will be automated in the future. A simple formula to help estimate an appropriate number of NeuronCores is :: neuroncore-pipeline-cores = 4 * round( number-of-weights-in-model/(2 * 10^7) ) This allocates a set of NeuronCores based on the size of the given model's weights, and normalizes to multiples of 4 so that full Inferentia devices are used.
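As a rough sketch, the sizing formula above can be evaluated in code; the weight counts in the examples are illustrative round numbers, not official figures:

```python
# Sketch of the sizing formula above:
#   neuroncore-pipeline-cores = 4 * round(number_of_weights / (2 * 10^7))
def pipeline_cores(num_weights: int) -> int:
    """NeuronCores for pipelining, normalized to multiples of 4
    (each Inferentia device has 4 NeuronCores)."""
    return 4 * round(num_weights / 2e7)

# Illustrative weight counts:
print(pipeline_cores(25_500_000))   # ResNet-50-sized model  -> 4
print(pipeline_cores(340_000_000))  # BERT-Large-sized model -> 68
```

By this formula a BERT-Large-sized model would request 68 cores, more than a single Inf1 instance provides (64 NeuronCores), which is why the trial-and-error process mentioned above often settles on a smaller core count in practice.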
The code snippet below shows how to compile a model with NeuronCore Pipeline for 16 NeuronCores (instance size inf1.6xlarge). :: import numpy as np import tensorflow.neuron as tfn example_input = np.zeros([1,224,224,3], dtype='float16') tfn.saved_model.compile("rn50_fp16", "rn50_fp16_compiled/1", model_feed_dict={'input_1:0' : example_input }, compiler_args = ['--neuroncore-pipeline-cores', '16']) .. |Image:| image:: ./images/NeuronCorePipelining.png ```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/neuroncore-batching.rst.txt
``` .. _neuron-batching: Neuron Batching =============== Batching refers to the process of grouping multiple samples together and processing them as a group (i.e. passing them together through the neural network). Batching is typically used as an optimization for improving throughput at the expense of higher latency (and potentially higher memory footprint). Batching considerations are slightly different between inference and training workloads, and we thus cover them separately below. .. contents:: Table of contents :local: :depth: 2 Batching in inference workloads ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ What is batched inference? ^^^^^^^^^^^^^^^^^^^^^^^^^^ Batched inference is illustrated conceptually below, with a single NeuronCore performing batched computation of a 3-layer neural network with a batch-size of 4. The NeuronCore reads the parameters for a certain layer from the external memory, and then performs the corresponding computations for all 4 inference requests before reading the next set of parameters (thus performing more compute for every parameter read from memory). .. image:: /images/batched-inference.png What are the benefits of batched Inference? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ For inference, batching is typically used as a trade-off knob between throughput and latency: a higher batch-size typically leads to better hardware utilization and thus higher throughput, but at the same time batching requires performing more computation before the first results are available, and hence leads to higher latency. .. image:: /images/tradeoffs.png To understand why batching tends to improve throughput (up to a certain max value), it is useful to consider an intuitive visual performance model called ‘the roofline model’, which provides a theoretical bound on the system’s performance: ..
image:: /images/memoryvscompute.png The X-axis indicates the arithmetic intensity (AI) of the workload, which is the ratio between the number of operations and the number of bytes read-from/written-to memory. The Y-axis indicates the theoretical extractable performance. For small AI values the workload is expected to be memory bound, while for large AI values it is expected to be compute bound. For inference workloads, AI is often approximated by dividing the model’s number of operations by its memory footprint (#params x dtype_size). To a first-order approximation, the AI value is linearly dependent on the batch-size, which means that the workload's performance (throughput) is expected to increase with the batch-size. To understand this more intuitively: for a larger batch size, Neuron can better amortize the cost of reading parameters from the external memory, and thus improve the overall hardware efficiency. It should be noted that while the roofline model can be very useful, it is not perfectly accurate (e.g. it doesn’t take into account spills/fills from/to on-chip SRAM memories), and thus users are encouraged to use it as a tool for **estimating** the optimal batch-size for their workloads.
image:: /images/memoryvscompute2.png This can be expressed via the following equation: ``batch-size(Inference) = ceiling[0.5 x (<NeuronDevice PeakFLOPS>/<NeuronDevice MemBW>) / (<model FLOPs>/(<#model-dense-params> x <dtype_size>))]`` (for NeuronDevice PeakFLOPS and MemBW, see the :ref:`trainium-arch`, :ref:`inferentia-arch` and :ref:`inferentia2-arch` pages). For example, a BF16 BERT-Large model, with a sequence length of 128, will have the following approximated batch sizes: .. list-table:: :widths: auto :header-rows: 1 :stub-columns: 1 :align: left * - Model - NeuronDevice - Peak TFLOPS (BF16) - MemBW (GB/sec) - Model GFLOPs - Model Dense Params (Millions) - Data-type size (BF16) - Approximated optimal batch-size * - BERT-Large (SeqLen=128) - Inferentia - 64 - 50 - 77.3 - 302 - 2 - 6 * - BERT-Large (SeqLen=128) - Trainium - 210 - 820 - 77.3 - 302 - 2 - 2 * - ResNet-50 - Inferentia - 64 - 50 - 7.8 - 25 - 2 - 5 * - ResNet-50 - Trainium - 210 - 820 - 7.8 - 25 - 2 - 1 We recommend evaluating multiple batch sizes and comparing the performance between them, in order to determine the optimal latency/throughput deployment point. How to set the batch-size? ^^^^^^^^^^^^^^^^^^^^^^^^^^ The Neuron compiler takes a model and its sample input as inputs for the compilation process. For example, the code snippet below will compile a model with a batch-size of 4: .. code:: import torch import torch_neuron from torchvision import models # Load the model and set it to evaluation mode model = models.resnet50(pretrained=True) model.eval() # Compile with an example input of batch size 4 image = torch.rand([4, 3, 224, 224]) model_neuron = torch.neuron.trace(model, image, dynamic_batch_size=True) # Execute with a batch of 12 images batch = torch.rand([12, 3, 224, 224]) results = model_neuron(batch) For ahead-of-time compiled inference graphs (i.e.
Inf1), dynamic batching can be used (as shown in the above code snippet) to process a larger client-side inference batch-size, and allow the framework to automatically break up the user batch (12 in our case) into smaller batch sizes to match the compiled batch-size (4 in our case). This technique increases the achievable throughput by hiding the framework-to-neuron overhead and amortizing it over a larger batch size. See :ref:`torch-neuronx-dynamic-batching` in ``torch-neuronx`` and :ref:`tensorflow-neuronx-special-flags` in ``tensorflow-neuronx``. Batching in training workloads ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Unlike inference workloads, training is inherently an offline process, and thus doesn’t have latency requirements. This means that training is almost always batched to some degree. How to determine the optimal batch-size for training workloads? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Determining the optimal batch-size for training workloads can be a non-trivial task. In most cases, we’d want to choose the largest batch-size that we can get away with. The most dominant factor in determining the optimal batch-size for training workloads is memory footprint: training workloads have a higher memory footprint compared to inference, as they require saving more tensors aside from the model parameters, such as gradients, intermediate activations (passed between the forward-pass and backward-pass), and optimizer state. If the batch-size is increased beyond a certain point, one can run out of device memory (indicated by an ‘Out of device memory’ error, typically abbreviated as OOM).
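Returning to the inference batch-size equation above: as a sanity check, the approximated batch sizes in the table can be reproduced with a short sketch, using the values taken directly from the table (BF16, so a 2-byte data type):

```python
import math

def optimal_batch_size(peak_flops, mem_bw, model_flops, dense_params, dtype_size=2):
    """batch = ceiling[0.5 * (PeakFLOPS/MemBW) / (model FLOPs / (params * dtype_size))]"""
    hw_ai = peak_flops / mem_bw                           # FLOPs/byte the hardware sustains
    model_ai = model_flops / (dense_params * dtype_size)  # FLOPs/byte at batch size 1
    return math.ceil(0.5 * hw_ai / model_ai)

# Values from the table above: (Peak FLOPS, MemBW) and (model FLOPs, dense params)
inferentia = (64e12, 50e9)
trainium   = (210e12, 820e9)
bert_large = (77.3e9, 302e6)   # SeqLen=128
resnet50   = (7.8e9, 25e6)

print(optimal_batch_size(*inferentia, *bert_large))  # 6
print(optimal_batch_size(*trainium, *bert_large))    # 2
print(optimal_batch_size(*inferentia, *resnet50))    # 5
print(optimal_batch_size(*trainium, *resnet50))      # 1
```

These match the "Approximated optimal batch-size" column of the table, and the function can be reused as a starting point for other models before the recommended empirical batch-size sweep.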
To estimate the memory footprint of a model, we look at the different contributors: 1. Weights and gradients: typically 2B each, thus 4B per parameter. 2. Optimizer state: typically 4B - 12B per parameter. 3. Intermediate activations: the sum of all tensor sizes in the forward pass; for example, for a transformer neural network, this is roughly 16 x ... x <num_layers> x ... x ... = 100MB x ... For training workloads, determining the optimal batch size can be a little more tricky, due to two reasons: 1. *Higher memory footprint:* Training workloads have a higher memory footprint compared to inference, as they require saving more tensors aside from the model parameters, such as gradients, intermediate state and optimizer state. If the batch-size is increased too much, one can run out of device memory (indicated by an ‘Out of memory’ error, typically abbreviated as OOM). 2. *Arithmetic intensity estimation:* Arithmetic intensity is harder to estimate in training workloads compared to inference workloads, as the majority of the external memory accesses are due to reads/writes of intermediate activation state (rather than parameters), which requires lower-level familiarity with the model to estimate correctly. A good first-order approximation for the optimal batch-size in a training workload is the largest one that can fit in the device’s memory (i.e. won’t lead to an OOM error): ``batch-size(Training) = 0.6 x (<TP-Rank> x <PP-Rank> x <NeuronCore MemoryCapacity>) / (<#model-dense-params> x <model-state-bytes-per-parameter>)`` Note: TP-rank stands for Tensor-Parallelism rank, i.e. how many NeuronCores participate in a single Tensor-Parallelism group. Similarly, PP-rank stands for Pipeline-Parallelism rank, i.e. how many NeuronCores participate in a single Pipeline-Parallelism group.
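A minimal sketch of this training batch-size formula, using the BERT-Large figures from the worked example that follows (16 GB of NeuronCore memory, ~300M dense parameters, 4 bytes of model state per parameter — the truncation to an integer is our assumption, since the formula does not specify a rounding rule):

```python
# Sketch of the formula above:
#   batch = 0.6 x (TP x PP x core_memory) / (dense_params x state_bytes_per_param)
def training_batch_size(tp_rank: int, pp_rank: int, core_memory_bytes: float,
                        dense_params: float, state_bytes_per_param: float) -> int:
    """First-order approximation of the optimal per-NeuronCore training batch-size."""
    return int(0.6 * (tp_rank * pp_rank * core_memory_bytes)
               / (dense_params * state_bytes_per_param))

# BERT-Large Ph1: TP-rank = PP-rank = 1, 16GB memory, ~300M params, 4B state/param
print(training_batch_size(1, 1, 16e9, 300e6, 4))  # 8
```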
For example, for BERT-Large Ph1 training, with a model-state of 4B per parameter (2B weights, 2B gradients), and TP-rank = PP-rank = 1, the approximated optimal per-NeuronCore training batch-size would be: ``batch-size(Training/Trainium) = 0.6 x (1 x 1 x 16e+9) / (300e+6 x 4) = 8`` ```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/rounding-modes.rst.txt
``` .. _neuron-rounding-modes: Neuron Rounding Modes ===================== .. contents:: Table of contents :local: :depth: 1 .. _neuron-rounding-mode-rne: Round Nearest, ties to Even (RNE) --------------------------------- When the exact result of a floating point operation cannot be exactly represented as a floating point value, it must be rounded. The IEEE 754-2008 standard defines the default rounding mode to be ‘Round Nearest, ties to Even’ (RNE for short). Under this scheme, numbers are rounded to the nearest representable value, and in case of a ‘tie’ (i.e. the number is exactly between the two nearest representable values) numbers are rounded to the nearest even value. All NeuronCore generations support the RNE rounding scheme, which is the most commonly used rounding scheme for machine learning workloads. Below is an illustration of the RNE rounding scheme: .. image:: /images/rne1.png :width: 700 .. image:: /images/rne2.png :width: 700 .. image:: /images/rne3.png :width: 700 .. _neuron-rounding-mode-sr: Stochastic Rounding (SR) ------------------------ One downside of the RNE rounding scheme (and of the other rounding schemes described in the IEEE 754-2008 standard) is that when adding floating point values of significantly different magnitudes, rounding can squash small values and prevent them from accumulating over time. To improve this, starting from the second generation of the NeuronCore (NeuronCore-v2), customers can choose between the RNE rounding scheme described above and a second rounding scheme called ‘Stochastic Rounding’ (SR for short). Stochastic rounding prevents the precision loss described above by performing the rounding operations in a probabilistic manner, according to the relative distance from the two nearest representable values, as illustrated below: ..
image:: /images/sr.png :width: 700 By performing the rounding in a probabilistic manner, this scheme allows for small increments to accumulate over time, even when added to numbers of significantly higher magnitude, which leads to more precise results when performing large floating point computations (as done for machine learning). Quick Tests ----------- As an example, we examine the code-snippet below: :: import torch import torch_xla import torch_xla.core.xla_model as xm device = xm.xla_device() a = torch.tensor(1024.0).half().to(device) for i in range(2048) : a = (a + 0.5) xm.mark_step() print(a) This code shows that rounding can significantly impact the calculation’s precision over time. To use standard RNE rounding, use the environment variable ``NEURON_RT_STOCHASTIC_ROUNDING_EN=0``. To enable stochastic rounding, use the environment variable ``NEURON_RT_STOCHASTIC_ROUNDING_EN=1``. NOTE: Stochastic rounding mode is enabled by default in PyTorch-Neuron when XLA_USE_BF16=1. The first test continues to show 1024 due to RNE rounding after each addition, and the second test shows result that is mostly in line with expectation. :: $ NEURON_RT_STOCHASTIC_ROUNDING_EN=0 python3 rounding_mode_test.py tensor(1024., device='xla:1', dtype=torch.float16) $ NEURON_RT_STOCHASTIC_ROUNDING_EN=1 python3 rounding_mode_test.py tensor(2056., device='xla:1', dtype=torch.float16) ```
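The effect of the two rounding modes can also be reproduced without Neuron hardware by emulating stochastic rounding in software. The NumPy sketch below is an illustration of the scheme, not the hardware implementation: it repeats the 1024 + 0.5 × 2048 test in float16, rounding each partial sum to one of its two float16 neighbours with probability proportional to distance.

```python
import numpy as np

rng = np.random.default_rng(0)

def round_stochastic(exact):
    """Round a float32 value to float16 stochastically, in proportion to
    its distance from the two neighbouring float16 values."""
    near = np.float16(exact)                    # RNE-rounded neighbour
    if np.float32(near) == exact:
        return near                             # exactly representable
    direction = np.float16(np.inf) if np.float32(near) < exact \
        else np.float16(-np.inf)
    far = np.nextafter(near, direction)         # the other neighbour
    gap = abs(np.float32(far) - np.float32(near))
    p_far = abs(exact - np.float32(near)) / gap
    return far if rng.random() < p_far else near

a_rne = np.float16(1024.0)
a_sr = np.float16(1024.0)
for _ in range(2048):
    # float16 spacing at 1024 is 1.0, so 1024.5 is a tie and RNE
    # rounds it back to the even value 1024 every single time.
    a_rne = a_rne + np.float16(0.5)
    a_sr = round_stochastic(np.float32(a_sr) + np.float32(0.5))

print(a_rne)   # stays at 1024.0 -- the increments are squashed
print(a_sr)    # close to the exact sum of 2048
```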
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-hardware/inferentia2.rst.txt
```
.. _inferentia2-arch:

Inferentia2 Architecture
------------------------

At the heart of the Inf2 instance are up to 12 Inferentia2 devices (each Inferentia2 device includes 2 :ref:`NeuronCore-v2 <neuroncores-v2-arch>` cores). Inferentia2 is the second-generation purpose-built machine learning inference accelerator from AWS. The Inferentia2 device architecture is depicted below:

.. image:: /images/inferentia2.jpg

Each Inferentia2 device consists of:

- Compute:
   * 2x :ref:`NeuronCore-v2 <neuroncores-v2-arch>` cores, delivering 380 INT8 TOPS, 190 FP16/BF16/cFP8/TF32 TFLOPS, and 47.5 FP32 TFLOPS.
- Device Memory:
   * 32 GiB of HBM device memory (for storing model state), with 820 GiB/sec of bandwidth.
- Data movement:
   * 1 TB/sec of DMA bandwidth, with inline memory compression/decompression.
- NeuronLink:
   * NeuronLink-v2 device-to-device interconnect enables high-performance collective compute for co-optimization of latency and throughput.
- Programmability:
   * Inferentia2 supports dynamic shapes and control flow via ISA extensions of NeuronCore-v2, and :ref:`custom operators <feature-custom-c++-operators>` via the deeply embedded GPSIMD engines.

A more detailed description of all the hardware engines can be found at :ref:`NeuronCore-v2 <neuroncores-v2-arch>`.
```
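A quick back-of-the-envelope calculation puts the per-device figures above in instance-level terms. This sketch assumes the largest Inf2 configuration (all 12 Inferentia2 devices) and uses only the numbers quoted in the list; it is an aggregation exercise, not an official specification.

```python
# Aggregate capability of an Inf2 instance, assuming the largest
# configuration (12 Inferentia2 devices) and the per-device figures
# quoted above.

devices = 12
neuron_cores = devices * 2        # 2x NeuronCore-v2 per device
bf16_tflops = devices * 190       # FP16/BF16/cFP8/TF32 TFLOPS per device
hbm_gib = devices * 32            # 32 GiB HBM per device
hbm_bw_gibs = devices * 820       # 820 GiB/sec HBM bandwidth per device

print(neuron_cores, bf16_tflops, hbm_gib, hbm_bw_gibs)
# 24 cores, 2280 TFLOPS, 384 GiB, 9840 GiB/sec
```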
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/glossary.rst.txt
```
.. _neuron_hw_glossary:

Neuron Glossary
===============

.. contents:: Table of contents
   :local:
   :depth: 2

Terms
-----

Neuron Devices (Accelerated Machine Learning chips)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Term
     - Description
   * - .. glossary:: Inferentia
     - AWS first-generation accelerated machine learning chip, supporting inference only
   * - .. glossary:: Trainium
     - AWS second-generation accelerated machine learning chip, supporting training and inference
   * - .. glossary:: Neuron Device
     - Accelerated machine learning chip (e.g. Inferentia or Trainium)

Neuron powered Instances
^^^^^^^^^^^^^^^^^^^^^^^^

.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Term
     - Description
   * - .. glossary:: Inf1
     - Inferentia-powered accelerated compute EC2 instance
   * - .. glossary:: Trn1
     - Trainium-powered accelerated compute EC2 instance

NeuronCore terms
^^^^^^^^^^^^^^^^

.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Term
     - Description
   * - .. glossary:: NeuronCore
     - The machine learning compute cores within Inferentia/Trainium
   * - .. glossary:: NeuronCore-v1
     - NeuronCore within Inferentia
   * - .. glossary:: NeuronCore-v2
     - NeuronCore within Trainium
   * - .. glossary:: Tensor Engine
     - 2D systolic array (within the NeuronCore), used for matrix computations
   * - .. glossary:: Scalar Engine
     - A scalar engine within each NeuronCore, which can accelerate element-wise operations (e.g. GELU, ReLU, reciprocal, etc.)
   * - .. glossary:: Vector Engine
     - A vector engine within each NeuronCore, which can accelerate spatial operations (e.g. layerNorm, TopK, pooling, etc.)
   * - .. glossary:: GPSIMD Engine
     - Embedded general-purpose SIMD cores, within each NeuronCore, to accelerate custom operators
   * - .. glossary:: Sync Engine
     - The SP engine, which is integrated inside the NeuronCore. Used for synchronization and DMA triggering.
   * - .. glossary:: Collective Communication Engine
     - Dedicated engine for collective communication; allows overlapping computation and communication
   * - .. glossary:: NeuronLink
     - Interconnect between NeuronCores
   * - .. glossary:: NeuronLink-v1
     - Interconnect between NeuronCores in the Inferentia device
   * - .. glossary:: NeuronLink-v2
     - Interconnect between NeuronCores in the Trainium device

Abbreviations
-------------

.. list-table::
   :widths: auto
   :header-rows: 1
   :align: left

   * - Abbreviation
     - Description
   * - .. glossary:: NC
     - Neuron Core
   * - .. glossary:: NeuronCore
     - Neuron Core
   * - .. glossary:: ND
     - Neuron Device
   * - .. glossary:: NeuronDevice
     - Neuron Device
   * - .. glossary:: TensEng
     - Tensor Engine
   * - .. glossary:: ScalEng
     - Scalar Engine
   * - .. glossary:: VecEng
     - Vector Engine
   * - .. glossary:: SyncEng
     - Sync Engine
   * - .. glossary:: CCE
     - Collective Communication Engine
   * - .. glossary:: FP32
     - Float32
   * - .. glossary:: TF32
     - TensorFloat32
   * - .. glossary:: FP16
     - Float16
   * - .. glossary:: BF16
     - Bfloat16
   * - .. glossary:: cFP8
     - Configurable Float8
   * - .. glossary:: RNE
     - Round Nearest Even
   * - .. glossary:: SR
     - Stochastic Rounding
   * - .. glossary:: CustomOps
     - Custom Operators
   * - .. glossary:: RT
     - Neuron Runtime
   * - .. glossary:: DP
     - Data Parallel
   * - .. glossary:: DPr
     - Data Parallel degree
   * - .. glossary:: TP
     - Tensor Parallel
   * - .. glossary:: TPr
     - Tensor Parallel degree
   * - .. glossary:: PP
     - Pipeline Parallel
   * - .. glossary:: PPr
     - Pipeline Parallel degree
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-runtime/misc-runtime.rst.txt
```
Misc (Neuron Runtime)
=====================

.. toctree::
   :maxdepth: 1

   Troubleshooting on Inf1 and Trn1 </neuron-runtime/nrt-troubleshoot>
   FAQ </neuron-runtime/faq>
   /release-notes/runtime/aws-neuronx-runtime-lib/index
   /release-notes/runtime/aws-neuronx-dkms/index
   /release-notes/runtime/aws-neuronx-collectives/index
```
PyTorch Neuron Tutorials — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/torch/torch-neuron/tutorials/index.html#pytorch-nlp
# PyTorch Neuron Tutorials — AWS Neuron Documentation

## Contents

- [Before running a tutorial](#before-running-a-tutorial)
- [Computer Vision](#computer-vision)
- [Natural Language Processing](#natural-language-processing)
- [Utilizing Neuron Capabilities](#utilizing-neuron-capabilities)

_This document is relevant for_: `Inf1`

## Before running a tutorial

You will run the tutorials on an inf1.6xlarge instance running the Deep Learning AMI (DLAMI), to enable both compilation and deployment (inference) on the same instance. In a production environment, we encourage you to try different instance sizes to optimize for your specific deployment needs.

Follow the instructions at [PyTorch Tutorial Setup](pytorch-tutorial-setup.html#pytorch-tutorial-setup) before running a PyTorch tutorial on Inferentia. We recommend new users start with the ResNet-50 tutorial.

## Utilizing Neuron Capabilities

- BERT TorchServe tutorial [\[html\]](tutorial-torchserve.html#pytorch-tutorials-torchserve)
- NeuronCore Pipeline tutorial [\[html\]](../../../../src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb)

_This document is relevant for_: `Inf1`
</span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../training-torch-neuronx.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox"> <label for="toctree-checkbox-14"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox"> <label for="toctree-checkbox-15"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/tutorials/training/bert.html"> Hugging Face BERT Pretraining Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/tutorials/training/mlp.html"> Multi-Layer Perceptron Training Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/tutorials/training/finetune_hftrainer.html"> PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/tutorials/training/finetune_t5.html"> Fine-tune T5 model on Trn1 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/tutorials/training/zero1_gpt2.html"> ZeRO-1 Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/tutorials/training/analyze_for_training.html"> Analyze for Training Tutorial </a> 
</li> <li class="toctree-l4"> <a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch-neuronx/additional-examples-training.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox"> <label for="toctree-checkbox-16"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron"> AWS Neuron Reference for Nemo Megatron GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples"> AWS Neuron Samples for EKS </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples"> AWS Neuron Samples for AWS ParallelCluster </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/index.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox"> <label for="toctree-checkbox-17"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html"> PyTorch Neuron neuron_parallel_compile CLI ( <code 
class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html"> PyTorch Neuron Environment Variables ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Profiling API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch-neuronx/programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span 
class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch-neuronx/misc-training.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../tensorflow/index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../tensorflow/tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; 
Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../../tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a 
class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../../tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference 
internal" href="../../../tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" 
type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input 
class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" 
href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a 
class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" 
navigation"> <span class="headerbtn__icon-container"> <i class="fas fa-bars"></i> </span> </label> </div> <div class="header-article__right"> <button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode"> <span class="headerbtn__icon-container"> <i class="fas fa-expand"></i> </span> </button> <div class="menu-dropdown menu-dropdown-repository-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories"> <i class="fab fa-github"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository"> <span class="headerbtn__icon-container"> <i class="fab fa-github"></i> </span> <span class="headerbtn__text-container">repository</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/torch/torch-neuron/tutorials/index.html&amp;body=Your%20issue%20content%20here." 
class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue"> <span class="headerbtn__icon-container"> <i class="fas fa-lightbulb"></i> </span> <span class="headerbtn__text-container">open issue</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/torch/torch-neuron/tutorials/index.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page"> <span class="headerbtn__icon-container"> <i class="fas fa-pencil-alt"></i> </span> <span class="headerbtn__text-container">suggest edit</span> </a> </li> </ul> </div> </div> <div class="menu-dropdown menu-dropdown-download-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Download this page"> <i class="fas fa-download"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="../../../../_sources/frameworks/torch/torch-neuron/tutorials/index.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file"> <span class="headerbtn__icon-container"> <i class="fas fa-file"></i> </span> <span class="headerbtn__text-container">.rst</span> </a> </li> <li> <button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF"> <span class="headerbtn__icon-container"> <i class="fas fa-file-pdf"></i> </span> <span class="headerbtn__text-container">.pdf</span> </button> </li> </ul> </div> </div> <label for="__page-toc" class="headerbtn headerbtn-page-toc"> <span class="headerbtn__icon-container"> <i class="fas fa-list"></i> </span> </label> </div> </div> <!-- Table of contents --> <div class="col-md-3 bd-toc show noprint"> <div class="tocsection onthispage pt-5 pb-3"> <i class="fas fa-list"></i> Contents </div> <nav id="bd-toc-nav" aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> 
<a class="reference internal nav-link" href="#before-running-a-tutorial"> Before running a tutorial </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#computer-vision"> Computer Vision </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#natural-language-processing"> Natural Language Processing </a> </li> <li class="toc-h2 nav-item toc-entry active"> <a class="reference internal nav-link active" href="#utilizing-neuron-capabilities"> Utilizing Neuron Capabilities </a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>PyTorch Neuron Tutorials</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#before-running-a-tutorial"> Before running a tutorial </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#computer-vision"> Computer Vision </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#natural-language-processing"> Natural Language Processing </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#utilizing-neuron-capabilities"> Utilizing Neuron Capabilities </a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="pytorch-neuron-tutorials"> <span id="pytorch-tutorials"></span><h1>PyTorch Neuron Tutorials<a class="headerlink" href="#pytorch-neuron-tutorials" title="Permalink 
to this headline">#</a></h1> <div class="section" id="before-running-a-tutorial"> <h2>Before running a tutorial<a class="headerlink" href="#before-running-a-tutorial" title="Permalink to this headline">#</a></h2> <p>You will run the tutorials on an inf1.6xlarge instance running Deep Learning AMI (DLAMI) to enable both compilation and deployment (inference) on the same instance. In a production environment we encourage you to try different instance sizes to optimize to your specific deployment needs.</p> <p>Follow instructions at <a class="reference internal" href="pytorch-tutorial-setup.html#pytorch-tutorial-setup"><span class="std std-ref">PyTorch Tutorial Setup</span></a> before running a PyTorch tutorial on Inferentia . We recommend new users start with the ResNet-50 tutorial.</p> <div class="toctree-wrapper compound"> </div> </div> <div class="section" id="computer-vision"> <span id="pytorch-computervision"></span><h2>Computer Vision<a class="headerlink" href="#computer-vision" title="Permalink to this headline">#</a></h2> <ul class="simple"> <li><p>ResNet-50 tutorial <a class="reference internal" href="../../../../src/examples/pytorch/resnet50.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/resnet50.ipynb">[notebook]</a></p></li> <li><p>PyTorch YOLOv4 tutorial <a class="reference internal" href="../../../../src/examples/pytorch/yolo_v4.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/yolo_v4.ipynb">[notebook]</a></p></li> </ul> <div class="toctree-wrapper compound"> </div> </div> <div class="section" id="natural-language-processing"> <span id="pytorch-nlp"></span><h2>Natural Language Processing<a class="headerlink" href="#natural-language-processing" title="Permalink to this headline">#</a></h2> <ul class="simple"> 
<li><p>HuggingFace pretrained BERT tutorial <a class="reference internal" href="../../../../src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb">[notebook]</a></p></li> <li><p>Bring your own HuggingFace pretrained BERT container to Sagemaker Tutorial <a class="reference internal" href="../../../../src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb">[notebook]</a></p></li> <li><p>LibTorch C++ tutorial <a class="reference internal" href="tutorial-libtorch.html#pytorch-tutorials-libtorch"><span class="std std-ref">[html]</span></a></p></li> <li><p>TorchServe tutorial <a class="reference internal" href="tutorial-torchserve.html#pytorch-tutorials-torchserve"><span class="std std-ref">[html]</span></a></p></li> <li><p>HuggingFace MarianMT tutorial <a class="reference internal" href="../../../../src/examples/pytorch/transformers-marianmt.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/transformers-marianmt.ipynb">[notebook]</a></p></li> </ul> <div class="toctree-wrapper compound"> </div> </div> <div class="section" id="utilizing-neuron-capabilities"> <span id="pytorch-utilize-neuron"></span><h2>Utilizing Neuron Capabilities<a class="headerlink" href="#utilizing-neuron-capabilities" title="Permalink to this headline">#</a></h2> <ul class="simple"> <li><p>BERT TorchServe tutorial <a class="reference internal" href="tutorial-torchserve.html#pytorch-tutorials-torchserve"><span class="std 
std-ref">[html]</span></a></p></li> <li><p>NeuronCore Pipeline tutorial <a class="reference internal" href="../../../../src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb">[notebook]</a></p></li> </ul> <div class="toctree-wrapper compound"> </div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> </div> </div> </main> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </body></html>
2023-09-29T20:55:16.507Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/custom-c++-operators.rst.txt
```
.. _feature-custom-c++-operators:

Neuron Custom C++ Operators
===========================

.. include:: /neuron-customops/customops-intro.txt

For more details see :ref:`neuron_c++customops`.
```
2023-09-29T20:55:16.598Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/control-flow.rst.txt
```
.. _feature-control-flow:

Neuron Control Flow
===================

.. note:: This feature is supported in :ref:`neuroncores-v2-arch`, the NeuronCore that exists in :ref:`Trainium <trainium-arch>`; however, it is not yet implemented by the Neuron Compiler. Stay tuned and follow the :ref:`Neuron Roadmap <neuron_roadmap>`.
```
2023-09-29T20:55:16.665Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/dynamic-shapes.rst.txt
```
.. _dynamic-shapes:

Neuron Dynamic Shapes
=====================

.. note:: This feature is supported in :ref:`neuroncores-v2-arch`, the NeuronCore that exists in :ref:`Trainium <trainium-arch>`; however, it is not yet implemented by the Neuron Compiler. Stay tuned and follow the :ref:`Neuron Roadmap <neuron_roadmap>`.
```
2023-09-29T20:55:16.674Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/neuron1x/introducing-libnrt.rst.txt
```
.. _introduce-libnrt:

Introducing Neuron Runtime 2.x (libnrt.so)
==========================================

.. contents:: Table of contents
   :local:
   :depth: 2

What are we changing?
---------------------

Starting with the *Neuron 1.16.0* release, *Neuron Runtime 1.x* (``neuron-rtd``) enters maintenance mode and is replaced by *Neuron Runtime 2.x*, a shared library named ``libnrt.so``. For more information on Runtime 1.x see :ref:`maintenance_rtd`.

Upgrading to ``libnrt.so`` simplifies the Neuron installation and upgrade process, introduces new capabilities for allocating NeuronCores to applications, streamlines container creation, and deprecates tools that are no longer needed.

This document describes the capabilities of *Neuron Runtime 2.x* in detail, provides the information needed for a successful installation and upgrade, and explains how to migrate Neuron applications from *Neuron Runtime 1.x* (included in releases before *Neuron 1.16.0*) to *Neuron Runtime 2.x* (included in release *Neuron 1.16.0* or newer).

.. _introduce-libnrt-why:

Why are we making this change?
------------------------------

Before *Neuron 1.16.0*, the Neuron Runtime was delivered as a daemon (``neuron-rtd``) that communicated with Neuron framework extensions through a ``gRPC`` interface. ``neuron-rtd`` was packaged as an ``rpm`` or ``debian`` package (``aws-neuron-runtime``) and required a separate installation step.

Starting with *Neuron 1.16.0*, *Neuron Runtime 2.x* is delivered as a shared library (``libnrt.so``) that is directly linked to the Neuron framework extensions. ``libnrt.so`` is packaged and installed as part of the Neuron framework extensions (e.g. TensorFlow Neuron, PyTorch Neuron or MXNet Neuron) and does not require a separate installation step. Installing the Neuron Runtime as part of the Neuron framework extensions simplifies installation and improves the user experience.
In addition, since ``libnrt.so`` is directly linked to the Neuron framework extensions, it enables faster communication between the Neuron Runtime and the Neuron frameworks by eliminating the ``gRPC`` interface overhead.

For more information please see :ref:`introduce-libnrt-how-sdk` and :ref:`neuron-migrating-apps-neuron-to-libnrt`.

.. _libnrt-neuron-cmponents:
.. _introduce-libnrt-how-sdk:

How will this change affect the Neuron SDK?
-------------------------------------------

Neuron Driver
^^^^^^^^^^^^^

You need to use the latest Neuron Driver. For a successful installation of, or upgrade to, *Neuron 1.16.0* or newer, you must install or upgrade to Neuron Driver (``aws-neuron-dkms``) *version 2.1.5.0* or newer. Neuron applications using *Neuron 1.16.0* will fail if they do not detect *Neuron Driver version 2.1.5.0* or newer. For installation and upgrade instructions see :ref:`install-guide-index`.

.. include:: ./important-neuronx-dkms.txt

To see details of Neuron component versions please see :ref:`neuron-release-content`.

.. important::

   For a successful installation or update to Neuron 1.16.0 and newer from previous releases:

   * Stop the Neuron Runtime 1.x daemon (``neuron-rtd``) by running: ``sudo systemctl stop neuron-rtd``
   * Uninstall ``neuron-rtd`` by running: ``sudo apt remove aws-neuron-runtime`` or ``sudo yum remove aws-neuron-runtime``
   * Install or upgrade to the latest Neuron Driver (``aws-neuron-dkms``) by following the :ref:`install-guide-index` instructions.
   * Starting with Neuron version 2.3, the ``aws-neuron-dkms`` package name is changed to ``aws-neuronx-dkms``, see :ref:`neuron2-intro`.

Neuron Runtime
^^^^^^^^^^^^^^

* Installation

  Starting from *Neuron 1.16.0*, Neuron releases no longer include the ``aws-neuron-runtime`` packages, and the Neuron Runtime is part of the Neuron framework extension of choice (TensorFlow Neuron, PyTorch Neuron or MXNet Neuron). Installing any Neuron framework package will install the Neuron Runtime library (``libnrt.so``).
  For installation and upgrade instructions see :ref:`install-guide-index`.

* Configuring the *Neuron Runtime*

  Before *Neuron 1.16.0*, configuring *Neuron Runtime 1.x* was performed through configuration files (e.g. /opt/aws/neuron/config/neuron-rtd.config). Starting from *Neuron 1.16.0*, configuring *Neuron Runtime 2.x* is done through environment variables; see :ref:`nrt-configuration` for details.

* Starting and stopping the *Neuron Runtime*

  Before the introduction of ``libnrt.so``, ``neuron-rtd`` ran as a daemon that communicated through a ``gRPC`` interface. Whenever ``neuron-rtd`` took ownership of a Neuron device, it continued owning that device until it was stopped. This created the need to stop ``neuron-rtd`` in certain cases. With the introduction of ``libnrt.so``, stopping and starting the *Neuron Runtime* is no longer needed because it runs inside the context of the application. With *Neuron Runtime 2.x*, starting and stopping a Neuron application causes ``libnrt.so`` to automatically claim or release ownership of the required Neuron devices.

* NeuronCore Groups (NCG) deprecation

  Before the introduction of *Neuron Runtime 2.x*, a NeuronCore Group (NCG) was used to define an execution group of one or more NeuronCores where models could be loaded and executed. It also provided separation between processes. With the introduction of *Neuron Runtime 2.x*, the strict separation of NeuronCores into groups is no longer needed and NeuronCore Groups (NCG) are deprecated. See :ref:`eol-ncg` for more information.

* Running multiple *Neuron Runtimes*

  Before the introduction of ``libnrt.so``, you needed to run multiple ``neuron-rtd`` daemons and allocate Neuron devices to each ``neuron-rtd`` using configuration files. With ``libnrt.so``, you no longer need to run multiple daemons to allocate Neuron devices to specific Neuron applications.
  With ``libnrt.so``, allocation of NeuronCores (each Neuron device includes multiple NeuronCores) to a particular application is done using the ``NEURON_RT_VISIBLE_CORES`` or ``NEURON_RT_NUM_CORES`` environment variables, for example:

  .. code::

     NEURON_RT_VISIBLE_CORES=0-3 myapp1.py
     NEURON_RT_VISIBLE_CORES=4-11 myapp2.py

  Or

  .. code::

     NEURON_RT_NUM_CORES=3 myapp1.py &
     NEURON_RT_NUM_CORES=4 myapp2.py &

  See :ref:`nrt-configuration` for details.

* Logging

  Similar to *Neuron Runtime 1.x*, *Neuron Runtime 2.x* logs to syslog (verbose logging). To make debugging easier, *Neuron Runtime 2.x* also logs to the console (error-only logging). Refer to :ref:`nrt-configuration` to see how to increase or decrease logging verbosity.

* Multi-process access to NeuronCores

  With the introduction of ``libnrt.so``, it is no longer possible to load models on the same NeuronCore from multiple processes; access to the same NeuronCore must be done from a single process. Instead, you can load models on the same NeuronCore using multiple threads of the same process.

  .. note:: For optimal performance of multi-model execution, each NeuronCore should execute a single model.

* Neuron Runtime architecture

  *Neuron Runtime 2.x* is delivered as a shared library (``libnrt.so``) and is directly linked to the Neuron framework extensions. ``libnrt.so`` is packaged and installed as part of the Neuron framework extensions (e.g. TensorFlow Neuron, PyTorch Neuron or MXNet Neuron) and does not require a separate installation step. Installing the Neuron Runtime as part of the Neuron framework extensions simplifies installation and improves the user experience. In addition, since ``libnrt.so`` is directly linked to the Neuron framework extensions, it enables faster communication between the Neuron Runtime and the Neuron frameworks by eliminating the ``gRPC`` interface overhead.
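The single-process, multi-threaded access pattern described above can be sketched as follows. This is an illustrative sketch only, not the Neuron API: ``load_model`` and ``infer`` are hypothetical stand-ins for the framework calls that load and execute a compiled model (e.g. a ``torch.jit.load`` of a compiled module in PyTorch Neuron).

.. code:: python

   import threading
   import queue

   # Hypothetical stand-ins for framework model load/execute calls.
   def load_model(core_id):
       return {"core": core_id}      # placeholder for a loaded model handle

   def infer(model, x):
       return x * 2                  # placeholder for model execution

   # With libnrt.so, all threads below belong to ONE process, so they may
   # share a NeuronCore; separate processes may not share one.
   model = load_model(core_id=0)
   results = queue.Queue()

   def worker(x):
       results.put(infer(model, x))

   threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
   for t in threads:
       t.start()
   for t in threads:
       t.join()

   print(sorted(results.queue))      # prints [0, 2, 4, 6]

The point of the sketch is only the threading structure: one process owns the NeuronCore, and concurrent requests are issued as threads of that process rather than as separate processes.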
Neuron framework extensions
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Starting from *Neuron 1.16.0*, the Neuron framework extensions (TensorFlow Neuron, PyTorch Neuron or MXNet Neuron) are packaged together with ``libnrt.so``. Neuron Driver (``aws-neuron-dkms``) version 2.1.5.0 or newer is required for proper operation. The ``neuron-rtd`` daemon that was installed in previous releases no longer works starting with Neuron 1.16.0. To see details of Neuron component versions please see :ref:`neuron-release-content`.

.. important:: Starting with Neuron version 2.3, the ``aws-neuron-dkms`` package name is changed to ``aws-neuronx-dkms``, see :ref:`neuron2-intro`.

TensorFlow model server
^^^^^^^^^^^^^^^^^^^^^^^

Starting from *Neuron 1.16.0*, the TensorFlow Neuron model server is packaged together with ``libnrt.so`` and expects ``aws-neuron-dkms`` *version 2.1.5.0* or newer for proper operation.

.. note:: The TensorFlow Neuron model server included in *Neuron 1.16.0* should run from the directory in which it was installed, as it will not run properly if copied to a different location due to its dependency on ``libnrt.so``.

.. include:: ./important-neuronx-dkms.txt

Neuron tools
^^^^^^^^^^^^

* ``neuron-cli`` - Starting from *Neuron 1.16.0*, ``neuron-cli`` enters maintenance mode, see :ref:`maintenance_neuron-cli` for more information.
* ``neuron-top`` - Starting from *Neuron 1.16.0*, ``neuron-top`` has a new user interface, see :ref:`neuron-top-ug` for more information.
* ``neuron-monitor`` - ``neuron-monitor`` was updated to support Neuron Runtime 2.x (``libnrt.so``).

  * See :ref:`neuron-monitor-ug` for an updated user guide of ``neuron-monitor``.
  * See :ref:`neuron-monitor-upg` for a list of changes between *Neuron Monitor 2.x* and *Neuron Monitor 1.0*.
  * See :ref:`neuron-monitor-bwc` for how you can use *Neuron Monitor 2.x* with *Neuron Runtime 1.x* (``neuron-rtd``).

.. _introduce-libnrt-how-user:

How will this change affect me?
-------------------------------

Neuron installation and upgrade
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As explained in ":ref:`libnrt-neuron-cmponents`", starting from *Neuron 1.16.0*, ``libnrt.so`` requires the latest Neuron Driver (``aws-neuron-dkms``); in addition, there is no longer a need to install ``aws-neuron-runtime``.

To install Neuron or upgrade to the latest Neuron version, please follow the installation and upgrade instructions below:

* PyTorch Neuron

  * :ref:`install-neuron-pytorch`.
  * :ref:`update-neuron-pytorch`.

* TensorFlow Neuron

  * :ref:`install-neuron-tensorflow`.
  * :ref:`update-neuron-tensorflow`.

* MXNet Neuron

  * :ref:`install-neuron-mxnet`.
  * :ref:`update-neuron-mxnet`.

.. include:: ./important-neuronx-dkms.txt

.. _neuron-migrating-apps-neuron-to-libnrt:

Migrate your application to Neuron Runtime 2.x (libnrt.so)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For a successful migration of your application to *Neuron 1.16.0* or newer from previous releases, please make sure you perform the following:

#. Prerequisite

   Please read the ":ref:`libnrt-neuron-cmponents`" section.

#. Make sure you are not using *Neuron Runtime 1.x* (``aws-neuron-runtime``)

   * Remove any code that installs ``aws-neuron-runtime`` from any CI/CD scripts.
   * Stop ``neuron-rtd`` by running: ``sudo systemctl stop neuron-rtd``
   * Uninstall ``neuron-rtd`` by running: ``sudo apt remove aws-neuron-runtime`` or ``sudo yum remove aws-neuron-runtime``

#. Upgrade to your Neuron framework of choice:

   * :ref:`update-neuron-pytorch`.
   * :ref:`update-neuron-tensorflow`.
   * :ref:`update-neuron-mxnet`.

#. If you have code that starts and/or stops ``neuron-rtd``

   Remove any code that starts or stops ``neuron-rtd`` from any CI/CD scripts.

#. Application running multiple ``neuron-rtd``

   If your application runs multiple processes and requires running multiple ``neuron-rtd`` daemons:

   * Remove the code that runs multiple ``neuron-rtd`` daemons.
   * Instead of allocating Neuron devices to ``neuron-rtd`` through configuration files, use the ``NEURON_RT_VISIBLE_CORES`` or ``NEURON_RT_NUM_CORES`` environment variables to allocate NeuronCores. See :ref:`nrt-configuration` for details. If your application uses ``NEURONCORE_GROUP_SIZES``, see the next item.

   .. note:: The ``NEURON_RT_VISIBLE_CORES`` and ``NEURON_RT_NUM_CORES`` environment variables enable you to allocate NeuronCores to an application. Allocating NeuronCores improves application granularity because each Neuron device includes multiple NeuronCores.

#. Application running multiple processes using ``NEURONCORE_GROUP_SIZES``

   * Please consider using the ``NEURON_RT_VISIBLE_CORES`` or ``NEURON_RT_NUM_CORES`` environment variables instead of ``NEURONCORE_GROUP_SIZES``, as it is being deprecated; see :ref:`nrt-configuration` for details.
   * If you are using TensorFlow Neuron (``tensorflow-neuron (TF2.x)``) and you are replacing ``NEURONCORE_GROUP_SIZES=AxB``, which enables auto multicore replication, please see the new API :ref:`tensorflow-ref-auto-replication-python-api` for usage and documentation.
   * Your application behavior will remain the same as before if you do not set ``NEURON_RT_VISIBLE_CORES`` and do not set ``NEURON_RT_NUM_CORES``.
   * If you are considering migrating to ``NEURON_RT_VISIBLE_CORES`` or ``NEURON_RT_NUM_CORES``, please use the following guidelines:

     * ``NEURON_RT_VISIBLE_CORES`` takes precedence over ``NEURON_RT_NUM_CORES``.
     * If you are migrating to ``NEURON_RT_VISIBLE_CORES``:

       * For TensorFlow or PyTorch applications, make sure that ``NEURONCORE_GROUP_SIZES`` is unset, or that ``NEURONCORE_GROUP_SIZES`` allocates the same number of NeuronCores as, or fewer than, ``NEURON_RT_VISIBLE_CORES``.
       * For MXNet applications, setting the ``NEURONCORE_GROUP_SIZES`` and ``NEURON_RT_VISIBLE_CORES`` environment variables at the same time is not supported. Please use ``NEURON_RT_VISIBLE_CORES`` only.
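To make the precedence and range rules above concrete, here is an illustrative sketch of how the two environment variables could be interpreted. ``resolve_cores`` is a hypothetical helper written for this document, not part of the Neuron SDK; the real runtime applies these rules internally.

.. code:: python

   def resolve_cores(env):
       """Illustrative precedence logic: NEURON_RT_VISIBLE_CORES wins over
       NEURON_RT_NUM_CORES; a range such as '4-11' names explicit cores."""
       visible = env.get("NEURON_RT_VISIBLE_CORES")
       if visible is not None:
           start, _, end = visible.partition("-")
           return list(range(int(start), int(end or start) + 1))
       num = env.get("NEURON_RT_NUM_CORES")
       if num is not None:
           return int(num)           # a core count; the runtime picks the cores
       return None                   # neither set: default runtime behavior

   print(resolve_cores({"NEURON_RT_VISIBLE_CORES": "4-11"}))
   # prints [4, 5, 6, 7, 8, 9, 10, 11]
   print(resolve_cores({"NEURON_RT_NUM_CORES": "3",
                        "NEURON_RT_VISIBLE_CORES": "0-1"}))
   # prints [0, 1] -- NEURON_RT_VISIBLE_CORES takes precedence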
       * See :ref:`nrt-configuration` for more details of how to use ``NEURON_RT_VISIBLE_CORES``.

     * If you are migrating to ``NEURON_RT_NUM_CORES``:

       * Make sure that ``NEURONCORE_GROUP_SIZES`` is unset.
       * See :ref:`nrt-configuration` for more details of how to use ``NEURON_RT_NUM_CORES``.

#. Application running multiple processes accessing the same NeuronCore

   If your application accesses the same NeuronCore from multiple processes, this is no longer possible with ``libnrt.so``. Instead, please modify your application to access the same NeuronCore from multiple threads.

   .. note:: For optimal performance of multi-model execution, each NeuronCore should execute a single model.

#. Neuron Tools

   * If you are using Neuron Monitor, see :ref:`neuron-monitor-upg` for details.
   * If you are using ``neuron-cli``, please remove any call to ``neuron-cli``. For more information, see :ref:`maintenance_neuron-cli`.

#. Containers

   If your application is running within a container, and it previously executed ``neuron-rtd`` within the container, you need to re-build your container so it will not include or install ``aws-neuron-runtime``. See :ref:`neuron-containers` and :ref:`containers-migration-to-runtime2` for details.

Troubleshooting
---------------

Application fails to start
^^^^^^^^^^^^^^^^^^^^^^^^^^

Description
~~~~~~~~~~~

Starting from the *Neuron 1.16.0* release, the Neuron Runtime (``libnrt.so``) requires *Neuron Driver 2.0* or greater (``aws-neuron-dkms``). The Neuron Runtime requires the Neuron Driver (``aws-neuron-dkms`` package) to access Neuron devices. If ``aws-neuron-dkms`` is not installed, the application will fail with an error message on the console and in syslog that looks like the following:

.. code::

   NRT:nrt_init Unable to determine Neuron Driver version. Please check aws-neuron-dkms package is installed.

If an old ``aws-neuron-dkms`` is installed, the application will fail with an error message on the console and in syslog that looks like the following:

.. code::

   NRT:nrt_init This runtime requires Neuron Driver version 2.0 or greater. Please upgrade aws-neuron-dkms package.

Solution
~~~~~~~~

Please follow the installation steps in :ref:`install-guide-index` to install ``aws-neuron-dkms``.

.. include:: ./important-neuronx-dkms.txt

Application fails to start although I installed the latest ``aws-neuron-dkms``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Description
~~~~~~~~~~~

Starting from the *Neuron 1.16.0* release, the Neuron Runtime (``libnrt.so``) requires *Neuron Driver 2.0* or greater (``aws-neuron-dkms``). If an old ``aws-neuron-dkms`` is installed, the application will fail. You may install ``aws-neuron-dkms`` and still face application failure; this may happen because the ``aws-neuron-dkms`` installation failed as a result of a ``neuron-rtd`` daemon that is still running.

Solution
~~~~~~~~

* Stop ``neuron-rtd`` by running: ``sudo systemctl stop neuron-rtd``
* Uninstall ``neuron-rtd`` by running: ``sudo apt remove aws-neuron-runtime`` or ``sudo yum remove aws-neuron-runtime``
* Install ``aws-neuron-dkms`` by following the steps in :ref:`install-guide-index`

.. include:: ./important-neuronx-dkms.txt

Application unexpected behavior when upgrading to release *Neuron 1.16.0* or newer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Description
~~~~~~~~~~~

When upgrading to release *Neuron 1.16.0* or newer from previous releases, the OS may include two different versions of the *Neuron Runtime*: the ``libnrt.so`` shared library and the ``neuron-rtd`` daemon. This can happen if the user didn't stop the ``neuron-rtd`` daemon or didn't uninstall the existing Neuron version before the upgrade. In this case the user application may behave unexpectedly.
Solution
~~~~~~~~

If the OS includes two different versions of *Neuron Runtime*, the ``libnrt.so`` shared library and the ``neuron-rtd`` daemon:

* Before running applications that use ``neuron-rtd``, restart ``neuron-rtd`` by calling ``sudo systemctl restart neuron-rtd``.
* Before running applications linked with ``libnrt.so``, stop ``neuron-rtd`` by calling ``sudo systemctl stop neuron-rtd``.

Application unexpected behavior when downgrading to releases before *Neuron 1.16.0* (from *Neuron 1.16.0* or newer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Description
~~~~~~~~~~~

When upgrading to release *Neuron 1.16.0* or newer from previous releases, and then downgrading back to releases before *Neuron 1.16.0*, the OS may include two different versions of *Neuron Runtime*: the ``libnrt.so`` shared library and the ``neuron-rtd`` daemon. This can happen if the user didn't uninstall the existing Neuron version before the upgrade or downgrade. In this case the user application may behave unexpectedly.

Solution
~~~~~~~~

If the OS includes two different versions of *Neuron Runtime*, the ``libnrt.so`` shared library and the ``neuron-rtd`` daemon:

* Before running applications that use ``neuron-rtd``, restart ``neuron-rtd`` by calling ``sudo systemctl restart neuron-rtd``.
* Before running applications linked with ``libnrt.so``, stop ``neuron-rtd`` by calling ``sudo systemctl stop neuron-rtd``.

Neuron Core is in use
^^^^^^^^^^^^^^^^^^^^^

Description
~~~~~~~~~~~

A NeuronCore can't be shared between two applications. If an application has started using a NeuronCore, all other applications trying to use that NeuronCore will fail during runtime initialization with the following message in the console and in syslog:

.. code:: bash

   ERROR  NRT:nrt_allocate_neuron_cores  NeuronCore(s) not available - Requested:nc1-nc1 Available:0

Solution
~~~~~~~~

Terminate the process using the NeuronCore and then try launching the application again.

Frequently Asked Questions (FAQ)
--------------------------------

Do I need to recompile my model to run it with Neuron Runtime 2.x (``libnrt.so``)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

No.

Do I need to change my application launch command?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

No.

Can ``libnrt.so`` and ``neuron-rtd`` co-exist in the same environment?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Although we recommend upgrading to the latest Neuron release, we understand that for a transition period you may continue using ``neuron-rtd`` for old releases. If you are using a Neuron framework (PyTorch, TensorFlow, or MXNet) from releases before *Neuron 1.16.0*:

* Install the latest Neuron Driver (``aws-neuron-dkms``).

  .. include:: ./important-neuronx-dkms.txt

* For development, we recommend using separate environments for Neuron frameworks from releases before *Neuron 1.16.0* and for Neuron frameworks from *Neuron 1.16.0* and newer. If that is not possible, please make sure to stop ``neuron-rtd`` before executing models using a Neuron framework from *Neuron 1.16.0* and newer.
* For deployment, when you are ready to upgrade, please upgrade to a Neuron framework from *Neuron 1.16.0* and newer. See :ref:`neuron-migrating-apps-neuron-to-libnrt` for more information.

.. warning::

   Executing models using a Neuron framework (PyTorch, TensorFlow, or MXNet) from *Neuron 1.16.0* and newer in an environment where ``neuron-rtd`` is running may cause undefined behavior. Please make sure to stop ``neuron-rtd`` before executing such models.

Are there Neuron framework versions that will not support Neuron Runtime 2.x (``libnrt.so``)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

All supported PyTorch Neuron and TensorFlow Neuron framework extensions, as well as the MXNet Neuron 1.8.0 framework extension, support Neuron Runtime 2.x. MXNet Neuron 1.5.1 does not support Neuron Runtime 2.x (``libnrt.so``) and has entered maintenance mode. Please see :ref:`maintenance_mxnet_1_5` for details.
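The coexistence caveat in the FAQ above, never run a ``libnrt.so``-linked application while the legacy ``neuron-rtd`` daemon is still active, can be turned into a small launch guard. This is only a sketch, not an official tool: the daemon name ``neuron-rtd`` comes from this document, and the script assumes a Linux ``/proc`` filesystem:

```shell
# Launch-guard sketch: refuse to start a libnrt.so-linked application while
# the legacy neuron-rtd daemon is still running. It scans /proc command
# names directly, so it needs no extra tooling beyond a POSIX shell.
found=""
for comm in /proc/[0-9]*/comm; do
    if [ -r "$comm" ] && [ "$(cat "$comm" 2>/dev/null)" = "neuron-rtd" ]; then
        found="yes"
    fi
done
if [ -n "$found" ]; then
    echo "neuron-rtd is running; stop it first: sudo systemctl stop neuron-rtd"
else
    echo "no neuron-rtd daemon detected; safe to start the application"
fi
```

In practice you would replace the final ``echo`` with an ``exec`` of the real application; whether the guard fails hard or merely warns is a deployment choice.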
2023-09-29T20:55:17.451Z
Install TensorFlow Neuron — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow
# Install TensorFlow Neuron — AWS Neuron Documentation

_This document is relevant for_: `Inf1`

## Install TensorFlow Neuron

Note

- Instructions in this page only apply to setting up Neuron components on a Linux host running Ubuntu or Amazon Linux AMI.
- For an example of how to install Neuron components in a container, see [Tutorial Docker environment setup](../../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup) and our neuron-containers documentation for more details.

Table of contents

- [Develop on AWS ML accelerator instance](#develop-on-aws-ml-accelerator-instance)
- [Compile on compute instance](#compile-on-compute-instance)
- [Deploy on AWS ML accelerator instance](#deploy-on-aws-ml-accelerator-instance)

## [Develop on AWS ML accelerator instance](#id1)

The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance. This is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMIs.

Important

For successful installation or update to next releases (Neuron 1.20.0 and newer):

- Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.
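As a convenience, the apt-versus-yum choice in the Important note above can be scripted. This is a hedged sketch, not part of the official instructions; it only prints the matching command rather than running it:

```shell
# Sketch: pick the driver-removal command that matches this host's package
# manager, per the Important note above. It prints the command instead of
# running it, since removal requires root privileges and is destructive.
if command -v apt-get > /dev/null 2>&1; then
    remove_cmd="sudo apt remove aws-neuron-dkms"
elif command -v yum > /dev/null 2>&1; then
    remove_cmd="sudo yum remove aws-neuron-dkms"
else
    remove_cmd=""
fi
echo "driver removal step: ${remove_cmd:-no supported package manager found}"
```

After the removal step, install the new driver by following the "Setup Guide" instructions as described above.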
TensorFlow 2.10.1

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc] "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc] "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.10.1.2.10.1.0 -y
```

TensorFlow 2.9.3

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.9.3.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.9.3.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.9.3.2.10.1.0 -y
```

### TensorFlow 2.8.4

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.8.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.8.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.8.4.2.10.1.0 -y
```

### TensorFlow 2.7.4

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.7.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.7.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.7.4.2.10.1.0 -y
```

### TensorFlow 1.15.5

#### Ubuntu 20 DLAMI Base

**Note:** No installation commands are available for this configuration. The script that generates them (`n2-helper.py`) failed with an `IndexError`, and the original page shows the traceback in place of the instructions.

#### Amazon Linux 2 DLAMI Base

**Note:** No installation commands are available for this configuration either; the generator script failed in the same way.

## [Compile on compute instance](#id2)[#](#compile-on-compute-instance "Permalink to this headline")

If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance. This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.

### TensorFlow 2.10.1

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron
```

### TensorFlow 2.9.3

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.9.3.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.9.3.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron
```

### TensorFlow 2.8.4

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.8.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.8.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron
```

### TensorFlow 2.7.4

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.7.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.7.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron
```

### TensorFlow 1.15.5

#### Ubuntu 20 DLAMI Base

**Note:** No installation commands are available for this configuration. The script that generates them (`n2-helper.py`) failed with an `IndexError`, and the original page shows the traceback in place of the instructions.

#### Amazon Linux 2 DLAMI Base

**Note:** No installation commands are available for this configuration either; the generator script failed in the same way.

## [Deploy on AWS ML accelerator instance](#id3)[#](#deploy-on-aws-ml-accelerator-instance "Permalink to this headline")

During deployment it can be beneficial to reduce the number of components installed in the system. For use cases where only inference is necessary (compilation is already complete), only the framework and runtime need to be installed.

**Note:** If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMI.

**Important:** For a successful installation or update to next releases (Neuron 1.20.0 and newer):

- Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.

### TensorFlow 2.10.1

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.10.1.2.10.1.0 -y
```

### TensorFlow 2.9.3

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.9.3.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.9.3.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.9.3.2.10.1.0 -y
```

### TensorFlow 2.8.4

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.8.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.8.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.8.4.2.10.1.0 -y
```

### TensorFlow 2.7.4

#### Ubuntu 20 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.7.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y
```

#### Amazon Linux 2 DLAMI Base

**Note:** For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install "tensorflow-neuron[cc]==2.7.4.*" "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.7.4.2.10.1.0 -y
```

### TensorFlow 1.15.5

#### Ubuntu 20 DLAMI Base

**Note:** No installation commands are available for this configuration. The script that generates them (`n2-helper.py`) failed with an `IndexError`, and the original page shows the traceback in place of the instructions.

#### Amazon Linux 2 DLAMI Base

**Note:** No installation commands are available for this configuration either; the generator script failed in the same way.

_This document is relevant for_: `Inf1`
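As an aside on the pinned model-server packages used above (e.g. `tensorflow-model-server-neuronx=2.10.1.2.10.1.0`): the pin string combines two versions. The split shown below is an assumption inferred from the strings on this page (TensorFlow release first, Neuron package build second), not documented Neuron behavior:

```
# Split a model-server pin such as "2.10.1.2.10.1.0" into its two parts.
# Assumption: the first three dot-separated fields are the TensorFlow
# release and the remaining fields are the Neuron package build.
pin="2.10.1.2.10.1.0"
tf_version=$(echo "$pin" | cut -d. -f1-3)
neuron_build=$(echo "$pin" | cut -d. -f4-)
echo "TensorFlow ${tf_version}, Neuron build ${neuron_build}"
```

Under this reading, each pin on this page pairs a TensorFlow release (2.10.1, 2.9.3, 2.8.4, 2.7.4) with the same Neuron build, `2.10.1.0`.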
autocomplete="off"> </form><nav class="bd-links" id="bd-docs-nav" aria-label="Main"> <div class="bd-toc-item active"> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> Overview </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1"> <a class="reference internal" href="../../../../general/quick-start/docs-quicklinks.html"> Quick Links </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../general/quick-start/index.html"> Get Started with Neuron </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../general/quick-start/github-samples.html"> GitHub Samples </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../general/benchmarks/index.html"> Performance </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../release-notes/index.html"> What’s New </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../general/announcements/index.html"> Announcements </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Frameworks </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../torch/index.html"> PyTorch Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox"> <label for="toctree-checkbox-1"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../torch/torch-setup.html"> Pytorch Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../torch/inference-torch-neuronx.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox"> <label for="toctree-checkbox-2"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a 
class="reference internal" href="../../../torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox"> <label for="toctree-checkbox-3"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html"> Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html"> BERT TorchServe Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorial-libtorch.html"> LibTorch C++ Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html"> Compiling and Deploying ResNet50 on Trn1 or Inf2 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html"> T5 model inference on Trn1 or Inf2 </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/additional-examples-inference-torch-neuronx.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox"> <label for="toctree-checkbox-4"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/"> AWS Neuron Samples GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" 
href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx"> Transformers Neuron GitHub samples </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox"> <label for="toctree-checkbox-5"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Tracing API for Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) NeuronCore Placement APIs <strong> [Experimental] </strong> </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Analyze API for Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) DataParallel API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/index.html"> Developer Guide </a> <input class="toctree-checkbox" 
id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox"> <label for="toctree-checkbox-6"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/core-placement.html"> NeuronCore Allocation and Model Placement for Inference ( <span class="xref std std-ref"> torch-neuronx </span> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html"> Comparison of Traced Inference versus XLA <span class="xref std std-ref"> Lazy Tensor </span> Inference ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html"> Data Parallel Inference on torch_neuronx </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/misc-inference-torch-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox"> <label for="toctree-checkbox-7"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../torch/inference-torch-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox"> <label for="toctree-checkbox-8"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" 
href="../../../torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox"> <label for="toctree-checkbox-9"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuron/additional-examples-inference-torch-neuron.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox"> <label for="toctree-checkbox-10"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuron/api-reference-guide-torch-neuron.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox"> <label for="toctree-checkbox-11"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/api-compilation-python-api.html"> PyTorch Neuron trace Python API </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../../torch/torch-neuron/api-torch-neuron-dataparallel-api.html"> torch.neuron.DataParallel API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/api-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Core Placement API [Experimental] </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuron/developer-guide-torch-neuron.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox"> <label for="toctree-checkbox-12"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html"> Running Inference on Variable Input Shapes with Bucketing </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html"> Data Parallel Inference on PyTorch Neuron </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/guides/torch-lstm-support.html"> Developer Guide - PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) <code class="xref py py-class docutils literal notranslate"> <span class="pre"> LSTM </span> </code> Support </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/guides/core-placement/torch-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Core Placement </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuron/misc-inference-torch-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-13" 
name="toctree-checkbox-13" type="checkbox"> <label for="toctree-checkbox-13"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Supported operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuron/troubleshooting-guide.html"> Troubleshooting Guide for PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../torch/training-torch-neuronx.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox"> <label for="toctree-checkbox-14"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox"> <label for="toctree-checkbox-15"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/bert.html"> Hugging Face BERT Pretraining Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/mlp.html"> Multi-Layer Perceptron Training Tutorial </a> </li> <li 
class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/finetune_hftrainer.html"> PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/finetune_t5.html"> Fine-tune T5 model on Trn1 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/zero1_gpt2.html"> ZeRO-1 Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/tutorials/training/analyze_for_training.html"> Analyze for Training Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/additional-examples-training.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox"> <label for="toctree-checkbox-16"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron"> AWS Neuron Reference for Nemo Megatron GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples"> AWS Neuron Samples for EKS </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples"> AWS Neuron Samples for AWS ParallelCluster </a> </li> <li 
class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/index.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox"> <label for="toctree-checkbox-17"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html"> PyTorch Neuron neuron_parallel_compile CLI ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html"> PyTorch Neuron Environment Variables ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Profiling API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference 
internal" href="../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../torch/torch-neuronx/misc-training.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../torch/torch-neuronx/training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" 
type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" 
href="../tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span 
Neuron Container on Elastic Kubernetes Service (EKS) </a> </li> </ul> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../general/devflows/training/eks-flows.html"> Training </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../general/devflows/ecs-flows.html"> AWS ECS </a> <input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox"> <label for="toctree-checkbox-87"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox"> <label for="toctree-checkbox-88"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html"> Deploy Neuron Container on Elastic Container Service (ECS) </a> </li> </ul> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html"> Training </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html"> Sagemaker </a> <input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox"> <label for="toctree-checkbox-89"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox"> <label for="toctree-checkbox-90"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html"> Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html"> Bring Your Own Neuron Container to Sagemaker Hosting (inf1) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html"> Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox"> <label for="toctree-checkbox-91"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html"> Train your model on SageMaker </a> </li> </ul> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples"> AWS Neuron Sagemaker Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html"> Parallel Cluster </a> <input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox"> <label for="toctree-checkbox-92"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html"> Inference </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-93" 
name="toctree-checkbox-93" type="checkbox"> <label for="toctree-checkbox-93"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html"> Train your model on ParallelCluster </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html"> AWS Batch Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox"> <label for="toctree-checkbox-94"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html"> Inference </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html"> Training </a> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> Learning Neuron </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../general/arch/index.html"> Architecture </a> <input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox"> <label for="toctree-checkbox-95"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html"> AWS Inf1 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html"> AWS Trn1/Trn1n Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html"> AWS Inf2 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" 
href="../../../../general/arch/neuron-hardware/inferentia.html"> Inferentia Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html"> Inferentia2 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html"> Trainium Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html"> AWS NeuronCore Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/model-architecture-fit.html"> Neuron Model Architecture Fit Guidelines </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/glossary.html"> Neuron Glossary </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../general/arch/neuron-features/index.html"> Features </a> <input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox"> <label for="toctree-checkbox-96"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html"> Data Types </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html"> Rounding Modes </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html"> Neuron Batching </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html"> NeuronCore Pipeline </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l2"> <a 
class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html"> Collective Communication </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html"> Neuron Control Flow </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html"> Neuron Custom C++ Operators </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html"> Neuron Dynamic Shapes </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../general/appnotes/index.html"> Application Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox"> <label for="toctree-checkbox-97"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html"> Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html"> Introducing Neuron Runtime 2.x (libnrt.so) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html"> Performance Tuning </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html"> Parallel Execution using NEURON_RT_NUM_CORES </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html"> Running R-CNNs on Inf1 </a> </li> <li class="toctree-l2"> <a class="reference internal" 
href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html"> Generative LLM inference with Neuron </a> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../general/faq.html"> FAQ </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../general/troubleshooting.html"> Troubleshooting </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> About Neuron </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1"> <a class="reference internal" href="../../../../release-notes/release.html"> Release Details </a> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../general/roadmap-readme.html"> Roadmap </a> <input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox"> <label for="toctree-checkbox-98"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1"> Neuron Public Roadmap </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../general/support.html"> Support </a> <input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox"> <label for="toctree-checkbox-99"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/sdk-policy.html"> SDK Maintenance Policy </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/security.html"> Security Disclosures </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/contact.html"> Contact Us </a> </li> </ul> </li> </ul> </div> </nav></div> <div class="bd-sidebar__bottom"> <!-- To handle the deprecated key --> <div class="navbar_extra_footer"> Theme by the <a 
href="https://ebp.jupyterbook.org">Executable Book Project</a> </div> </div> </div> <div id="rtd-footer-container"></div> </div> <!-- A tiny helper pixel to detect if we've scrolled --> <div class="sbt-scroll-pixel-helper"></div> <!-- Main content --> <div class="col py-0 content-container"> <div class="header-article row sticky-top noprint"> <div class="col py-1 d-flex header-article-main"> <div class="header-article__left"> <label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation"> <span class="headerbtn__icon-container"> <i class="fas fa-bars"></i> </span> </label> </div> <div class="header-article__right"> <button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode"> <span class="headerbtn__icon-container"> <i class="fas fa-expand"></i> </span> </button> <div class="menu-dropdown menu-dropdown-repository-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories"> <i class="fab fa-github"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository"> <span class="headerbtn__icon-container"> <i class="fab fa-github"></i> </span> <span class="headerbtn__text-container">repository</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html&amp;body=Your%20issue%20content%20here." 
class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue"> <span class="headerbtn__icon-container"> <i class="fas fa-lightbulb"></i> </span> <span class="headerbtn__text-container">open issue</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page"> <span class="headerbtn__icon-container"> <i class="fas fa-pencil-alt"></i> </span> <span class="headerbtn__text-container">suggest edit</span> </a> </li> </ul> </div> </div> <div class="menu-dropdown menu-dropdown-download-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Download this page"> <i class="fas fa-download"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="../../../../_sources/frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file"> <span class="headerbtn__icon-container"> <i class="fas fa-file"></i> </span> <span class="headerbtn__text-container">.rst</span> </a> </li> <li> <button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF"> <span class="headerbtn__icon-container"> <i class="fas fa-file-pdf"></i> </span> <span class="headerbtn__text-container">.pdf</span> </button> </li> </ul> </div> </div> <label for="__page-toc" class="headerbtn headerbtn-page-toc"> <span class="headerbtn__icon-container"> <i class="fas fa-list"></i> </span> </label> </div> </div> <!-- Table of contents --> <div class="col-md-3 bd-toc show noprint"> <div class="tocsection onthispage pt-5 pb-3"> <i class="fas fa-list"></i> Contents </div> <nav id="bd-toc-nav" aria-label="Page"> <ul class="visible nav section-nav flex-column"> 
<li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#develop-on-aws-ml-accelerator-instance"> Develop on AWS ML accelerator instance </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#compile-on-compute-instance"> Compile on compute instance </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#deploy-on-aws-ml-accelerator-instance"> Deploy on AWS ML accelerator instance </a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>Install TensorFlow Neuron</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#develop-on-aws-ml-accelerator-instance"> Develop on AWS ML accelerator instance </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#compile-on-compute-instance"> Compile on compute instance </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#deploy-on-aws-ml-accelerator-instance"> Deploy on AWS ML accelerator instance </a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="install-tensorflow-neuron"> <span id="install-neuron-tensorflow"></span><h1>Install TensorFlow Neuron<a class="headerlink" href="#install-tensorflow-neuron" title="Permalink to this headline">#</a></h1> <div class="admonition note"> <p class="admonition-title">Note</p> <ul class="simple"> <li><p>Instructions in 
this page apply only to setting up Neuron components on a Linux host running Ubuntu or an Amazon Linux AMI.</p></li> <li><p>For an example of how to install Neuron components in a container, see <a class="reference internal" href="../../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup"><span class="std std-ref">Tutorial Docker environment setup</span></a> and our Neuron containers documentation for more details.</p></li> </ul> </div> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#develop-on-aws-ml-accelerator-instance" id="id1">Develop on AWS ML accelerator instance</a></p></li> <li><p><a class="reference internal" href="#compile-on-compute-instance" id="id2">Compile on compute instance</a></p></li> <li><p><a class="reference internal" href="#deploy-on-aws-ml-accelerator-instance" id="id3">Deploy on AWS ML accelerator instance</a></p></li> </ul> </div> <div class="section" id="develop-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id1">Develop on AWS ML accelerator instance</a><a class="headerlink" href="#develop-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This lets you compile, execute, and performance-tune your model, all on the same instance.
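Every install block in this guide begins the same way: create an isolated Python virtual environment, then point pip at the Neuron package repository in addition to PyPI. A distribution-neutral sketch of just that environment step (generic `python3` here, standing in for the `python3.8`/`python3.7` interpreters the blocks below use, and a hypothetical venv name):

```shell
# Create and activate an isolated virtual environment
python3 -m venv aws_neuron_venv_demo
. aws_neuron_venv_demo/bin/activate
python -m pip install -U pip

# Register the Neuron package repository as an extra index, alongside PyPI
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Confirm the setting took effect
python -m pip config get global.extra-index-url
```

Because `extra-index-url` adds an index rather than replacing PyPI, ordinary packages keep resolving as usual while Neuron-specific wheels such as `tensorflow-neuron` come from the Neuron repository.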
This is the recommended workflow when first starting to work with Neuron devices or when optimizing a model.</p> <p>Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMIs.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For a successful installation of, or update to, Neuron 1.20.0 and newer:</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code>.</p></li> <li><p>Install or upgrade to the latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-0" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-0"> TensorFlow 2.10.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-5" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-5"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc] "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y
</pre></div> </div> </div> <input id="sd-tab-item-6" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-6"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc] "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-1" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-1"> TensorFlow 2.9.3</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-7" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-7"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc]==2.9.3.* "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y
</pre></div> </div> </div> <input id="sd-tab-item-8" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-8"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc]==2.9.3.* "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-2" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-2"> TensorFlow 2.8.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-9" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-9"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc]==2.8.4.* "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y
</pre></div> </div> </div> <input id="sd-tab-item-10" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-10"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc]==2.8.4.* "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-3" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-3"> TensorFlow 2.7.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-11" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-11"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc]==2.7.4.* "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y
</pre></div> </div> </div> <input id="sd-tab-item-12" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-12"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_tensorflow_inf1

# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install TensorFlow Neuron
python -m pip install tensorflow-neuron[cc]==2.7.4.* "protobuf"

# Install Neuron TensorBoard
python -m pip install tensorboard-plugin-neuron

# Optional: Install Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-4" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-4"> TensorFlow
1.15.5</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-13" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-13"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Installation instructions for TensorFlow Neuron 1.15.5 are not currently available for this configuration. </pre></div> </div> </div> <input id="sd-tab-item-14" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-14"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Installation instructions for TensorFlow Neuron 1.15.5 are not currently available for this configuration. </pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="compile-on-compute-instance"> <h2><a class="toc-backref" href="#id2">Compile on compute instance</a><a class="headerlink" href="#compile-on-compute-instance" title="Permalink to this headline">#</a></h2> <p>If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance. This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.</p> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-15" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-15"> TensorFlow 2.10.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-20" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-20"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source
aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc] "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-21" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-21"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc] "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> <input 
id="sd-tab-item-16" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-16"> TensorFlow 2.9.3</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-22" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-22"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc]==2.9.3.* "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-23" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-23"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div 
class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc]==2.9.3.* "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-17" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-17"> TensorFlow 2.8.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-24" name="sd-tab-set-9" type="radio"> <label class="sd-tab-label" for="sd-tab-item-24"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name 
aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc]==2.8.4.* "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-25" name="sd-tab-set-9" type="radio"> <label class="sd-tab-label" for="sd-tab-item-25"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc]==2.8.4.* "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-18" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-18"> TensorFlow 2.7.4</label><div class="sd-tab-content 
docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-26" name="sd-tab-set-10" type="radio"> <label class="sd-tab-label" for="sd-tab-item-26"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc]==2.7.4.* "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-27" name="sd-tab-set-10" type="radio"> <label class="sd-tab-label" for="sd-tab-item-27"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create 
Python venv python3.7 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc]==2.7.4.* "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-19" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-19"> TensorFlow 1.15.5</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-28" name="sd-tab-set-11" type="radio"> <label class="sd-tab-label" for="sd-tab-item-28"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Installation instructions for TensorFlow Neuron 1.15.5 are not currently available for this configuration. </pre></div> </div> </div> <input id="sd-tab-item-29" name="sd-tab-set-11" type="radio"> <label class="sd-tab-label" for="sd-tab-item-29"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Installation instructions for TensorFlow Neuron 1.15.5 are not currently available for this configuration. </pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="deploy-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id3">Deploy on AWS ML accelerator instance</a><a class="headerlink" href="#deploy-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>During deployment it can be beneficial to reduce the number of components installed in the system.
For use cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.</p> <p>Note: If you are using a regular U18, U20, or AL2 AMI, follow the setup instructions for the corresponding Base DLAMI.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For a successful installation or update to later releases (Neuron 1.20.0 and newer):</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running: <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code></p></li> <li><p>Install or upgrade to the latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-30" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-30"> TensorFlow 2.10.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-35" name="sd-tab-set-13" type="radio"> <label class="sd-tab-label" for="sd-tab-item-35"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span>#
Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc] "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron # Optional: Install Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-36" name="sd-tab-set-13" type="radio"> <label class="sd-tab-label" for="sd-tab-item-36"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip 
config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc] "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron # Optional: Install Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-31" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-31"> TensorFlow 2.9.3</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-37" name="sd-tab-set-14" type="radio"> <label class="sd-tab-label" for="sd-tab-item-37"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc]==2.9.3.* "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron # Optional: Install Tensorflow Neuron model server sudo apt-get install 
tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-38" name="sd-tab-set-14" type="radio"> <label class="sd-tab-label" for="sd-tab-item-38"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_tensorflow_inf1 # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install TensorFlow Neuron python -m pip install tensorflow-neuron[cc]==2.9.3.* "protobuf" # Install Neuron TensorBoard python -m pip install tensorboard-plugin-neuron # Optional: Install Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-32" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-32"> TensorFlow 2.8.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-39" name="sd-tab-set-15" type="radio"> <label class="sd-tab-label" for="sd-tab-item-39"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p 
Update to latest TensorFlow Neuron — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-update.html#update-neuron-tensorflow
# Update to latest TensorFlow Neuron — AWS Neuron Documentation

_This document is relevant for_: `Inf1`

## Update to latest TensorFlow Neuron[#](#update-to-latest-tensorflow-neuron "Permalink to this headline")

Note

- The instructions on this page apply only to setting up Neuron components on a Linux host running Ubuntu or Amazon Linux AMI.
- For an example of how to install Neuron components in a container, see [Tutorial Docker environment setup](../../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup) and our neuron-containers documentation for more details.

Table of contents

- [Develop on AWS ML accelerator instance](#develop-on-aws-ml-accelerator-instance)
- [Compile on compute instance](#compile-on-compute-instance)
- [Deploy on AWS ML accelerator instance](#deploy-on-aws-ml-accelerator-instance)

## [Develop on AWS ML accelerator instance](#id1)[#](#develop-on-aws-ml-accelerator-instance "Permalink to this headline")

The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This lets you compile, execute, and performance-tune your model all on the same instance. It is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMIs.

Important

For a successful installation of, or update to, Neuron 1.20.0 and newer:

- Uninstall `aws-neuron-dkms` by running `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`.
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the “Setup Guide” instructions.
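The update commands below all assume the Python virtual environment created during installation is already active. As a minimal pre-flight sketch — the venv name `aws_neuron_venv_tensorflow_inf1` is taken from the commands below, but the helper itself is hypothetical and not part of the Neuron SDK:

```shell
# Hypothetical pre-flight helper: succeed only when the expected venv is active.
check_neuron_venv() {
  expected="aws_neuron_venv_tensorflow_inf1"
  if [ "$(basename "${VIRTUAL_ENV:-}")" = "$expected" ]; then
    echo "OK: $VIRTUAL_ENV"
  else
    echo "activate the venv first: source ${expected}/bin/activate"
    return 1
  fi
}
```

Calling `check_neuron_venv` before the `pip install --upgrade` steps guards against accidentally updating packages in the system Python instead of the venv.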
TensorFlow 2.10.1

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc] "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc] "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.10.1.2.10.1.0 -y
```

TensorFlow 2.9.3

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.9.3.2.10.1.0 -y
```

TensorFlow 2.8.4

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.8.4.2.10.1.0 -y
```

TensorFlow 2.7.4

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update Tensorflow Neuron model server
sudo apt-get install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update Tensorflow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.7.4.2.10.1.0 -y
```

TensorFlow 1.15.5

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
Traceback (most recent call last):
  File "src/helperscripts/n2-helper.py", line 1015, in <module>
    print(n2_manifest.generate_script(args))
  File "src/helperscripts/n2-helper.py", line 136, in generate_script
    str_python = self.set_python_venv(args)
  File "src/helperscripts/n2-helper.py", line 506, in set_python_venv
    packages_supporting_python_versions = self.get_pip_packages_supporting_python_versions(args)
  File "src/helperscripts/n2-helper.py", line 84, in get_pip_packages_supporting_python_versions
    'supported_python_versions'].values[0]
IndexError: index 0 is out of bounds for axis 0 with size 0
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
``` Traceback (most recent call last): File "src/helperscripts/n2-helper.py", line 1015, in <module> print(n2_manifest.generate_script(args)) File "src/helperscripts/n2-helper.py", line 136, in generate_script str_python = self.set_python_venv(args) File "src/helperscripts/n2-helper.py", line 506, in set_python_venv packages_supporting_python_versions = self.get_pip_packages_supporting_python_versions(args) File "src/helperscripts/n2-helper.py", line 84, in get_pip_packages_supporting_python_versions 'supported_python_versions'].values[0] IndexError: index 0 is out of bounds for axis 0 with size 0 ``` ## [Compile on compute instance](#id2)[#](#compile-on-compute-instance "Permalink to this headline") If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance. This setup is helpful when compiling large complex models that require large amount of memory or during a CICD process where models are compiled in a separate step, prior to deployment. TensorFlow 2.10.1 Ubuntu 20 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. 
```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc] "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc] "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron
```

TensorFlow 2.9.3

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron
```

TensorFlow 2.8.4

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron
```

TensorFlow 2.7.4

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron
```

TensorFlow 1.15.5

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
Traceback (most recent call last):
  File "src/helperscripts/n2-helper.py", line 1015, in <module>
    print(n2_manifest.generate_script(args))
  File "src/helperscripts/n2-helper.py", line 136, in generate_script
    str_python = self.set_python_venv(args)
  File "src/helperscripts/n2-helper.py", line 506, in set_python_venv
    packages_supporting_python_versions = self.get_pip_packages_supporting_python_versions(args)
  File "src/helperscripts/n2-helper.py", line 84, in get_pip_packages_supporting_python_versions
    'supported_python_versions'].values[0]
IndexError: index 0 is out of bounds for axis 0 with size 0
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
Traceback (most recent call last):
  File "src/helperscripts/n2-helper.py", line 1015, in <module>
    print(n2_manifest.generate_script(args))
  File "src/helperscripts/n2-helper.py", line 136, in generate_script
    str_python = self.set_python_venv(args)
  File "src/helperscripts/n2-helper.py", line 506, in set_python_venv
    packages_supporting_python_versions = self.get_pip_packages_supporting_python_versions(args)
  File "src/helperscripts/n2-helper.py", line 84, in get_pip_packages_supporting_python_versions
    'supported_python_versions'].values[0]
IndexError: index 0 is out of bounds for axis 0 with size 0
```

## [Deploy on AWS ML accelerator instance](#id3)[#](#deploy-on-aws-ml-accelerator-instance "Permalink to this headline")

During deployment, it can be beneficial to reduce the number of components installed in the system. For use cases where only inference is necessary (compilation is already complete), only the framework and runtime need to be installed.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMIs.
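The workflows on this page differ mainly in which optional packages they pull in. As a rough summary sketch — the workflow names and the helper function are hypothetical, while the package names are the ones used in the commands on this page:

```shell
# Hypothetical summary of which pip-installable Neuron packages each workflow uses.
# develop/deploy may also add the optional model server; compile omits it.
neuron_packages_for() {
  case "$1" in
    develop|deploy) echo "tensorflow-neuron[cc] tensorboard-plugin-neuron tensorflow-model-server-neuronx" ;;
    compile)        echo "tensorflow-neuron[cc] tensorboard-plugin-neuron" ;;
    *)              echo "unknown workflow: $1" >&2; return 1 ;;
  esac
}
```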
Important For successful installation or update to next releases (Neuron 1.20.0 and newer): - Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms` - Install or upgrade to latest Neuron driver (`aws-neuron-dkms`) by following the “Setup Guide” instructions. TensorFlow 2.10.1 Ubuntu 20 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. ``` # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc] "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y ``` Amazon Linux 2 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. 
``` # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc] "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y ``` TensorFlow 2.9.3 Ubuntu 20 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. ``` # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y ``` Amazon Linux 2 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. 
``` # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y ``` TensorFlow 2.8.4 Ubuntu 20 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. ``` # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y ``` Amazon Linux 2 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. 
``` # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y ``` TensorFlow 2.7.4 Ubuntu 20 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. ``` # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y ``` Amazon Linux 2 DLAMI Base Note For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents. 
```
# Activate Python venv
source aws_neuron_venv_tensorflow_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update TensorFlow Neuron
python -m pip install --upgrade "tensorflow-neuron[cc]==2.7.4.*" "protobuf"

# Update Neuron TensorBoard
python -m pip install --upgrade tensorboard-plugin-neuron

# Optional: Update TensorFlow Neuron model server
sudo yum install tensorflow-model-server-neuronx-2.7.4.2.10.1.0 -y
```

_This document is relevant for_: `Inf1`
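The pip pins above follow two conventions. `tensorflow-neuron[cc]==2.8.4.*` fixes the TensorFlow release while floating the Neuron patch build, and the model-server pins (e.g. `2.8.4.2.10.1.0`) concatenate the TensorFlow version with the Neuron build. A minimal sketch of both conventions, where the three-component split point and the prefix-matching logic are illustrative assumptions (real pip applies full PEP 440 specifier matching):

```python
# Illustrative helpers for reading the version pins used above.
# Assumption: the first three components of a model-server pin track the
# TensorFlow release and the remainder is the Neuron build.
def split_model_server_pin(pin: str) -> tuple[str, str]:
    parts = pin.split(".")
    return ".".join(parts[:3]), ".".join(parts[3:])

# Assumption: a "==X.Y.Z.*" requirement accepts any version whose leading
# components equal X.Y.Z.
def satisfies_wildcard_pin(installed: str, pin: str) -> bool:
    prefix = pin.removeprefix("==").removesuffix("*")  # e.g. "2.8.4."
    return installed.startswith(prefix)

tf_ver, neuron_build = split_model_server_pin("2.8.4.2.10.1.0")
print(tf_ver, neuron_build)                                   # 2.8.4 2.10.1.0
print(satisfies_wildcard_pin("2.8.4.2.10.1.0", "==2.8.4.*"))  # True
print(satisfies_wildcard_pin("2.9.3.2.10.1.0", "==2.8.4.*"))  # False
```

This is only a reading aid for the pins; the install commands themselves do the real resolution.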
href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache 
MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> 
</li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li 
class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" 
name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul 
class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" 
href="../../../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox"> <label for="toctree-checkbox-58"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuron-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html"> Neuron Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../neuron-customops/index.html"> Neuron C++ Custom Operators </a> <input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox"> <label for="toctree-checkbox-59"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox"> <label for="toctree-checkbox-60"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html"> Custom Operators API Reference Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox"> <label for="toctree-checkbox-61"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html"> Neuron Custom C++ Operators Developer Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox"> <label for="toctree-checkbox-62"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-customops/misc-customops.html"> Misc (Neuron Custom C++ Operators) </a> <input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox"> <label for="toctree-checkbox-63"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" 
href="../../../../release-notes/customcxxps/gpsimd-tools.html"> Neuron Custom C++ Tools Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html"> Neuron Custom C++ Library Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../tools/index.html"> Neuron Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox"> <label for="toctree-checkbox-64"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html"> System Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox"> <label for="toctree-checkbox-65"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html"> Neuron-Monitor User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html"> Neuron-Top User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html"> Neuron-LS User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html"> Neuron Profile User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html"> Neuron-Sysfs User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html"> NCCOM-TEST User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../../release-notes/tools/aws-neuronx-tools.html"> What's New </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tools/tensorboard/index.html"> TensorBoard </a> <input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox"> <label for="toctree-checkbox-66"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html"> Track Training Progress in TensorBoard using PyTorch Neuron </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html"> TensorBoard Plugin for Neuron (Trn1) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html"> What's New </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html"> TensorBoard Plugin for Neuron (Inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tools/helper-tools/index.html"> Helper Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox"> <label for="toctree-checkbox-67"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html"> Check Model </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html"> GatherInfo </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tools/neuronperf/index.html"> NeuronPerf (Beta) </a> <input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" 
type="checkbox"> </li> </ul> </li> </ul> </div> </nav></div> </div> </div> <!-- Main content -->
<div class="col py-0 content-container"> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="update-to-latest-tensorflow-neuron"> <span id="update-neuron-tensorflow"></span><h1>Update to latest TensorFlow Neuron<a class="headerlink" href="#update-to-latest-tensorflow-neuron" title="Permalink to this headline">#</a></h1> <div class="admonition note"> <p class="admonition-title">Note</p> <ul 
class="simple"> <li><p>Instructions on this page apply only to setting up Neuron components on a Linux host running Ubuntu or Amazon Linux AMI.</p></li> <li><p>For an example of how to install Neuron components in a container, see <a class="reference internal" href="../../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup"><span class="std std-ref">Tutorial Docker environment setup</span></a> and our <span class="xref std std-ref">neuron-containers</span> documentation for more details.</p></li> </ul> </div> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#develop-on-aws-ml-accelerator-instance" id="id1">Develop on AWS ML accelerator instance</a></p></li> <li><p><a class="reference internal" href="#compile-on-compute-instance" id="id2">Compile on compute instance</a></p></li> <li><p><a class="reference internal" href="#deploy-on-aws-ml-accelerator-instance" id="id3">Deploy on AWS ML accelerator instance</a></p></li> </ul> </div> <div class="section" id="develop-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id1">Develop on AWS ML accelerator instance</a><a class="headerlink" href="#develop-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance.
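<p>The optional model-server package in the tabs below is pinned with a compound version string that concatenates the TensorFlow version with a Neuron build suffix (for example <code>2.10.1.2.10.1.0</code>). A minimal sketch of how that pin is composed, assuming the suffix shown in this release's tabs:</p>

```shell
# Compose the model-server package pin from the TensorFlow version and the
# Neuron build suffix (suffix taken from the 2.10.1 tab below; treat it as
# an assumption for other releases).
TF_VERSION="2.10.1"
NEURON_SUFFIX="2.10.1.0"
MODEL_SERVER_PIN="tensorflow-model-server-neuronx=${TF_VERSION}.${NEURON_SUFFIX}"
echo "${MODEL_SERVER_PIN}"   # -> tensorflow-model-server-neuronx=2.10.1.2.10.1.0
```

The same composition explains the pins in the other tabs, e.g. <code>2.9.3.2.10.1.0</code> for TensorFlow 2.9.3.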
This is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.</p> <p>Note: If you are using a regular U18, U20, or AL2 AMI, follow the setup instructions for the corresponding Base DLAMI.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For a successful installation or update to Neuron 1.20.0 and newer:</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running: <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code></p></li> <li><p>Install or upgrade to the latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-0" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-0"> TensorFlow 2.10.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-5" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-5"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source
aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc] "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-6" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-6"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc] "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install 
tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-1" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-1"> TensorFlow 2.9.3</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-7" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-7"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-8" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-8"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code 
block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-2" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-2"> TensorFlow 2.8.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-9" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-9"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set 
pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-10" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-10"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-3" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-3"> TensorFlow 2.7.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> 
<input checked="checked" id="sd-tab-item-11" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-11"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-12" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-12"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m 
ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-4" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-4"> TensorFlow 1.15.5</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-13" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-13"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># NOTE: the generated instructions for TensorFlow 1.15.5 are missing from this release; the commands below follow the pattern of the other TensorFlow versions -- verify exact version strings against the Setup Guide # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==1.15.5.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-14" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-14"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># NOTE: the generated instructions for TensorFlow 1.15.5 are missing from this release; the commands below follow the pattern of the other TensorFlow versions -- verify exact version strings against the Setup Guide # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==1.15.5.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="compile-on-compute-instance"> <h2><a class="toc-backref" href="#id2">Compile on compute instance</a><a class="headerlink" href="#compile-on-compute-instance" title="Permalink to this headline">#</a></h2> <p>If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance.
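<p>Because the compiled model, not the source model, is what gets deployed, a common pattern is to archive the compiler's SavedModel output on the compute instance and copy it to the Inf1 host. A minimal sketch; the directory name <code>model_neuron</code> and the destination host are illustrative, not from the Neuron documentation:</p>

```shell
# Archive a compiled SavedModel directory for transfer to the deployment host.
# "model_neuron" stands in for your actual compiler output directory.
COMPILED_DIR="model_neuron"
mkdir -p "${COMPILED_DIR}/variables"
touch "${COMPILED_DIR}/saved_model.pb"
tar czf model_neuron.tgz "${COMPILED_DIR}"
# Ship it to the inference host, e.g.: scp model_neuron.tgz ubuntu@<inf1-host>:
tar tzf model_neuron.tgz
```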
This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step, prior to deployment.</p> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-15" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-15"> TensorFlow 2.10.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-20" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-20"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc] "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-21" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-21"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or
copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc] "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-16" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-16"> TensorFlow 2.9.3</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-22" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-22"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set 
global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-23" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-23"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-17" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-17"> TensorFlow 2.8.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-24" name="sd-tab-set-9" type="radio"> <label class="sd-tab-label" for="sd-tab-item-24"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, 
execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-25" name="sd-tab-set-9" type="radio"> <label class="sd-tab-label" for="sd-tab-item-25"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf" # Update Neuron 
TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-18" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-18"> TensorFlow 2.7.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-26" name="sd-tab-set-10" type="radio"> <label class="sd-tab-label" for="sd-tab-item-26"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-27" name="sd-tab-set-10" type="radio"> <label class="sd-tab-label" for="sd-tab-item-27"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text 
notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-19" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-19"> TensorFlow 1.15.5</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-28" name="sd-tab-set-11" type="radio"> <label class="sd-tab-label" for="sd-tab-item-28"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># NOTE: the generated instructions for TensorFlow 1.15.5 are missing from this release; the commands below follow the pattern of the other TensorFlow versions -- verify exact version strings against the Setup Guide # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==1.15.5.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> <input id="sd-tab-item-29" name="sd-tab-set-11" type="radio"> <label class="sd-tab-label" for="sd-tab-item-29"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># NOTE: the generated instructions for TensorFlow 1.15.5 are missing from this release; the commands below follow the pattern of the other TensorFlow versions -- verify exact version strings against the Setup Guide # Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==1.15.5.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron </pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="deploy-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id3">Deploy on AWS ML accelerator instance</a><a class="headerlink" href="#deploy-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>During deployment, it can be beneficial to reduce the number of components installed in the system.
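<p>The driver cleanup described in the Important note below differs between the two supported distro families only in the package manager used. A sketch of selecting the right removal command; the helper function name is mine, not part of the Neuron tooling:</p>

```shell
# Print the aws-neuron-dkms removal command for a distro family, mirroring
# the apt/yum alternatives given in the Important note.
neuron_dkms_remove_cmd() {
  case "$1" in
    ubuntu)       echo "sudo apt remove aws-neuron-dkms" ;;
    amazon-linux) echo "sudo yum remove aws-neuron-dkms" ;;
    *)            echo "unsupported distro family: $1" >&2; return 1 ;;
  esac
}
neuron_dkms_remove_cmd ubuntu   # -> sudo apt remove aws-neuron-dkms
```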
For use cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.</p> <p>Note: If you are using a regular U18, U20, or AL2 AMI, follow the setup instructions for the corresponding Base DLAMI.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For a successful installation or update to Neuron 1.20.0 and newer:</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running: <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code></p></li> <li><p>Install or upgrade to the latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-30" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-30"> TensorFlow 2.10.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-35" name="sd-tab-set-13" type="radio"> <label class="sd-tab-label" for="sd-tab-item-35"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span>#
Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc] "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-36" name="sd-tab-set-13" type="radio"> <label class="sd-tab-label" for="sd-tab-item-36"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc] "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum 
install tensorflow-model-server-neuronx=2.10.1.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-31" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-31"> TensorFlow 2.9.3</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-37" name="sd-tab-set-14" type="radio"> <label class="sd-tab-label" for="sd-tab-item-37"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-38" name="sd-tab-set-14" type="radio"> <label class="sd-tab-label" for="sd-tab-item-38"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the 
contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.9.3.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.9.3.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-32" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-32"> TensorFlow 2.8.4</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-39" name="sd-tab-set-15" type="radio"> <label class="sd-tab-label" for="sd-tab-item-39"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip 
install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-40" name="sd-tab-set-15" type="radio"> <label class="sd-tab-label" for="sd-tab-item-40"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.8.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.8.4.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-33" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-33"> TensorFlow 2.7.4</label><div class="sd-tab-content 
docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-41" name="sd-tab-set-16" type="radio"> <label class="sd-tab-label" for="sd-tab-item-41"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo apt-get install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y </pre></div> </div> </div> <input id="sd-tab-item-42" name="sd-tab-set-16" type="radio"> <label class="sd-tab-label" for="sd-tab-item-42"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_tensorflow_inf1/bin/activate # Install Jupyter notebook 
kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_tensorflow_inf1 --display-name "Python (tensorflow-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update TensorFlow Neuron python -m pip install --upgrade tensorflow-neuron[cc]==2.7.4.* "protobuf" # Update Neuron TensorBoard python -m pip install --upgrade tensorboard-plugin-neuron # Optional: Update Tensorflow Neuron model server sudo yum install tensorflow-model-server-neuronx=2.7.4.2.10.1.0 -y </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-34" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-34"> TensorFlow 1.15.5</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-43" name="sd-tab-set-17" type="radio"> <label class="sd-tab-label" for="sd-tab-item-43"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span>Traceback (most recent call last): File "src/helperscripts/n2-helper.py", line 1015, in &lt;module&gt; print(n2_manifest.generate_script(args)) File "src/helperscripts/n2-helper.py", line 136, in generate_script str_python = self.set_python_venv(args) File "src/helperscripts/n2-helper.py", line 506, in set_python_venv packages_supporting_python_versions = self.get_pip_packages_supporting_python_versions(args) File "src/helperscripts/n2-helper.py", line 84, in get_pip_packages_supporting_python_versions 'supported_python_versions'].values[0] 
IndexError: index 0 is out of bounds for axis 0 with size 0 </pre></div> </div> </div> <input id="sd-tab-item-44" name="sd-tab-set-17" type="radio"> <label class="sd-tab-label" for="sd-tab-item-44"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span>Traceback (most recent call last): File "src/helperscripts/n2-helper.py", line 1015, in &lt;module&gt; print(n2_manifest.generate_script(args)) File "src/helperscripts/n2-helper.py", line 136, in generate_script str_python = self.set_python_venv(args) File "src/helperscripts/n2-helper.py", line 506, in set_python_venv packages_supporting_python_versions = self.get_pip_packages_supporting_python_versions(args) File "src/helperscripts/n2-helper.py", line 84, in get_pip_packages_supporting_python_versions 'supported_python_versions'].values[0] IndexError: index 0 is out of bounds for axis 0 with size 0 </pre></div> </div> </div> </div> </div> </div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
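The per-framework update tabs above differ only in their version pins: the pip requirement gains an `==<tf_version>.*` suffix (except for the latest, 2.10.1), and the model-server package is pinned to `<tf_version>.<neuron_build>`. As an illustration only, the sketch below reconstructs the Ubuntu-style commands for a given TensorFlow version; `inf1_update_commands` is a hypothetical helper, not part of the Neuron SDK, and the `2.10.1.0` server build suffix is simply the one that appears in the commands above.

```python
# Hypothetical helper (illustration only): rebuild the update commands shown
# in the tabs above for one TensorFlow version on an Ubuntu DLAMI.

def inf1_update_commands(tf_version: str, server_build: str = "2.10.1.0") -> list[str]:
    """Return the pip and model-server update commands for one framework version."""
    # The latest framework version (2.10.1 in this guide) is installed unpinned.
    pip_pin = "" if tf_version == "2.10.1" else f"=={tf_version}.*"
    return [
        f'python -m pip install --upgrade "tensorflow-neuron[cc]{pip_pin}" protobuf',
        f"sudo apt-get install tensorflow-model-server-neuronx={tf_version}.{server_build} -y",
    ]

for cmd in inf1_update_commands("2.9.3"):
    print(cmd)
```

Running this for `"2.9.3"` reproduces the pinned pip requirement and the matching model-server version from the TensorFlow 2.9.3 tab.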
2023-09-29T20:55:18.913Z
Release Content — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/release-notes/releasecontent.html#neuron-release-content
# Release Content — AWS Neuron Documentation

## Release Content

Table of contents

- [Neuron 2.9.1 (04/19/2023)](#neuron-2-9-1-04-19-2023)
  - [Trn1 packages](#trn1-packages)
  - [Inf2 packages](#inf2-packages)
  - [Inf1 packages](#inf1-packages)
- [Previous Neuron Releases Content](#previous-neuron-releases-content)

## Neuron 2.9.1 (04/19/2023)

### Trn1 packages

```
List of packages in Neuron 2.9.1:

Component                         Package
Collective Communication Library  aws-neuronx-collectives-2.12.35.0
Driver                            aws-neuronx-dkms-2.8.4.0
CustomOps                         aws-neuronx-gpsimd-customop-0.2.3.0
CustomOps Tools                   aws-neuronx-gpsimd-tools-0.2.1.0
General                           aws-neuronx-runtime-discovery-2.9
Runtime Library                   aws-neuronx-runtime-lib-2.12.23.0
System Tools                      aws-neuronx-tools-2.9.5.0
General                           libneuronxla-0.5.205
General                           neuronx_hwm-2.5.0.0
PyTorch                           torch_xla-1.13.0
Compiler                          neuronx-cc-2.5.0.28
Kubernetes Plugin                 aws-neuronx-k8-plugin-2.12.5.0
Kubernetes Scheduler              aws-neuronx-k8-scheduler-2.12.5.0
OCI Hooks                         aws-neuronx-oci-hooks-2.1.97.0
TensorFlow                        tensorflow-neuronx-2.10.1.2.0.0
TensorFlow Model Server           tensorflow-model-server-neuronx-1.15.0.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.10.1.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.7.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.8.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.9.3.2.7.4.0
TensorBoard                       tensorboard-plugin-neuronx-2.5.25.0
PyTorch                           torch-neuronx-1.13.0.1.6.1
```

### Inf2 packages

```
List of packages in Neuron 2.9.1:

Component                         Package
Collective Communication Library  aws-neuronx-collectives-2.12.35.0
Driver                            aws-neuronx-dkms-2.8.4.0
CustomOps                         aws-neuronx-gpsimd-customop-0.2.3.0
CustomOps Tools                   aws-neuronx-gpsimd-tools-0.2.1.0
General                           aws-neuronx-runtime-discovery-2.9
Runtime Library                   aws-neuronx-runtime-lib-2.12.23.0
System Tools                      aws-neuronx-tools-2.9.5.0
General                           libneuronxla-0.5.205
General                           neuronx_hwm-2.5.0.0
PyTorch                           torch_xla-1.13.0
Compiler                          neuronx-cc-2.5.0.28
Kubernetes Plugin                 aws-neuronx-k8-plugin-2.12.5.0
Kubernetes Scheduler              aws-neuronx-k8-scheduler-2.12.5.0
OCI Hooks                         aws-neuronx-oci-hooks-2.1.97.0
TensorFlow                        tensorflow-neuronx-2.10.1.2.0.0
TensorFlow Model Server           tensorflow-model-server-neuronx-1.15.0.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.10.1.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.7.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.8.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.9.3.2.7.4.0
TensorBoard                       tensorboard-plugin-neuronx-2.5.25.0
PyTorch                           torch-neuronx-1.13.0.1.6.1
```

### Inf1 packages

```
List of packages in Neuron 2.9.1:

Component                 Package
Driver                    aws-neuronx-dkms-2.8.4.0
System Tools              aws-neuronx-tools-2.9.5.0
Compiler                  neuron-cc-1.14.3.0
Kubernetes Plugin         aws-neuronx-k8-plugin-2.12.5.0
Kubernetes Scheduler      aws-neuronx-k8-scheduler-2.12.5.0
OCI Hooks                 aws-neuronx-oci-hooks-2.1.97.0
TensorFlow                tensorflow-neuron-1.15.5.2.7.4.0
TensorFlow                tensorflow-neuron-2.10.1.2.7.4.0
TensorFlow                tensorflow-neuron-2.7.4.2.7.4.0
TensorFlow                tensorflow-neuron-2.8.4.2.7.4.0
TensorFlow                tensorflow-neuron-2.9.3.2.7.4.0
TensorFlow Model Server   tensorflow-model-server-neuronx-1.15.0.2.7.4.0
TensorFlow Model Server   tensorflow-model-server-neuronx-2.10.1.2.7.4.0
TensorFlow Model Server   tensorflow-model-server-neuronx-2.7.4.2.7.4.0
TensorFlow Model Server   tensorflow-model-server-neuronx-2.8.4.2.7.4.0
TensorFlow Model Server   tensorflow-model-server-neuronx-2.9.3.2.7.4.0
PyTorch                   torch-neuron-1.10.2.2.6.6.0
PyTorch                   torch-neuron-1.11.0.2.6.6.0
PyTorch                   torch-neuron-1.12.1.2.6.6.0
PyTorch                   torch-neuron-1.13.1.2.6.6.0
PyTorch                   torch-neuron-1.9.1.2.6.6.0
MXNet                     mxnet_neuron-1.5.1.1.10.37.0
MXNet                     mx_neuron-1.8.0.2.2.127.0
Perf Tools                neuronperf-1.7.1.0
Runtime Library           libnrt.so (Version 2.12.16.0)
```
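The framework package names in these lists embed two versions in one string: the upstream framework version followed by the Neuron build (for example, `tensorflow-neuron-2.9.3.2.7.4.0` is TensorFlow 2.9.3 with Neuron build 2.7.4.0). As an illustration only, assuming the first three dotted fields are the framework version, the two can be split like this (`split_neuron_version` is a hypothetical helper, not part of any Neuron tooling):

```python
# Hypothetical helper (illustration only): split a Neuron framework package
# name into its base name, upstream framework version, and Neuron build.
# Assumption: the first three dotted fields are the framework version.

def split_neuron_version(package: str) -> tuple[str, str, str]:
    """Split e.g. 'tensorflow-neuron-2.9.3.2.7.4.0' into (name, framework, neuron_build)."""
    name, _, version = package.rpartition("-")
    fields = version.split(".")
    return name, ".".join(fields[:3]), ".".join(fields[3:])

print(split_neuron_version("tensorflow-neuron-2.9.3.2.7.4.0"))
# → ('tensorflow-neuron', '2.9.3', '2.7.4.0')
```

The same split applies to the shorter build suffixes in the tables, e.g. `torch-neuronx-1.13.0.1.6.1` yields framework `1.13.0` and build `1.6.1`.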
torch.neuron.DataParallel API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuron/api-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Core Placement API [Experimental] </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox"> <label for="toctree-checkbox-12"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../general/appnotes/torch-neuron/bucketing-app-note.html"> Running Inference on Variable Input Shapes with Bucketing </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html"> Data Parallel Inference on PyTorch Neuron </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuron/guides/torch-lstm-support.html"> Developer Guide - PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) <code class="xref py py-class docutils literal notranslate"> <span class="pre"> LSTM </span> </code> Support </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Core Placement </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox"> <label 
for="toctree-checkbox-13"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Supported operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuron/troubleshooting-guide.html"> Troubleshooting Guide for PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="torch/torch-neuron/torch-neuron.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../frameworks/torch/training-torch-neuronx.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox"> <label for="toctree-checkbox-14"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox"> <label for="toctree-checkbox-15"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/bert.html"> Hugging Face BERT Pretraining Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/mlp.html"> Multi-Layer Perceptron Training Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html"> PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html"> Fine-tune T5 model on Trn1 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html"> ZeRO-1 Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html"> Analyze for Training Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/additional-examples-training.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox"> <label for="toctree-checkbox-16"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron"> AWS Neuron Reference for Nemo Megatron GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples"> AWS Neuron Samples for EKS </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples"> AWS Neuron Samples for AWS ParallelCluster </a> </li> <li class="toctree-l4"> <a class="reference external" 
href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox"> <label for="toctree-checkbox-17"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html"> PyTorch Neuron neuron_parallel_compile CLI ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html"> PyTorch Neuron Environment Variables ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../general/arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Profiling API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
href="../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/misc-training.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/torch/torch-neuronx/training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a 
class="reference internal" href="torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../frameworks/tensorflow/index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html"> API 
Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label 
for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li 
class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> 
tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../frameworks/tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../frameworks/mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../frameworks/mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../frameworks/mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label 
for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../frameworks/mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> 
</label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../frameworks/mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" 
name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../libraries/neuronx-distributed/index.html"> 
Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span 
class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" 
class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue"> <span class="headerbtn__icon-container"> <i class="fas fa-lightbulb"></i> </span> <span class="headerbtn__text-container">open issue</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/release-notes/releasecontent.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page"> <span class="headerbtn__icon-container"> <i class="fas fa-pencil-alt"></i> </span> <span class="headerbtn__text-container">suggest edit</span> </a> </li> </ul> </div> </div> <div class="menu-dropdown menu-dropdown-download-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Download this page"> <i class="fas fa-download"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="../_sources/release-notes/releasecontent.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file"> <span class="headerbtn__icon-container"> <i class="fas fa-file"></i> </span> <span class="headerbtn__text-container">.rst</span> </a> </li> <li> <button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF"> <span class="headerbtn__icon-container"> <i class="fas fa-file-pdf"></i> </span> <span class="headerbtn__text-container">.pdf</span> </button> </li> </ul> </div> </div> <label for="__page-toc" class="headerbtn headerbtn-page-toc"> <span class="headerbtn__icon-container"> <i class="fas fa-list"></i> </span> </label> </div> </div> <!-- Table of contents --> <div class="col-md-3 bd-toc show noprint"> <div class="tocsection onthispage pt-5 pb-3"> <i class="fas fa-list"></i> Contents </div> <nav id="bd-toc-nav" aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" 
href="#neuron-2-9-1-04-19-2023"> Neuron 2.9.1 (04/19/2023) </a> <ul class="nav section-nav flex-column"> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#trn1-packages"> Trn1 packages </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#inf2-packages"> Inf2 packages </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#inf1-packages"> Inf1 packages </a> </li> </ul> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#previous-neuron-releases-content"> Previous Neuron Releases Content </a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>Release Content</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#neuron-2-9-1-04-19-2023"> Neuron 2.9.1 (04/19/2023) </a> <ul class="nav section-nav flex-column"> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#trn1-packages"> Trn1 packages </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#inf2-packages"> Inf2 packages </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#inf1-packages"> Inf1 packages </a> </li> </ul> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#previous-neuron-releases-content"> Previous Neuron Releases Content </a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal 
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`

# Release Content

Table of contents

- [Neuron 2.9.1 (04/19/2023)](#neuron-2-9-1-04-19-2023)
  - [Trn1 packages](#trn1-packages)
  - [Inf2 packages](#inf2-packages)
  - [Inf1 packages](#inf1-packages)
- [Previous Neuron Releases Content](#previous-neuron-releases-content)

## Neuron 2.9.1 (04/19/2023)

### Trn1 packages

```
List of packages in Neuron 2.9.1:

Component                         Package
Collective Communication Library  aws-neuronx-collectives-2.12.35.0
Driver                            aws-neuronx-dkms-2.8.4.0
CustomOps                         aws-neuronx-gpsimd-customop-0.2.3.0
CustomOps Tools                   aws-neuronx-gpsimd-tools-0.2.1.0
General                           aws-neuronx-runtime-discovery-2.9
Runtime Library                   aws-neuronx-runtime-lib-2.12.23.0
System Tools                      aws-neuronx-tools-2.9.5.0
General                           libneuronxla-0.5.205
General                           neuronx_hwm-2.5.0.0
PyTorch                           torch_xla-1.13.0
Compiler                          neuronx-cc-2.5.0.28
Kubernetes Plugin                 aws-neuronx-k8-plugin-2.12.5.0
Kubernetes Scheduler              aws-neuronx-k8-scheduler-2.12.5.0
OCI Hooks                         aws-neuronx-oci-hooks-2.1.97.0
TensorFlow                        tensorflow-neuronx-2.10.1.2.0.0
TensorFlow Model Server           tensorflow-model-server-neuronx-1.15.0.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.10.1.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.7.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.8.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.9.3.2.7.4.0
TensorBoard                       tensorboard-plugin-neuronx-2.5.25.0
PyTorch                           torch-neuronx-1.13.0.1.6.1
```

### Inf2 packages

```
List of packages in Neuron 2.9.1:

Component                         Package
Collective Communication Library  aws-neuronx-collectives-2.12.35.0
Driver                            aws-neuronx-dkms-2.8.4.0
CustomOps                         aws-neuronx-gpsimd-customop-0.2.3.0
CustomOps Tools                   aws-neuronx-gpsimd-tools-0.2.1.0
General                           aws-neuronx-runtime-discovery-2.9
Runtime Library                   aws-neuronx-runtime-lib-2.12.23.0
System Tools                      aws-neuronx-tools-2.9.5.0
General                           libneuronxla-0.5.205
General                           neuronx_hwm-2.5.0.0
PyTorch                           torch_xla-1.13.0
Compiler                          neuronx-cc-2.5.0.28
Kubernetes Plugin                 aws-neuronx-k8-plugin-2.12.5.0
Kubernetes Scheduler              aws-neuronx-k8-scheduler-2.12.5.0
OCI Hooks                         aws-neuronx-oci-hooks-2.1.97.0
TensorFlow                        tensorflow-neuronx-2.10.1.2.0.0
TensorFlow Model Server           tensorflow-model-server-neuronx-1.15.0.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.10.1.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.7.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.8.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.9.3.2.7.4.0
TensorBoard                       tensorboard-plugin-neuronx-2.5.25.0
PyTorch                           torch-neuronx-1.13.0.1.6.1
```

### Inf1 packages

```
List of packages in Neuron 2.9.1:

Component                         Package
Driver                            aws-neuronx-dkms-2.8.4.0
System Tools                      aws-neuronx-tools-2.9.5.0
Compiler                          neuron-cc-1.14.3.0
Kubernetes Plugin                 aws-neuronx-k8-plugin-2.12.5.0
Kubernetes Scheduler              aws-neuronx-k8-scheduler-2.12.5.0
OCI Hooks                         aws-neuronx-oci-hooks-2.1.97.0
TensorFlow                        tensorflow-neuron-1.15.5.2.7.4.0
TensorFlow                        tensorflow-neuron-2.10.1.2.7.4.0
TensorFlow                        tensorflow-neuron-2.7.4.2.7.4.0
TensorFlow                        tensorflow-neuron-2.8.4.2.7.4.0
TensorFlow                        tensorflow-neuron-2.9.3.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-1.15.0.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.10.1.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.7.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.8.4.2.7.4.0
TensorFlow Model Server           tensorflow-model-server-neuronx-2.9.3.2.7.4.0
PyTorch                           torch-neuron-1.10.2.2.6.6.0
PyTorch                           torch-neuron-1.11.0.2.6.6.0
PyTorch                           torch-neuron-1.12.1.2.6.6.0
PyTorch                           torch-neuron-1.13.1.2.6.6.0
PyTorch                           torch-neuron-1.9.1.2.6.6.0
MXNet                             mxnet_neuron-1.5.1.1.10.37.0
MXNet                             mx_neuron-1.8.0.2.2.127.0
Perf Tools                        neuronperf-1.7.1.0
Runtime Library                   libnrt.so (Version 2.12.16.0)
```

## Previous Neuron Releases Content

- [Previous Releases Artifacts (Neuron 2.x)](prev/content.html#pre-release-content)
- [Previous Releases' Content (Neuron 1.x)](neuron1/prev/content.html#pre-n1-release-content)

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
2023-09-29T20:55:19.237Z
Install PyTorch Neuron (torch-neuron) — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/torch/torch-neuron/setup/pytorch-install.html#install-neuron-pytorch
# Install PyTorch Neuron (torch-neuron) — AWS Neuron Documentation

_This document is relevant for_: `Inf1`

## Install PyTorch Neuron (`torch-neuron`)

Note

- The instructions on this page only apply to setting up Neuron components on a Linux host running an Ubuntu or Amazon Linux AMI.
- For an example of how to install Neuron components in a container, see [Tutorial Docker environment setup](../../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup) and our neuron-containers documentation for more details.

Table of contents

- [Develop on AWS ML accelerator instance](#develop-on-aws-ml-accelerator-instance)
- [Compile on compute instance](#compile-on-compute-instance)
- [Deploy on AWS ML accelerator instance](#deploy-on-aws-ml-accelerator-instance)

## Develop on AWS ML accelerator instance

The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance. It is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMIs.

Important

For a successful installation or update to next releases (Neuron 1.20.0 and newer):

- Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.
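The tabs below differ only by AMI family: Ubuntu DLAMIs use `apt-get` and Python 3.8, while Amazon Linux 2 DLAMIs use `yum` and Python 3.7. As a minimal sketch (not part of the official instructions), the right system-package command can be selected by probing for the package manager; `PKG_CMD` and `PYTHON_BIN` are illustrative names:

```shell
# Sketch only: pick the system-package command for this guide's two AMI
# families. Ubuntu DLAMIs ship apt-get and use Python 3.8; Amazon Linux 2
# DLAMIs ship yum and use Python 3.7.
if command -v apt-get >/dev/null 2>&1; then
    PKG_CMD="sudo apt-get install -y python3.8-venv g++"
    PYTHON_BIN="python3.8"
else
    PKG_CMD="sudo yum install -y python3.7-venv gcc-c++"
    PYTHON_BIN="python3.7"
fi
echo "System packages: $PKG_CMD"
echo "Python interpreter: $PYTHON_BIN"
```

The remaining steps in each tab are identical once `PYTHON_BIN` is fixed, so the same script body can serve both AMI families.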
**PyTorch 1.13.1**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.12.1**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.11.0**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.10.2**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.10.2.* neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.10.2.* neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.9.1**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] "protobuf" torchvision
```

## Compile on compute instance

If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance. This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.

**PyTorch 1.13.1**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.12.1**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.11.0**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.10.2**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.10.2.* neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.10.2.* neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.9.1**

**Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] "protobuf" torchvision
```

**Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] "protobuf" torchvision
```

## Deploy on AWS ML accelerator instance

During deployment it can be beneficial to reduce the number of components installed in the system.
For use cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMI.

Important: For a successful installation or update to later releases (Neuron 1.20.0 and newer):

- Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.

**PyTorch 1.13.1**

**Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron torchvision
```

**Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron torchvision
```

**PyTorch 1.12.1**

**Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* torchvision
```

**Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* torchvision
```

**PyTorch 1.11.0**

**Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* torchvision
```

**Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* torchvision
```

**PyTorch 1.10.2**

**Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.10.2.* torchvision
```

**Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.10.2.* torchvision
```

**PyTorch 1.9.1**

**Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.9.1.* torchvision
```

**Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.9.1.* torchvision
```

_This document is relevant for_: `Inf1`
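After sourcing a venv built with the steps above, it can help to confirm that the expected packages actually resolved from the Neuron pip repository before moving on to compilation or serving. This is a minimal, hedged sketch (not part of the official instructions); it only checks that the packages are discoverable by the venv's interpreter, and reports rather than crashes when one is missing:

```python
# Hedged sketch: run with the venv's interpreter to confirm which of the
# packages installed above are discoverable. Uses only the standard library,
# so it works even before torch-neuron is installed.
import importlib.util


def package_available(name: str) -> bool:
    """Return True if `name` can be found on the current interpreter's path."""
    return importlib.util.find_spec(name) is not None


if __name__ == "__main__":
    # torch_neuron is the import name of the torch-neuron pip package.
    for pkg in ("torch", "torch_neuron", "torchvision"):
        status = "installed" if package_available(pkg) else "missing"
        print(f"{pkg}: {status}")
```

If `torch_neuron` reports missing even though `pip install` succeeded, a common cause is running the script with a different interpreter than the venv's; compare `which python` against the venv path.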
href="../../torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html"> PyTorch Neuron neuron_parallel_compile CLI ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html"> PyTorch Neuron Environment Variables ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Profiling API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch-neuronx/programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch-neuronx/misc-training.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch-neuronx/training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../tensorflow/index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../tensorflow/tensorflow-setup.html"> 
Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal 
notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils 
literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron 
</span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" 
href="../../../mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> 
<li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face 
facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" 
type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" 
href="../../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../neuron-runtime/index.html"> Neuron Runtime </a> <input 
class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> 
href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.3/frameworks/torch/torch-neuron/setup/pytorch-install.html">v1.16.3</a> </dd> <dd> <a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.2/frameworks/torch/torch-neuron/setup/pytorch-install.html">v1.16.2</a> </dd> <dd> <a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.1/frameworks/torch/torch-neuron/setup/pytorch-install.html">v1.16.1</a> </dd> <dd> <a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.0/frameworks/torch/torch-neuron/setup/pytorch-install.html">v1.16.0</a> </dd> </dl> <dl> <dt>Downloads</dt> <dd><a href="//awsdocs-neuron.readthedocs-hosted.com/_/downloads/en/v2.14.1/pdf/">PDF</a></dd> </dl> <dl> <dt>On GitHub</dt> <dd> <a href="https://github.com/aws/aws-neuron-sdk/blob/v2.14.1//frameworks/torch/torch-neuron/setup/pytorch-install.rst">View</a> </dd> </dl> <hr> <div> <div> Documentation hosted by <a href="https://readthedocs.com">Read the Docs</a> </div> </div> </div> </div> </div> </div> </div> <!-- A tiny helper pixel to detect if we've scrolled --> <div class="sbt-scroll-pixel-helper"></div> <!-- Main content --> <div class="col py-0 content-container"> <div class="header-article row sticky-top noprint"> <div class="col py-1 d-flex header-article-main"> <div class="header-article__left"> <label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation"> <span class="headerbtn__icon-container"> <i class="fas fa-bars"></i> </span> </label> </div> <div class="header-article__right"> <button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode"> <span class="headerbtn__icon-container"> <i class="fas fa-expand"></i> </span> </button> <div class="menu-dropdown menu-dropdown-repository-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories"> <i class="fab 
fa-github"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository"> <span class="headerbtn__icon-container"> <i class="fab fa-github"></i> </span> <span class="headerbtn__text-container">repository</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/torch/torch-neuron/setup/pytorch-install.html&amp;body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue"> <span class="headerbtn__icon-container"> <i class="fas fa-lightbulb"></i> </span> <span class="headerbtn__text-container">open issue</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/torch/torch-neuron/setup/pytorch-install.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page"> <span class="headerbtn__icon-container"> <i class="fas fa-pencil-alt"></i> </span> <span class="headerbtn__text-container">suggest edit</span> </a> </li> </ul> </div> </div> <div class="menu-dropdown menu-dropdown-download-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Download this page"> <i class="fas fa-download"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="../../../../_sources/frameworks/torch/torch-neuron/setup/pytorch-install.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file"> <span class="headerbtn__icon-container"> <i class="fas fa-file"></i> </span> <span class="headerbtn__text-container">.rst</span> </a> </li> <li> <button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF"> <span 
class="headerbtn__icon-container"> <i class="fas fa-file-pdf"></i> </span> <span class="headerbtn__text-container">.pdf</span> </button> </li> </ul> </div> </div> <label for="__page-toc" class="headerbtn headerbtn-page-toc"> <span class="headerbtn__icon-container"> <i class="fas fa-list"></i> </span> </label> </div> </div> <!-- Table of contents --> <div class="col-md-3 bd-toc show noprint"> <div class="tocsection onthispage pt-5 pb-3"> <i class="fas fa-list"></i> Contents </div> <nav id="bd-toc-nav" aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#develop-on-aws-ml-accelerator-instance"> Develop on AWS ML accelerator instance </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#compile-on-compute-instance"> Compile on compute instance </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#deploy-on-aws-ml-accelerator-instance"> Deploy on AWS ML accelerator instance </a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>Install PyTorch Neuron (torch-neuron)</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#develop-on-aws-ml-accelerator-instance"> Develop on AWS ML accelerator instance </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#compile-on-compute-instance"> Compile on compute instance </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#deploy-on-aws-ml-accelerator-instance"> Deploy 
<main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="install-pytorch-neuron-torch-neuron"> <span id="install-neuron-pytorch"></span><h1>Install PyTorch Neuron (<code class="docutils literal notranslate"><span class="pre">torch-neuron</span></code>)<a class="headerlink" href="#install-pytorch-neuron-torch-neuron" title="Permalink to this headline">#</a></h1> <div class="admonition note"> <p class="admonition-title">Note</p> <ul class="simple"> <li><p>The instructions on this page apply only to setting up Neuron components on a Linux host running Ubuntu or an Amazon Linux AMI.</p></li> <li><p>For an example of how to install Neuron components in a container, see <a class="reference internal" href="../../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup"><span class="std std-ref">Tutorial Docker environment setup</span></a> and our <span class="xref std std-ref">neuron-containers</span> documentation for more details.</p></li> </ul> </div> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#develop-on-aws-ml-accelerator-instance" id="id1">Develop on AWS ML accelerator instance</a></p></li> <li><p><a class="reference internal" href="#compile-on-compute-instance" id="id2">Compile on compute instance</a></p></li> <li><p><a class="reference internal" href="#deploy-on-aws-ml-accelerator-instance" id="id3">Deploy on AWS ML accelerator instance</a></p></li> </ul> </div> <div class="section" id="develop-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id1">Develop on AWS ML accelerator instance</a><a class="headerlink" href="#develop-on-aws-ml-accelerator-instance" title="Permalink to this
headline">#</a></h2> <p>The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance. This is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.</p> <p>Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMI.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For a successful installation of, or update to, Neuron 1.20.0 and newer releases:</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running: <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code></p></li> <li><p>Install or upgrade to the latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-0" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-0"> PyTorch 1.13.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-5" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-5"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p
class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> <input id="sd-tab-item-6" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-6"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-1" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-1"> PyTorch 1.12.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-7" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-7"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> <input id="sd-tab-item-8" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-8"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-2" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-2"> PyTorch 1.11.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-9" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-9"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> <input id="sd-tab-item-10" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-10"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-3" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-3"> PyTorch 1.10.2</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-11" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-11"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.10.2.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> <input id="sd-tab-item-12" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-12"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.10.2.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-4" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-4"> PyTorch 1.9.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-13" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-13"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> <input id="sd-tab-item-14" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-14"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="compile-on-compute-instance"> <h2><a class="toc-backref" href="#id2">Compile on compute instance</a><a class="headerlink" href="#compile-on-compute-instance" title="Permalink to this headline">#</a></h2> <p>If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance.
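The per-version tabs above differ only in package manager, Python minor version, and the pinned <code class="docutils literal notranslate"><span class="pre">torch-neuron</span></code> release. As an illustration only (a hypothetical helper script, not part of the Neuron SDK), the core command variants can be generated programmatically:

```python
# Illustrative sketch: reconstruct the core install commands that the
# OS / PyTorch-version tabs encode. VARIANTS mirrors the Ubuntu 20 and
# Amazon Linux 2 DLAMI Base tabs; nothing here is an official Neuron tool.

VARIANTS = {
    # OS tab -> (prerequisite packages command, Python interpreter)
    "ubuntu20": ("sudo apt-get install -y python3.8-venv g++", "python3.8"),
    "al2": ("sudo yum install -y python3.7-venv gcc-c++", "python3.7"),
}

def install_commands(os_name, torch_version=""):
    """Return the core setup commands for one OS tab; pin torch-neuron
    to `torch_version` when given, as the per-version tabs do."""
    prereq, py = VARIANTS[os_name]
    pin = f"=={torch_version}.*" if torch_version else ""
    return [
        prereq,
        f"{py} -m venv aws_neuron_venv_pytorch_inf1",
        "source aws_neuron_venv_pytorch_inf1/bin/activate",
        "python -m pip install -U pip",
        "python -m pip config set global.extra-index-url "
        "https://pip.repos.neuron.amazonaws.com",
        f'python -m pip install torch-neuron{pin} '
        'neuron-cc[tensorflow] "protobuf" torchvision',
    ]

print("\n".join(install_commands("al2", "1.12.1")))
```

Omitting <code class="docutils literal notranslate"><span class="pre">torch_version</span></code> reproduces the unpinned PyTorch 1.13.1 tab, which installs the latest release from the Neuron pip repository.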
This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.</p> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-15" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-15"> PyTorch 1.13.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-20" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-20"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> <input id="sd-tab-item-21" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-21"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-16" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-16"> PyTorch 1.12.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-22" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-22"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> <input id="sd-tab-item-23" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-23"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.12.1.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-17" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-17"> PyTorch 1.11.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-24" name="sd-tab-set-9" type="radio"> <label class="sd-tab-label" for="sd-tab-item-24"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> <input id="sd-tab-item-25" name="sd-tab-set-9" type="radio"> <label class="sd-tab-label" for="sd-tab-item-25"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf" torchvision
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-18" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-18"> PyTorch 1.10.2</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-26" name="sd-tab-set-10" type="radio"> <label class="sd-tab-label" for="sd-tab-item-26"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1

# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install PyTorch Neuron
python -m pip install
torch-neuron==1.10.2.* neuron-cc[tensorflow] "protobuf" torchvision </pre></div> </div> </div> <input id="sd-tab-item-27" name="sd-tab-set-10" type="radio"> <label class="sd-tab-label" for="sd-tab-item-27"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.10.2.* neuron-cc[tensorflow] "protobuf" torchvision </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-19" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-19"> PyTorch 1.9.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-28" name="sd-tab-set-11" type="radio"> <label class="sd-tab-label" for="sd-tab-item-28"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script 
file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] "protobuf" torchvision </pre></div> </div> </div> <input id="sd-tab-item-29" name="sd-tab-set-11" type="radio"> <label class="sd-tab-label" for="sd-tab-item-29"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com 
# Install PyTorch Neuron python -m pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] "protobuf" torchvision </pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="deploy-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id3">Deploy on AWS ML accelerator instance</a><a class="headerlink" href="#deploy-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>During deployment it can be beneficial to reduce the number of components installed in the system. For use-cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.</p> <p>Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the Base DLAMIs respectively.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For successful installation or update to next releases (Neuron 1.20.0 and newer):</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running: <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code></p></li> <li><p>Install or upgrade to latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-30" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-30"> PyTorch 1.13.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input 
checked="checked" id="sd-tab-item-35" name="sd-tab-set-13" type="radio"> <label class="sd-tab-label" for="sd-tab-item-35"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron torchvision </pre></div> </div> </div> <input id="sd-tab-item-36" name="sd-tab-set-13" type="radio"> <label class="sd-tab-label" for="sd-tab-item-36"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # 
Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron torchvision </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-31" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-31"> PyTorch 1.12.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-37" name="sd-tab-set-14" type="radio"> <label class="sd-tab-label" for="sd-tab-item-37"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.12.1.* torchvision </pre></div> </div> </div> <input id="sd-tab-item-38" name="sd-tab-set-14" 
type="radio"> <label class="sd-tab-label" for="sd-tab-item-38"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.12.1.* torchvision </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-32" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-32"> PyTorch 1.11.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-39" name="sd-tab-set-15" type="radio"> <label class="sd-tab-label" for="sd-tab-item-39"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y 
python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.11.0.* torchvision </pre></div> </div> </div> <input id="sd-tab-item-40" name="sd-tab-set-15" type="radio"> <label class="sd-tab-label" for="sd-tab-item-40"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.11.0.* torchvision </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-33" name="sd-tab-set-12" type="radio"> <label 
class="sd-tab-label" for="sd-tab-item-33"> PyTorch 1.10.2</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-41" name="sd-tab-set-16" type="radio"> <label class="sd-tab-label" for="sd-tab-item-41"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.10.2.* torchvision </pre></div> </div> </div> <input id="sd-tab-item-42" name="sd-tab-set-16" type="radio"> <label class="sd-tab-label" for="sd-tab-item-42"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv 
python3.7 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.10.2.* torchvision </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-34" name="sd-tab-set-12" type="radio"> <label class="sd-tab-label" for="sd-tab-item-34"> PyTorch 1.9.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-43" name="sd-tab-set-17" type="radio"> <label class="sd-tab-label" for="sd-tab-item-43"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo apt-get install -y python3.8-venv g++ # Create Python venv python3.8 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # 
Install PyTorch Neuron python -m pip install torch-neuron==1.9.1.* torchvision </pre></div> </div> </div> <input id="sd-tab-item-44" name="sd-tab-set-17" type="radio"> <label class="sd-tab-label" for="sd-tab-item-44"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv sudo yum install -y python3.7-venv gcc-c++ # Create Python venv python3.7 -m venv aws_neuron_venv_pytorch_inf1 # Activate Python venv source aws_neuron_venv_pytorch_inf1/bin/activate python -m pip install -U pip # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install PyTorch Neuron python -m pip install torch-neuron==1.9.1.* torchvision </pre></div> </div> </div> </div> </div> </div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script 
src="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
2023-09-29T20:55:19.391Z
Update to latest PyTorch Neuron (torch-neuron) — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/torch/torch-neuron/setup/pytorch-update.html#update-neuron-pytorch
# Update to latest PyTorch Neuron (torch-neuron) — AWS Neuron Documentation

_This document is relevant for_: `Inf1`

## Update to latest PyTorch Neuron (`torch-neuron`)

Note

- Instructions in this page only apply to setting up Neuron components on a Linux host running Ubuntu or Amazon Linux AMI.
- For an example of how to install Neuron components in a container, see [Tutorial Docker environment setup](../../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup) and the neuron-containers documentation for more details.

Table of contents

- [Develop on AWS ML accelerator instance](#develop-on-aws-ml-accelerator-instance)
- [Compile on compute instance](#compile-on-compute-instance)
- [Deploy on AWS ML accelerator instance](#deploy-on-aws-ml-accelerator-instance)

## Develop on AWS ML accelerator instance

The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance. This is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMI.

Important

For a successful installation or update to the next releases (Neuron 1.20.0 and newer):

- Uninstall `aws-neuron-dkms` by running `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`.
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.
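The driver remove/reinstall step in the Important note above can be sketched as a guarded script. This is illustrative only: the `APPLY` environment-variable guard is my addition (not part of the official instructions), and the commands assume an Ubuntu host with `apt`.

```shell
#!/bin/sh
# Dry-run sketch of the aws-neuron-dkms remove/reinstall step described above.
# Nothing on the system is changed unless APPLY=1 is set in the environment.
APPLY=${APPLY:-0}
if [ "$APPLY" = "1" ]; then
  sudo apt remove -y aws-neuron-dkms   # on Amazon Linux: sudo yum remove -y aws-neuron-dkms
  sudo apt install -y aws-neuron-dkms  # then follow the "Setup Guide" for driver setup
else
  echo "dry run: would remove and reinstall aws-neuron-dkms"
fi
```

Running it without `APPLY=1` only prints what it would do, which makes the script safe to keep in provisioning code while you review it.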
**PyTorch 1.13.1 — Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.

```
# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update PyTorch Neuron
python -m pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.13.1 — Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.

```
# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update PyTorch Neuron
python -m pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

## Compile on compute instance

If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance.
This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.

**PyTorch 1.13.1 — Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.

```
# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update PyTorch Neuron
python -m pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

**PyTorch 1.13.1 — Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.

```
# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update PyTorch Neuron
python -m pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

## Deploy on AWS ML accelerator instance

During deployment it can be beneficial to reduce the number of components installed in the system.
For use cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMI.

Important

For a successful installation or update to the next releases (Neuron 1.20.0 and newer):

- Uninstall `aws-neuron-dkms` by running `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`.
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.

**PyTorch 1.13.1 — Ubuntu 20 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.

```
# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update PyTorch Neuron
python -m pip install --upgrade torch-neuron torchvision
```

**PyTorch 1.13.1 — Amazon Linux 2 DLAMI Base**

Note: For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source it.
```
# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update PyTorch Neuron
python -m pip install --upgrade torch-neuron torchvision
```

_This document is relevant for_: `Inf1`
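After running one of the update commands above, it can be useful to confirm that the installed `torch-neuron` is at least the version you expect. A minimal sketch of such a check follows; the version strings are illustrative placeholders (in practice you would substitute the version reported by `pip show torch-neuron`), and it assumes GNU `sort -V` is available.

```shell
#!/bin/sh
# Compare an installed version string against a required minimum using
# version-aware sort (sort -V). Both values below are placeholders.
installed="1.13.1"
minimum="1.12.0"

# The smaller of the two versions sorts first; if that is the minimum,
# the installed version satisfies it.
lowest=$(printf '%s\n%s\n' "$minimum" "$installed" | sort -V | head -n 1)
if [ "$lowest" = "$minimum" ]; then
  echo "torch-neuron $installed meets the minimum $minimum"
else
  echo "torch-neuron $installed is older than $minimum; re-run the update"
fi
```

`sort -V` handles multi-digit components correctly (e.g. `1.9.1` vs `1.13.1`), which a plain lexical comparison would get wrong.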
<li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face 
facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" 
type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" 
href="../../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../neuron-runtime/index.html"> Neuron Runtime </a> <input 
class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> 
Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i 
class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox"> <label for="toctree-checkbox-58"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../compiler/neuron-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html"> Neuron Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../neuron-customops/index.html"> Neuron C++ Custom Operators </a> <input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox"> <label for="toctree-checkbox-59"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox"> <label for="toctree-checkbox-60"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html"> Custom Operators API Reference Guide 
[Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox"> <label for="toctree-checkbox-61"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html"> Neuron Custom C++ Operators Developer Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox"> <label for="toctree-checkbox-62"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../neuron-customops/misc-customops.html"> Misc (Neuron Custom C++ Operators) </a> <input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox"> <label for="toctree-checkbox-63"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html"> Neuron Custom C++ Tools Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html"> Neuron Custom C++ 
Library Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../tools/index.html"> Neuron Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox"> <label for="toctree-checkbox-64"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html"> System Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox"> <label for="toctree-checkbox-65"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html"> Neuron-Monitor User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html"> Neuron-Top User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html"> Neuron-LS User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html"> Neuron Profile User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html"> Neuron-Sysfs User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html"> NCCOM-TEST User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html"> What's New </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tools/tensorboard/index.html"> TensorBoard </a> <input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" 
type="checkbox"> <label for="toctree-checkbox-66"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html"> Track Training Progress in TensorBoard using PyTorch Neuron </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html"> TensorBoard Plugin for Neuron (Trn1) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html"> What's New </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html"> TensorBoard Plugin for Neuron (Inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tools/helper-tools/index.html"> Helper Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox"> <label for="toctree-checkbox-67"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html"> Check Model </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html"> GatherInfo </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tools/neuronperf/index.html"> NeuronPerf (Beta) </a> <input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox"> <label for="toctree-checkbox-68"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html"> Overview </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../../tools/neuronperf/neuronperf_terminology.html"> Terminology </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html"> Examples </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html"> Benchmark Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html"> Evaluate Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html"> Compile Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html"> Model Index Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html"> API </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html"> Framework Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html"> Troubleshooting </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../tools/neuronperf/rn.html"> What’s New </a> <input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox"> <label for="toctree-checkbox-69"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../release-notes/tools/neuronperf.html"> NeuronPerf 1.x Release Notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../general/calculator/neuron-calculator.html"> Neuron 
Calculator </a> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../general/setup/index.html"> Setup Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox"> <label for="toctree-checkbox-70"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/setup/torch-neuronx.html"> PyTorch Neuron (torch-neuronx) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/setup/torch-neuron.html"> PyTorch Neuron (torch-neuron) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html"> Tensorflow Neuron (tensorflow-neuronx) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html"> Tensorflow Neuron (tensorflow-neuron) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../general/setup/mxnet-neuron.html"> MxNet Neuron (mxnet-neuron) </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../containers/index.html"> Containers Deployment </a> <input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox"> <label for="toctree-checkbox-71"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html"> Locate Neuron DLC Image </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../containers/getting-started.html"> Getting Started </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../containers/kubernetes-getting-started.html"> Kubernetes Getting Started </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../containers/tutorials.html"> Tutorials </a> 
.. _update-neuron-pytorch:

Update to latest PyTorch Neuron (``torch-neuron``)
==================================================

*This document is relevant for*: ``Inf1``

.. note::

   - Instructions on this page only apply to setting up Neuron components on a Linux host running Ubuntu or Amazon Linux AMI.
   - For an example of how to install Neuron components in a container, see :ref:`tutorial-docker-env-setup` and our :ref:`neuron-containers` documentation for more details.

.. contents:: Table of contents
   :local:
   :depth: 2

Develop on AWS ML accelerator instance
--------------------------------------

The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance.
This is the recommended workflow when first starting to work with a Neuron device, or when optimizing a model.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMI.

.. important::

   For a successful installation or update to next releases (Neuron 1.20.0 and newer):

   - Uninstall ``aws-neuron-dkms`` by running: ``sudo apt remove aws-neuron-dkms`` or ``sudo yum remove aws-neuron-dkms``
   - Install or upgrade to the latest Neuron driver (``aws-neuron-dkms``) by following the "Setup Guide" instructions.

.. tab-set::

   .. tab-item:: PyTorch 1.13.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. note::

               For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

            .. code-block:: bash

               # Activate Python venv
               source aws_neuron_venv_pytorch_inf1/bin/activate

               # Install Jupyter notebook kernel
               pip install ipykernel
               python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
               pip install jupyter notebook
               pip install environment_kernels

               # Set pip repository pointing to the Neuron repository
               python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

               # Update PyTorch Neuron
               python -m pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf" torchvision

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. note::

               For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

            .. code-block:: bash

               # Activate Python venv
               source aws_neuron_venv_pytorch_inf1/bin/activate

               # Install Jupyter notebook kernel
               pip install ipykernel
               python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
               pip install jupyter notebook
               pip install environment_kernels

               # Set pip repository pointing to the Neuron repository
               python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

               # Update PyTorch Neuron
               python -m pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf" torchvision

Compile on compute instance
---------------------------

If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance. This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.

.. tab-set::

   .. tab-item:: PyTorch 1.13.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. note::

               For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

            .. code-block:: bash

               # Activate Python venv
               source aws_neuron_venv_pytorch_inf1/bin/activate

               # Install Jupyter notebook kernel
               pip install ipykernel
               python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
               pip install jupyter notebook
               pip install environment_kernels

               # Set pip repository pointing to the Neuron repository
               python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

               # Update PyTorch Neuron
               python -m pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf" torchvision

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. note::

               For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

            .. code-block:: bash

               # Activate Python venv
               source aws_neuron_venv_pytorch_inf1/bin/activate

               # Install Jupyter notebook kernel
               pip install ipykernel
               python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
               pip install jupyter notebook
               pip install environment_kernels

               # Set pip repository pointing to the Neuron repository
               python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

               # Update PyTorch Neuron
               python -m pip install --upgrade torch-neuron neuron-cc[tensorflow] "protobuf" torchvision

Deploy on AWS ML accelerator instance
-------------------------------------

During deployment it can be beneficial to reduce the number of components installed in the system.
For use-cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMI.

.. important::

   For a successful installation or update to next releases (Neuron 1.20.0 and newer):

   - Uninstall ``aws-neuron-dkms`` by running: ``sudo apt remove aws-neuron-dkms`` or ``sudo yum remove aws-neuron-dkms``
   - Install or upgrade to the latest Neuron driver (``aws-neuron-dkms``) by following the "Setup Guide" instructions.

.. tab-set::

   .. tab-item:: PyTorch 1.13.1

      .. tab-set::

         .. tab-item:: Ubuntu 20 DLAMI Base

            .. note::

               For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

            .. code-block:: bash

               # Activate Python venv
               source aws_neuron_venv_pytorch_inf1/bin/activate

               # Install Jupyter notebook kernel
               pip install ipykernel
               python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
               pip install jupyter notebook
               pip install environment_kernels

               # Set pip repository pointing to the Neuron repository
               python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

               # Update PyTorch Neuron
               python -m pip install --upgrade torch-neuron torchvision

         .. tab-item:: Amazon Linux 2 DLAMI Base

            .. note::

               For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

            .. code-block:: bash

               # Activate Python venv
               source aws_neuron_venv_pytorch_inf1/bin/activate

               # Install Jupyter notebook kernel
               pip install ipykernel
               python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
               pip install jupyter notebook
               pip install environment_kernels

               # Set pip repository pointing to the Neuron repository
               python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

               # Update PyTorch Neuron
               python -m pip install --upgrade torch-neuron torchvision

*This document is relevant for*: ``Inf1``

© Copyright 2023, Amazon.com.
2023-09-29T20:55:19.560Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/index.rst.txt
```
.. _neuron-appnotes-index:
.. _neuron-appnotes:

Neuron Application Notes
========================

.. dropdown::  Neuron 2.x
      :class-title: sphinx-design-class-title-med
      :class-body: sphinx-design-class-body-small
      :animate: fade-in

      .. toctree::
         :maxdepth: 1

         /general/announcements/neuron2.x/neuron2-intro

.. dropdown::  Neuron Runtime library
      :class-title: sphinx-design-class-title-med
      :class-body: sphinx-design-class-body-small
      :animate: fade-in

      .. toctree::
         :maxdepth: 1

         /general/appnotes/neuron1x/introducing-libnrt

.. dropdown::  Performance (Inf1)
      :class-title: sphinx-design-class-title-med
      :class-body: sphinx-design-class-body-small
      :animate: fade-in

      .. toctree::
         :maxdepth: 1

         /general/appnotes/perf/neuron-cc/performance-tuning
         /general/appnotes/perf/neuron-cc/parallel-ncgs

.. dropdown::  PyTorch Neuron (torch-neuron)
      :class-title: sphinx-design-class-title-med
      :class-body: sphinx-design-class-body-small
      :animate: fade-in

      .. toctree::
         :maxdepth: 1

         /general/appnotes/torch-neuron/rcnn-app-note

.. dropdown::  Transformers Neuron (transformers-neuronx)
      :class-title: sphinx-design-class-title-med
      :class-body: sphinx-design-class-body-small
      :animate: fade-in

      .. toctree::
         :maxdepth: 1

         /general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/perf/neuron-cc/performance-tuning.rst.txt
```
.. _appnote-performance-tuning:

Performance Tuning
==================

.. important ::

   NeuronCore Groups (NCG) is deprecated, please see :ref:`eol-ncg` and
   :ref:`neuron-migrating-apps-neuron-to-libnrt` for more details.

This guide is intended to provide the reader with an in-depth understanding
of how to optimize neural network performance on Inferentia for both
throughput and latency. For simplicity, the guide uses TensorFlow and the
ResNet-50 model as a teaching example to show how choosing between different
compile-time optimizations (e.g. Batching and NeuronCore Pipeline), as well
as model-serving optimizations (e.g. multi-threading and dynamic-batching),
improves inference performance.

The following guides are considered prerequisites for this tutorial:

- :ref:`/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb`
- :ref:`tensorflow-serving-neurocore-group`
- :ref:`neuron-batching`
- :ref:`neuroncore-pipeline`

Batching and pipelining (technical background)
----------------------------------------------

Neuron provides developers with various performance optimization features.
Two of the most widely used features are batching and pipelining. Both
techniques aim to keep the data close to the compute engines, but achieve
this data locality in different ways. In batching, it is achieved by loading
the data into an on-chip cache and reusing it multiple times for multiple
different model-inputs, while in pipelining it is achieved by caching all
model parameters in the on-chip cache across multiple NeuronCores and
streaming the calculation across them.

As a general rule of thumb, batching is preferred for applications that aim
to optimize throughput and cost at the expense of latency, while pipelining
is preferred for applications with a high-throughput requirement under a
strict latency budget.

Compiling for batching optimization
-----------------------------------

To enable the batching optimization, we first need to compile the model for
a target batch-size. This is done by specifying the batch size in the input
tensor's batch dimension during compilation. Users are encouraged to
evaluate multiple batch sizes in order to determine the optimal
latency/throughput deployment-point, which is application dependent.

For example, the code snippet below enables batching on a ResNet50 model,
with a batch-size of 5:

.. code:: python

   import numpy as np
   import tensorflow.neuron as tfn

   # To change the batch size, change the first dimension in example_input
   batch_size = 5
   example_input = np.zeros([batch_size,224,224,3], dtype='float16')

   tfn.saved_model.compile("rn50_fp16", "rn50_fp16_compiled/1",
                           model_feed_dict={'input_1:0': example_input},
                           dynamic_batch_size=True)

.. note::

   Depending on the neural network size, Neuron will have a maximum batch
   size that works optimally on Inferentia. If an unsupported batch size is
   used, an internal compiler error message will be displayed. A simple way
   to explore the optimal batch size for your specific model is to increment
   the batch size from 1 upward, one at a time, and test application
   performance.

Compiling for pipeline optimization
-----------------------------------

With NeuronCore Pipeline mode, Neuron stores the model parameters in the
Inferentia devices' local caches, and streams the inference requests across
the available NeuronCores, as specified by the
``--neuroncore-pipeline-cores`` compiler argument. For example, to compile
the model to fit a pipeline size of four Inferentia devices (16 NeuronCores),
as available in the inf1.6xlarge instance size:

.. code:: python

   import numpy as np
   import tensorflow.neuron as tfn

   compiler_args = ['--neuroncore-pipeline-cores', '16']
   example_input = np.zeros([1,224,224,3], dtype='float16')

   tfn.saved_model.compile("rn50_fp16", "rn50_fp16_compiled/1",
                           model_feed_dict={'input_1:0': example_input},
                           compiler_args=compiler_args)

The minimum number of NeuronCores needed to run a compiled model can be
found using the Neuron Check Model tool. Please see
:ref:`neuron_check_model`.

Model-serving inference optimizations
-------------------------------------

In order to fully realize the maximum throughput of the compiled model (for
either batching or pipelining), users need to launch multiple host CPU
threads to feed inputs into the Neuron pipeline. The number of threads needs
to be larger than the specified maximum number of NeuronCores.

Additionally, dynamic batching can be used to process a larger client-side
inference batch-size, and the framework automatically breaks up the
user-batch into smaller batch sizes to match the compiled batch-size. This
technique increases the achievable throughput by hiding the
framework-to-neuron overhead, and amortizing it over a larger batch size. To
use dynamic batching, set the argument ``--dynamic_batch_size=True`` during
compilation and send a larger inference batch size (user inference batch
size) that is equal to a multiple of the compiled batch size.

Both methods can be applied together if that shows improvement in
performance. However, multi-threading is always needed as a first step to
achieve high throughput. You may need to experiment in order to find the
right optimization settings for your application.

By default, the framework sets the number of outstanding inference requests
to the total number of NeuronCores plus three. This can be changed by
setting the NEURON_MAX_NUM_INFERS environment variable. For example, if the
compiled model includes some CPU partitions (as when the Neuron compiler
decides some operations are more efficient to execute on CPU), the number of
threads should be increased to account for the additional compute performed
on the CPU. Note that the available instance host memory size should be
taken into consideration to avoid out-of-memory errors. As above, you need
to experiment in order to find the right optimization settings for your
application.

.. note::

   By default the framework allocates a NeuronCore Group size to match the
   size of the compiled model. The size of the model is the NeuronCores
   limit passed to the compiler during compilation (the
   ``--neuroncore-pipeline-cores`` option). For more information see
   :ref:`tensorflow-serving-neurocore-group`.

Other considerations
--------------------

Mixed Precision
~~~~~~~~~~~~~~~

You can find more details about performance and accuracy trade-offs in
:ref:`neuron-cc-training-mixed-precision`.

Operator support
~~~~~~~~~~~~~~~~

The Neuron Compiler maintains an evolving list of supported operators for
each framework: :ref:`neuron-supported-operators`

AWS Neuron handles unsupported operators by partitioning the graph into
subgraphs, and executing them on different targets (e.g. NeuronCore
partition, CPU partition). If the entire model can run on Inferentia (i.e.
all operators are supported), then the model will be compiled into a single
subgraph, which will be executed by a NeuronCore Group.

Debug
~~~~~

You can examine the post-compiled model to view the compilation results
using the Neuron plugin for TensorBoard. See
:ref:`tensorboard-plugin-visualize-graph`.

ResNet-50 optimization example
------------------------------

For an example demonstrating the concepts described here, see
:ref:`/src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb`
```
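The dynamic-batching behavior described above — the framework splitting a large user batch into compiled-batch-size chunks and re-assembling the outputs — can be sketched in plain NumPy. This is an illustration of the concept only, not a Neuron API: `run_dynamic_batched` and the stub `infer_fn` are hypothetical names standing in for the framework's internal logic and a compiled model.

```python
import numpy as np

def run_dynamic_batched(infer_fn, inputs, compiled_batch):
    """Split a user batch into compiled-size chunks, run each chunk,
    and re-assemble the outputs in order (a sketch of what the
    framework does internally when dynamic batching is enabled)."""
    n = inputs.shape[0]
    if n % compiled_batch != 0:
        raise ValueError("user batch size must be a multiple of the compiled batch size")
    # Run one inference per compiled-size chunk, preserving order.
    outputs = [infer_fn(inputs[i:i + compiled_batch])
               for i in range(0, n, compiled_batch)]
    return np.concatenate(outputs, axis=0)

# Stand-in for a model compiled with batch size 5: simply doubles its input.
infer_fn = lambda x: x * 2.0

# User batch of 15 is a multiple of the compiled batch size 5, so it is
# split into three chunks.
user_batch = np.ones([15, 224, 224, 3], dtype='float16')
result = run_dynamic_batched(infer_fn, user_batch, compiled_batch=5)
```

This also shows why the user batch must be a multiple of the compiled batch size: there is no partial-batch path in this sketch, mirroring the requirement stated above.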
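The multi-threading guidance above (keep more in-flight requests than NeuronCores) can be sketched with a standard thread pool. This is a minimal illustration, not Neuron code: `NUM_NEURONCORES` and the stub `predict` are placeholders for your own deployment, where `predict` would call into the loaded compiled model.

```python
from concurrent.futures import ThreadPoolExecutor

NUM_NEURONCORES = 4                 # e.g. one Inferentia device on inf1.xlarge
NUM_THREADS = NUM_NEURONCORES + 3   # mirror the framework's default in-flight depth

def predict(request_id):
    # Placeholder for a call into the compiled model; a real server would
    # invoke the framework predictor here. Squaring stands in for inference.
    return request_id * request_id

# Feed requests from multiple host threads so the NeuronCores stay busy.
with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
    results = list(pool.map(predict, range(100)))
```

The thread count here follows the "NeuronCores plus three" default mentioned above; in practice you would tune it (and `NEURON_MAX_NUM_INFERS`) experimentally for your model.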
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/neuron2-intro.rst.txt
```
.. post:: Oct 10, 2022 04:00
   :language: en
   :tags: neuron2.x

.. _neuron2-intro:

Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
===================================================================================

:ref:`Neuron release 2.3 <neuron2x-trn1ga>` is the first release of Neuron 2.x, and enables GA of the new EC2 Trn1 instances. Neuron release 2.3 extends the latest release of Neuron 1.x (Neuron 1.19.2) to add support for deep-learning training on AWS Trainium chips.

Starting with :ref:`Neuron release 2.3 <neuron2x-trn1ga>`, developers can run deep learning training workloads on Trn1 instances to save training costs by up to 50% over equivalent GPU-based EC2 instances, while getting the highest training performance in the AWS cloud for popular NLP models.

Neuron 2.x introduces new capabilities and major architectural updates to support training neural networks with Trn1 instances. In addition, starting with this release, Neuron introduces new packages, renames several packages, and updates the Neuron installation and update instructions. This release also ends support for Neuron Runtime 1.x.

More about the release
----------------------

.. include:: /release-notes/templates/n2.x-trn1-ga-quick.txt
```
Install MXNet Neuron — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/mxnet-neuron/setup/mxnet-install.html#install-neuron-mxnet
# Install MXNet Neuron — AWS Neuron Documentation

_This document is relevant for_: `Inf1`

## Install MXNet Neuron[#](#install-mxnet-neuron "Permalink to this headline")

Note

- Instructions in this page only apply to setting up Neuron components on a Linux host running Ubuntu or Amazon Linux AMI.
- For an example of how to install Neuron components in a container, see [Tutorial Docker environment setup](../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup) and our neuron-containers documentation for more details.

Table of contents

- [Develop on AWS ML accelerator instance](#develop-on-aws-ml-accelerator-instance)
- [Compile on compute instance](#compile-on-compute-instance)
- [Deploy on AWS ML accelerator instance](#deploy-on-aws-ml-accelerator-instance)

## [Develop on AWS ML accelerator instance](#id1)[#](#develop-on-aws-ml-accelerator-instance "Permalink to this headline")

The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance. This is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the Base DLAMIs respectively.

Important

For successful installation or update to next releases (Neuron 1.20.0 and newer):

- Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the “Setup Guide” instructions.

MXNet 1.8.0

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
```

MXNet 1.5.1

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

## [Compile on compute instance](#id2)[#](#compile-on-compute-instance "Permalink to this headline")

If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance. This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.

MXNet 1.8.0

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
```

MXNet 1.5.1

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

## Deploy on AWS ML accelerator instance

During deployment it can be beneficial to reduce the number of components installed in the system. For use-cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.

Note

If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMIs.

Important

For successful installation or update to next releases (Neuron 1.20.0 and newer):

- Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.

MXNet 1.8.0

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
```

MXNet 1.5.1

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.

```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

_This document is relevant for_: `Inf1`
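After completing any of the install flows above, a quick smoke test can confirm that the packages resolve from inside the virtual environment. This is a hypothetical check, not part of the official instructions; it assumes the `aws_neuron_venv_mxnet_inf1` venv created above is still active, and that the pip packages expose the import names shown (`mxnet` for the aws_mx wheel, `mx_neuron` for the MXNet 1.8.0 Neuron plugin).

```
# Hypothetical post-install smoke test (venv must be activated first):
source aws_neuron_venv_mxnet_inf1/bin/activate

# The aws_mx wheel provides the standard mxnet module.
python -c "import mxnet; print(mxnet.__version__)"

# For the MXNet 1.8.0 flow, the Neuron plugin is a separate mx_neuron package;
# for MXNet 1.5.1, Neuron support is built into the mxnet_neuron build instead.
python -c "import mx_neuron"
```

If either import fails, re-check that the Neuron pip repository was configured (`python -m pip config set global.extra-index-url ...`) before the install step.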
id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> 
tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> 
<a class="reference internal" href="../tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p 
aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" 
href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" 
href="../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a 
class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> 
Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox"> <label for="toctree-checkbox-58"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html"> Neuron Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-customops/index.html"> Neuron C++ Custom Operators </a> <input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox"> <label for="toctree-checkbox-59"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox"> <label for="toctree-checkbox-60"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html"> Custom Operators API Reference Guide [Experimental] </a> </li> </ul> </li> <li 
class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox"> <label for="toctree-checkbox-61"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html"> Neuron Custom C++ Operators Developer Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox"> <label for="toctree-checkbox-62"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/misc-customops.html"> Misc (Neuron Custom C++ Operators) </a> <input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox"> <label for="toctree-checkbox-63"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html"> Neuron Custom C++ Tools Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html"> Neuron Custom C++ Library Release Notes </a> </li> </ul> </li> </ul> </li> <li 
<main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="install-mxnet-neuron"> <span id="install-neuron-mxnet"></span><h1>Install MXNet Neuron<a class="headerlink" href="#install-mxnet-neuron" title="Permalink to this headline">#</a></h1> <div class="admonition note"> <p class="admonition-title">Note</p> <ul class="simple"> <li><p>The instructions on this page apply only to setting up Neuron components on a Linux host running Ubuntu or an Amazon Linux AMI.</p></li> <li><p>For an example of how to install Neuron components in a container, see <a class="reference internal" href="../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup"><span class="std std-ref">Tutorial Docker environment setup</span></a> and our <span class="xref std std-ref">neuron-containers</span> documentation for more details.</p></li> </ul> </div> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#develop-on-aws-ml-accelerator-instance" id="id1">Develop on AWS ML accelerator instance</a></p></li> <li><p><a class="reference internal" href="#compile-on-compute-instance" id="id2">Compile on compute instance</a></p></li> <li><p><a class="reference internal" href="#deploy-on-aws-ml-accelerator-instance" id="id3">Deploy on AWS ML accelerator instance</a></p></li> </ul> </div> <div class="section" id="develop-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id1">Develop on AWS ML accelerator instance</a><a class="headerlink" href="#develop-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>The simplest environment setup
for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance. This is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.</p> <p>Note: If you are using a regular Ubuntu 18, Ubuntu 20, or Amazon Linux 2 AMI, follow the same setup instructions as the corresponding Base DLAMI.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For a successful installation of, or update to, Neuron 1.20.0 and newer releases:</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code>.</p></li> <li><p>Install or upgrade to the latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-0" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-0"> MXNet 1.8.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-2" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-2"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute
each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
</pre></div> </div> </div> <input id="sd-tab-item-3" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-3"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-1" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-1"> MXNet 1.5.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-4" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-4"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
</pre></div> </div> </div> <input id="sd-tab-item-5" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-5"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
</pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="compile-on-compute-instance"> <h2><a class="toc-backref" href="#id2">Compile on compute instance</a><a class="headerlink" href="#compile-on-compute-instance" title="Permalink to this headline">#</a></h2> <p>If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance.
This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.</p> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-6" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-6"> MXNet 1.8.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-8" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-8"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
</pre></div> </div> </div> <input id="sd-tab-item-9" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-9"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-7" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-7"> MXNet 1.5.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-10" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-10"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
</pre></div> </div> </div> <input id="sd-tab-item-11" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-11"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
</pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="deploy-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id3">Deploy on AWS ML accelerator instance</a><a class="headerlink" href="#deploy-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>During deployment it can be beneficial to reduce the number of components installed in the system. For use-cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.</p> <p>Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the corresponding Base DLAMIs.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For successful installation or update to next releases (Neuron 1.20.0 and newer):</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running: <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code></p></li> <li><p>Install or upgrade to the latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-12"
name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-12"> MXNet 1.8.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-14" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-14"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
</pre></div> </div> </div> <input id="sd-tab-item-15" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-15"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mx_neuron neuron-cc
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-13" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-13"> MXNet 1.5.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-16" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-16"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo apt-get install -y python3.8-venv g++

# Create Python venv
python3.8 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
</pre></div> </div> </div> <input id="sd-tab-item-17" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-17"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Install Python venv
sudo yum install -y python3.7-venv gcc-c++

# Create Python venv
python3.7 -m venv aws_neuron_venv_mxnet_inf1

# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate
python -m pip install -U pip

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Install MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
</pre></div> </div> </div> </div> </div> </div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
2023-09-29T20:55:19.700Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/torch-neuron/rcnn-app-note.rst.txt
```
.. _torch-neuron-r-cnn-app-note:

Running R-CNNs on Inf1
======================

This application note demonstrates how to compile and run
`Detectron2 <https://github.com/facebookresearch/detectron2>`__-based R-CNNs on
Inf1. It also provides guidance on how to use profiling to improve the
performance of R-CNN models on Inf1.

.. contents:: Table of contents
   :local:

R-CNN Model Overview
--------------------

Region-based CNN (R-CNN) models are commonly used for object detection and
image segmentation tasks. A typical R-CNN architecture is composed of the
following components:

- **Backbone:** The backbone extracts features from input images. In some
  models, the backbone is a Feature Pyramid Network (FPN), which uses a
  top-down architecture with lateral connections to build an in-network
  feature pyramid from a single-scale input. The backbone is commonly a
  ResNet- or Vision Transformer-based network.
- **Region Proposal Network (RPN):** The RPN predicts region proposals with a
  wide range of scales and aspect ratios. RPNs are constructed using
  convolutional layers and anchor boxes that serve as references for multiple
  scales and aspect ratios.
- **Region of Interest (RoI):** The RoI component resizes the extracted
  features, which vary in size, to a common size so that they can be consumed
  by a fully connected layer. RoI Align is typically used instead of RoI
  Pooling because it provides better alignment.

The `Detectron2 <https://github.com/facebookresearch/detectron2>`__ library
provides many popular PyTorch R-CNN implementations, including R-CNN, Fast
R-CNN, Faster R-CNN, and Mask R-CNN. This application note focuses on the
Detectron2 R-CNN models.

R-CNN Limitations and Considerations on Inferentia (NeuronCore-v1)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

R-CNN models can have a few limitations and considerations on Inferentia
(NeuronCore-v1).
See the :ref:`Model Architecture Fit Guidelines <rcnn_limitations_inf1>` for
more information. These limitations are not applicable to NeuronCore-v2.

Requirements
------------

This application note is intended to be run on an ``inf1.2xlarge``. In
practice, R-CNN models can be run on any Inf1 instance size.

Verify that this Jupyter notebook is running the Python kernel environment
that was set up according to the `PyTorch Installation Guide
<https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuron/setup/pytorch-install.html>`__.
You can select the kernel from the “Kernel -> Change Kernel” option at the top
of this Jupyter notebook page.

Installation
------------

This application note requires the following pip packages:

1. ``torch==1.11.0``
2. ``torch-neuron``
3. ``neuron-cc``
4. ``opencv-python``
5. ``pycocotools``
6. ``torchvision==0.12.0``
7. ``detectron2==0.6``

The following section builds ``torchvision`` from source and installs the
``Detectron2`` package. It also reinstalls the Neuron packages to ensure
version compatibility. The ``torchvision`` ``roi_align_kernel.cpp`` kernel is
modified to use OMP threading for multithreaded inference on CPU. This
significantly improves the performance of RoI Align kernels on Inf1: OMP
threading leads to a 2 - 3x RoI Align latency reduction compared to the
default ``roi_align_kernel.cpp`` kernel configuration.

.. code:: ipython3

    # Install python3.7-dev for pycocotools (a Detectron2 dependency)
    !sudo apt install python3.7-dev -y

    # Install Neuron packages
    !pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
    !pip uninstall -y torchvision
    !pip install --force-reinstall torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf==3.20.1" ninja opencv-python

    # Change cuda to 10.2 for Detectron2
    !sudo rm /usr/local/cuda
    !sudo ln -s /usr/local/cuda-10.2 /usr/local/cuda

    # Install Torchvision 0.12.0 from source
    !git clone -b release/0.12 https://github.com/pytorch/vision.git

    # Update the RoI Align kernel to use OMP multithreading
    with open('vision/torchvision/csrc/ops/cpu/roi_align_kernel.cpp', 'r') as file:
        content = file.read()

    # Enable OMP Multithreading and set the number of threads to 4
    old = "// #pragma omp parallel for num_threads(32)"
    new = "#pragma omp parallel for num_threads(4)"
    content = content.replace(old, new)

    # Re-write the file
    with open('vision/torchvision/csrc/ops/cpu/roi_align_kernel.cpp', 'w') as file:
        file.write(content)

    # Build Torchvision with OMP threading
    !cd vision && CFLAGS="-fopenmp" python setup.py bdist_wheel
    %pip install vision/dist/*.whl

    # Install Detectron2 release v0.6
    !python -m pip install 'git+https://github.com/facebookresearch/[email protected]'

Compiling an R-CNN for Inf1
---------------------------

By default, R-CNN models are not compilable on Inf1 because they cannot be
traced with ``torch.jit.trace``, which is a prerequisite for inference on
Inf1. The following section demonstrates techniques for compiling a Detectron2
R-CNN model for inference on Inf1.

Specifically, this section creates a standard Detectron2 R-CNN model using a
ResNet-101 backbone. It demonstrates how to use profiling to identify the most
compute-intensive parts of the R-CNN that should be compiled for accelerated
inference on Inf1.
It then explains how to manually extract and compile the ResNet backbone (the
dominant compute component) and inject the compiled backbone back into the
full model for improved performance.

Create a Detectron2 R-CNN Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We create a Detectron2 R-CNN model using the
``COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml`` pretrained weights and config
file. We also download a sample image from the COCO dataset and run an example
inference.

.. code:: ipython3

    from detectron2 import model_zoo
    from detectron2.engine import DefaultPredictor
    from detectron2.config import get_cfg

    def get_model():
        # Configure the R-CNN model
        CONFIG_FILE = "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml"
        WEIGHTS_FILE = "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml"
        cfg = get_cfg()
        cfg.merge_from_file(model_zoo.get_config_file(CONFIG_FILE))
        cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(WEIGHTS_FILE)
        cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
        cfg.MODEL.DEVICE = 'cpu'  # Send to CPU for Neuron Tracing

        # Create the R-CNN predictor wrapper
        predictor = DefaultPredictor(cfg)
        return predictor

.. code:: ipython3

    import os
    import urllib.request

    # Define a function to get a sample image
    def get_image():
        filename = 'input.jpg'
        if not os.path.exists(filename):
            url = "http://images.cocodataset.org/val2017/000000439715.jpg"
            urllib.request.urlretrieve(url, filename)
        return filename

.. code:: ipython3

    import time
    import cv2

    # Create an R-CNN model
    predictor = get_model()

    # Get a sample image from the COCO dataset
    image_filename = get_image()
    image = cv2.imread(image_filename)

    # Run inference and print inference latency
    start = time.time()
    outputs = predictor(image)
    print(f'Inference time: {(time.time() - start):0.3f} s')

Profile the Model
~~~~~~~~~~~~~~~~~

We can use the `PyTorch Profiler <https://pytorch.org/docs/stable/profiler.html>`__
to identify which operators contribute the most to the model’s runtime on CPU.
Ideally, we can compile these compute-intensive operators onto Inf1 for
accelerated inference.

.. code:: ipython3

    import torch.autograd.profiler as profiler

    with profiler.profile(record_shapes=True) as prof:
        with profiler.record_function("model_inference"):
            predictor(image)
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30))

We see that convolution operators (``aten::convolution``) contribute the most
to the inference time. By compiling these convolution operators to Inf1, we
can improve the performance of the R-CNN model.

We can print the R-CNN model architecture to see which layers contain the
``aten::convolution`` operators:

.. code:: ipython3

    print(predictor.model)

We observe that the ResNet FPN backbone (`predictor.model.backbone
<https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/backbone/fpn.py>`__
L17-L162) contains the majority of convolution operators in the model. The RPN
(`predictor.model.proposal_generator
<https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/proposal_generator/rpn.py>`__
L181-L533) also contains a few convolutions. Based on this, we should try to
compile the ResNet backbone and RPN onto Inf1 to maximize performance.

Compiling the ResNet backbone to Inf1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this section we demonstrate how to compile the ResNet backbone to Inf1 and
use it for inference. We “extract” the backbone by accessing it using
``predictor.model.backbone``. We compile the backbone using ``strict=False``
because the backbone outputs a dictionary. We use a fixed input shape
(``800 x 800``) for compilation. This will be the input shape that all inputs
are resized to during inference.
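Because the compiled backbone accepts exactly one input shape, it helps to know how Detectron2 normalizes shapes before the backbone runs: image batches are padded so that each spatial dimension is a multiple of the backbone's ``size_divisibility`` (32 for FPN backbones). The rounding can be sketched in plain Python; the helper name below is ours, not a Detectron2 API, and the value 32 is an assumption that holds for FPN backbones:

```python
import math

def pad_to_divisibility(height, width, size_divisibility=32):
    """Round spatial dimensions up to the nearest multiple of
    size_divisibility, mirroring the padding Detectron2 applies
    to an image batch before the backbone runs."""
    padded_h = math.ceil(height / size_divisibility) * size_divisibility
    padded_w = math.ceil(width / size_divisibility) * size_divisibility
    return padded_h, padded_w

# An 800 x 800 input is already a multiple of 32, so no padding is added
print(pad_to_divisibility(800, 800))

# A slightly different input would be padded up, changing the tensor shape
print(pad_to_divisibility(801, 799))
```

Resizing every input to ``800 x 800`` (already a multiple of 32) therefore guarantees that padding never produces a shape the fixed-shape compiled graph has not seen.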
This section also defines a basic preprocessing function (mostly derived from
the Detectron2 R-CNN `DefaultPredictor
<https://github.com/facebookresearch/detectron2/blob/45b3fcea6e76bf7a351e54e01c7d6e1a3a0100a5/detectron2/engine/defaults.py>`__
module L308-L318) that reshapes inputs to ``800 x 800``. We also create a
``NeuronRCNN`` wrapper that we use to inject the compiled backbone back into
the model by dynamically replacing the ``predictor.model.backbone`` attribute
with the compiled model.

.. code:: ipython3

    import torch
    import torch_neuron

    example = torch.rand([1, 3, 800, 800])

    # Use `with torch.no_grad():` to avoid a jit tracing issue in the ResNet backbone
    with torch.no_grad():
        neuron_backbone = torch_neuron.trace(predictor.model.backbone, example, strict=False)

    backbone_filename = 'backbone.pt'
    torch.jit.save(neuron_backbone, backbone_filename)

.. code:: ipython3

    from detectron2.modeling.meta_arch.rcnn import GeneralizedRCNN
    from torch.jit import ScriptModule

    class NeuronRCNN(torch.nn.Module):
        """
        Creates a `NeuronRCNN` wrapper that injects the compiled backbone
        into the R-CNN model. It also stores the `size_divisibility`
        attribute from the original backbone.
        """

        def __init__(self, model: GeneralizedRCNN, neuron_backbone: ScriptModule) -> None:
            super().__init__()

            # Keep track of the backbone variables
            size_divisibility = model.backbone.size_divisibility

            # Load and inject the compiled backbone
            model.backbone = neuron_backbone

            # Set backbone variables
            setattr(model.backbone, 'size_divisibility', size_divisibility)

            self.model = model

        def forward(self, x):
            return self.model(x)

.. code:: ipython3

    # Create the R-CNN with the compiled backbone
    neuron_rcnn = NeuronRCNN(predictor.model, neuron_backbone)
    neuron_rcnn.eval()

    # Print the R-CNN architecture to verify the backbone is now the
    # `neuron_backbone` (shows up as `RecursiveScriptModule`)
    print(neuron_rcnn)

.. code:: ipython3

    def preprocess(original_image, predictor):
        """
        A basic preprocessing function that sets the input height=800 and
        input width=800. The function is derived from the preprocessing
        steps in the Detectron2 `DefaultPredictor` module.
        """
        height, width = original_image.shape[:2]
        resize_func = predictor.aug.get_transform(original_image)
        resize_func.new_h = 800  # Override height
        resize_func.new_w = 800  # Override width
        image = resize_func.apply_image(original_image)
        image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
        inputs = {"image": image, "height": height, "width": width}
        return inputs

.. code:: ipython3

    # Get a resized input using the sample image
    inputs = preprocess(image, get_model())

    # Run inference and print inference latency
    start = time.time()
    for _ in range(10):
        outputs = neuron_rcnn([inputs])[0]
    print(f'Inference time: {((time.time() - start)/10):0.3f} s')

.. code:: ipython3

    with profiler.profile(record_shapes=True) as prof:
        with profiler.record_function("model_inference"):
            neuron_rcnn([inputs])
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30))

By running the backbone on Inf1, the overall runtime is already significantly
improved. The count and runtime of ``aten::convolution`` operators are also
decreased. We now see a ``neuron::forward_v2`` operator, which is the compiled
backbone.

Optimize the R-CNN model
------------------------

Compiling the RPN
~~~~~~~~~~~~~~~~~

Looking at the profiling output, we see that there are still several
``aten::convolution``, ``aten::linear``, and ``aten::addmm`` operators that
significantly contribute to the model’s overall latency. By inspecting the
model architecture and code, we can determine that the majority of these
operators are contained in the RPN module (`predictor.model.proposal_generator
<https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/proposal_generator/rpn.py>`__
L181-L533).
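This kind of triage, ranking operators by their total contribution to latency, is exactly what the profiler tables summarize. The roll-up can be sketched in plain Python; the timing rows below are made up for illustration and are not real profiler output:

```python
from collections import defaultdict

# Hypothetical (op name, cpu_time_ms) rows, standing in for profiler events
events = [
    ("aten::convolution", 12.0),
    ("aten::convolution", 9.5),
    ("aten::linear", 3.0),
    ("aten::addmm", 2.5),
    ("aten::convolution", 7.5),
]

# Aggregate total time and call count per operator
totals = defaultdict(float)
counts = defaultdict(int)
for name, ms in events:
    totals[name] += ms
    counts[name] += 1

# Sort by total time to find the best compilation candidates
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0])  # ('aten::convolution', 29.0)
```

The operator at the top of this ranking is the one worth moving onto the accelerator first, which is the reasoning applied to the RPN Head below.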
To improve the model performance, we will extract the RPN Head and compile it on Inf1 to increase the number of operators that are running on Inf1. We only compile the RPN Head because the RPN Anchor Generator contains objects that are not traceable with ``torch.jit.trace``. The RPN Head contains five layers that run inference on multiple resized inputs. In order to compile the RPN Head, we create a list of example tensors that match the input (“``features``”) shapes that the RPN Head uses at each layer. These tensor shapes can be determined by printing the input shapes in the RPN Head ``forward`` function (``predictor.model.proposal_generator.rpn_head.forward``). We also create a new ``NeuronRCNN`` wrapper that injects both the compiled backbone and RPN Head into the R-CNN model. .. code:: ipython3 import math input_shape = [1, 3, 800, 800] # Overall input shape at inference time # Create the example list of RPN inputs using the resizing logic from the RPN Head features = list() for i in [0, 1, 2, 3, 4]: ratio = 1 / (4 * 2**i) x_i_h = math.ceil(input_shape[2] * ratio) x_i_w = math.ceil(input_shape[3] * ratio) feature = torch.zeros(1, 256, x_i_h, x_i_w) features.append(feature) .. code:: ipython3 # Extract and compile the RPN Head neuron_rpn_head = torch_neuron.trace(predictor.model.proposal_generator.rpn_head, [features]) rpn_head_filename = 'rpn_head.pt' torch.jit.save(neuron_rpn_head, rpn_head_filename) .. code:: ipython3 class NeuronRCNN(torch.nn.Module): """ Creates a wrapper that injects the compiled backbone and RPN Head into the R-CNN model.
""" def __init__(self, model: GeneralizedRCNN, neuron_backbone: ScriptModule, neuron_rpn_head: ScriptModule) -> None: super().__init__() # Keep track of the backbone variables size_divisibility = model.backbone.size_divisibility # Inject the compiled backbone model.backbone = neuron_backbone # Set backbone variables setattr(model.backbone, 'size_divisibility', size_divisibility) # Inject the compiled RPN Head model.proposal_generator.rpn_head = neuron_rpn_head self.model = model def forward(self, x): return self.model(x) .. code:: ipython3 # Create the R-CNN with the compiled backbone and RPN Head predictor = get_model() neuron_rcnn = NeuronRCNN(predictor.model, neuron_backbone, neuron_rpn_head) neuron_rcnn.eval() # Print the R-CNN architecture to verify the compiled modules show up print(neuron_rcnn) .. code:: ipython3 # Run inference and print inference latency start = time.time() for _ in range(10): outputs = neuron_rcnn([inputs])[0] print(f'Inference time: {((time.time() - start)/10):0.3f} s') .. code:: ipython3 with profiler.profile(record_shapes=True) as prof: with profiler.record_function("model_inference"): neuron_rcnn([inputs]) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30)) By running the compiled backbone and RPN Head on Inf1, the overall runtime is improved. Once again, the number and runtime of ``aten::convolution`` operators is also decreased. We now see two ``neuron::forward_v2`` operators which correspond to the compiled backbone and RPN Head. Fusing the Backbone and RPN Head ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It is typically preferable to compile fewer independent models (“subgraphs”) on Inf1. Combining models and compiling them as a single subgraph enables the Neuron compiler to perform additional optimizations and reduces the I/O data transfer between CPU and NeuronCores between each subgraph. In this section, we “fuse” the ResNet backbone and RPN Head into a single model that we compile on Inf1. 
We create the ``NeuronFusedBackboneRPNHead`` wrapper to create a compilable model that contains both the ResNet backbone (`predictor.model.backbone <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/backbone/fpn.py>`__ L17-L162) and RPN Head (`predictor.model.proposal_generator <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/proposal_generator/rpn.py>`__ L181-L533). We also output the ``features`` because it is used downstream by the RoI Heads. We compile this ``NeuronFusedBackboneRPNHead`` wrapper as ``neuron_backbone_rpn``. We then create a separate ``BackboneRPN`` wrapper that we use to inject the ``neuron_backbone_rpn`` in place of the original backbone and RPN Head. We also copy the remainder of the RPN ``forward`` code (`predictor.model.proposal_generator.forward <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/proposal_generator/rpn.py>`__ L431-L480) to create a “fused” backbone + RPN module. Lastly, we re-write the ``NeuronRCNN`` wrapper to use the fused backbone + RPN module. The ``NeuronRCNN`` wrapper also uses the ``predictor.model`` ``forward`` code to re-write the rest of the R-CNN model forward function. .. code:: ipython3 class NeuronFusedBackboneRPNHead(torch.nn.Module): """ Wrapper to compile the fused ResNet backbone and RPN Head. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.backbone = model.backbone self.rpn_head = model.proposal_generator.rpn_head self.in_features = model.proposal_generator.in_features def forward(self, x): features = self.backbone(x) features_ = [features[f] for f in self.in_features] return self.rpn_head(features_), features .. 
code:: ipython3 # Create the wrapper with the combined backbone and RPN Head predictor = get_model() backbone_rpn_wrapper = NeuronFusedBackboneRPNHead(predictor.model) backbone_rpn_wrapper.eval() # Compile the wrapper example = torch.rand([1, 3, 800, 800]) with torch.no_grad(): neuron_backbone_rpn_head = torch_neuron.trace( backbone_rpn_wrapper, example, strict=False) backbone_rpn_filename = 'backbone_rpn.pt' torch.jit.save(neuron_backbone_rpn_head, backbone_rpn_filename) .. code:: ipython3 class BackboneRPN(torch.nn.Module): """ Wrapper that uses the compiled `neuron_backbone_rpn` instead of the original backbone and RPN Head. We copy the remainder of the RPN `forward` code (`predictor.model.proposal_generator.forward`) to create a "fused" backbone + RPN module. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.backbone_rpn_head = NeuronFusedBackboneRPNHead(model) self._rpn = model.proposal_generator self.in_features = model.proposal_generator.in_features def forward(self, images): preds, features = self.backbone_rpn_head(images.tensor) features_ = [features[f] for f in self.in_features] pred_objectness_logits, pred_anchor_deltas = preds anchors = self._rpn.anchor_generator(features_) # Transpose the Hi*Wi*A dimension to the middle: pred_objectness_logits = [ # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A) score.permute(0, 2, 3, 1).flatten(1) for score in pred_objectness_logits ] pred_anchor_deltas = [ # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B) x.view(x.shape[0], -1, self._rpn.anchor_generator.box_dim, x.shape[-2], x.shape[-1]) .permute(0, 3, 4, 1, 2) .flatten(1, -2) for x in pred_anchor_deltas ] proposals = self._rpn.predict_proposals( anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes ) return proposals, features .. 
code:: ipython3 class NeuronRCNN(torch.nn.Module): """ Wrapper that uses the fused backbone + RPN module and re-writes the rest of the R-CNN `model` `forward` function. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() # Use the fused Backbone + RPN self.backbone_rpn = BackboneRPN(model) self.roi_heads = model.roi_heads self.preprocess_image = model.preprocess_image self._postprocess = model._postprocess def forward(self, batched_inputs): images = self.preprocess_image(batched_inputs) proposals, features = self.backbone_rpn(images) results, _ = self.roi_heads(images, features, proposals, None) return self._postprocess(results, batched_inputs, images.image_sizes) .. code:: ipython3 # Create the new NeuronRCNN wrapper with the combined backbone and RPN Head predictor = get_model() neuron_rcnn = NeuronRCNN(predictor.model) neuron_rcnn.eval() # Inject the Neuron compiled models neuron_rcnn.backbone_rpn.backbone_rpn_head = neuron_backbone_rpn_head # Print the R-CNN architecture to verify the compiled modules show up print(neuron_rcnn) .. code:: ipython3 # Run inference and print inference latency start = time.time() for _ in range(10): outputs = neuron_rcnn([inputs])[0] print(f'Inference time: {((time.time() - start)/10):0.3f} s') .. code:: ipython3 with profiler.profile(record_shapes=True) as prof: with profiler.record_function("model_inference"): neuron_rcnn([inputs]) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30)) By running the fused backbone + RPN Head on Inf1, the overall runtime is improved again. We now see a single ``neuron::forward_v2`` operator with a lower runtime than the previous combined runtime of the two separate ``neuron::forward_v2`` operators. Compiling the RoI Heads ~~~~~~~~~~~~~~~~~~~~~~~ In this section, we extract and compile part of the RoI Heads module (`predictor.model.roi_heads <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/roi_heads/roi_heads.py>`__ L530-L778).
This will run most of the remaining ``aten::linear`` and ``aten::addmm`` operators on Inf1. We cannot extract the entire RoI Heads module because it contains unsupported operators. Thus, we create a ``NeuronBoxHeadBoxPredictor`` wrapper that extracts specific parts of the ``roi_heads`` for compilation. The example input for compilation matches the shape of the input to the ``self.roi_heads.box_head.forward`` function. We write another wrapper, ``ROIHead``, that combines the compiled ``roi_heads`` into the rest of the RoI module. The ``_forward_box`` and ``forward`` functions are from the ``predictor.model.roi_heads`` module. We re-write the ``NeuronRCNN`` wrapper to use the optimized RoI Heads wrapper as well as the fused backbone + RPN module. .. code:: ipython3 class NeuronBoxHeadBoxPredictor(torch.nn.Module): """ Wrapper that extracts the RoI Box Head and Box Predictor for compilation. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.roi_heads = model.roi_heads def forward(self, box_features): box_features = self.roi_heads.box_head(box_features) predictions = self.roi_heads.box_predictor(box_features) return predictions .. code:: ipython3 # Create the NeuronBoxHeadBoxPredictor wrapper predictor = get_model() box_head_predictor = NeuronBoxHeadBoxPredictor(predictor.model) box_head_predictor.eval() # Compile the wrapper example = torch.rand([1000, 256, 7, 7]) neuron_box_head_predictor = torch_neuron.trace(box_head_predictor, example) roi_head_filename = 'box_head_predictor.pt' torch.jit.save(neuron_box_head_predictor, roi_head_filename) .. code:: ipython3 class ROIHead(torch.nn.Module): """ Wrapper that combines the compiled `roi_heads` into the rest of the RoI module. The `_forward_box` and `forward` functions are from the `predictor.model.roi_heads` module.
""" def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.roi_heads = model.roi_heads self.neuron_box_head_predictor = NeuronBoxHeadBoxPredictor(model) def _forward_box(self, features, proposals): features = [features[f] for f in self.roi_heads.box_in_features] box_features = self.roi_heads.box_pooler( features, [x.proposal_boxes for x in proposals]) predictions = self.neuron_box_head_predictor(box_features) pred_instances, _ = self.roi_heads.box_predictor.inference( predictions, proposals) return pred_instances def forward(self, images, features, proposals, targets=None): pred_instances = self._forward_box(features, proposals) pred_instances = self.roi_heads.forward_with_given_boxes( features, pred_instances) return pred_instances, {} .. code:: ipython3 class NeuronRCNN(torch.nn.Module): """ Wrapper that uses the fused backbone + RPN module and the optimized RoI Heads wrapper """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() # Create fused Backbone + RPN self.backbone_rpn = BackboneRPN(model) # Create Neuron RoI Head self.roi_heads = ROIHead(model) # Define pre and post-processing functions self.preprocess_image = model.preprocess_image self._postprocess = model._postprocess def forward(self, batched_inputs): images = self.preprocess_image(batched_inputs) proposals, features = self.backbone_rpn(images) results, _ = self.roi_heads(images, features, proposals, None) return self._postprocess(results, batched_inputs, images.image_sizes) .. code:: ipython3 # Initialize an R-CNN on CPU predictor = get_model() # Create the Neuron R-CNN on CPU neuron_rcnn = NeuronRCNN(predictor.model) neuron_rcnn.eval() # Inject the Neuron compiled models neuron_rcnn.backbone_rpn.backbone_rpn_head = neuron_backbone_rpn_head neuron_rcnn.roi_heads.neuron_box_head_predictor = neuron_box_head_predictor .. 
code:: ipython3 # Run inference and print inference latency start = time.time() for _ in range(10): outputs = neuron_rcnn([inputs])[0] print(f'CPU Inference time: {((time.time() - start)/10):0.3f} s') .. code:: ipython3 with profiler.profile(record_shapes=True) as prof: with profiler.record_function("model_inference"): neuron_rcnn([inputs]) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30)) Although the overall latency didn’t change significantly, running more of the model on Inf1 instead of CPU will free up CPU resources when multiple models are running in parallel. End-to-end Compilation and Inference ------------------------------------ In this section, we provide standalone code that compiles and runs an optimized Detectron2 R-CNN on Inf1. Most of the code in this section is from the previous sections in this application note and is consolidated here for easy deployment. This section has the following main components: 1. Preprocessing and compilation functions 2. Wrappers that extract the R-CNN ResNet backbone, RPN Head, and RoI Head for compilation on Inf1 3. A ``NeuronRCNN`` wrapper that creates an optimized end-to-end Detectron2 R-CNN model for inference on Inf1 4. Benchmarking code that runs parallelized inference for optimized throughput on Inf1 Benchmarking ~~~~~~~~~~~~ In the benchmarking section, we load multiple optimized R-CNN models and run them in parallel to maximize throughput. We use the experimental NeuronCore placement API, ``torch_neuron.experimental.neuron_cores_context()``, to ensure all compiled models in an optimized R-CNN model are loaded onto the same NeuronCore. Please note that the functionality and API of ``torch_neuron.experimental.neuron_cores_context()`` might change in future releases. We define a simple benchmark function that loads four optimized R-CNN models onto four separate NeuronCores, runs multithreaded inference, and calculates the corresponding latency and throughput.
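The shape of that benchmark function can be sketched with the standard library alone. In the sketch below, dummy callables stand in for the Neuron-compiled R-CNN models and the NeuronCore placement context is omitted; the full version appears in the standalone code later in this section:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def benchmark_sketch(models, n_threads=4, iterations=100):
    """Round-robin requests across the loaded models, then report metrics."""
    latencies = []

    def task(i):
        start = time.time()
        models[i](None)  # stand-in for `models[i]([inputs])`
        latencies.append((time.time() - start) * 1000)

    begin = time.time()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for i in range(iterations):
            pool.submit(task, i % len(models))  # round-robin over models
    duration = time.time() - begin

    p50 = statistics.quantiles(latencies, n=100)[49]  # 50th percentile
    return {'throughput_inf_s': iterations / duration,
            'latency_p50_ms': p50}

# Dummy "models" that sleep for 5 ms instead of running a compiled R-CNN
models = [lambda _: time.sleep(0.005) for _ in range(4)]
print(benchmark_sketch(models))
```

With more worker threads than models, requests queue behind busy models, which is why throughput rises while per-request latency grows.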
We benchmark various numbers of loaded models to show the impact of parallelism. We observe that throughput increases (at the cost of latency) when more models are run in parallel on Inf1. Increasing the number of worker threads also improves throughput. Other improvements ~~~~~~~~~~~~~~~~~~ There are many additional optimizations that can be applied to RCNN models on Inf1 depending on the application: For latency sensitive applications: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Each of the five layers in the RPN head can be parallelized to decrease the overall latency. - The number of OMP Threads can be increased in the ROI Align kernel. Both of these optimizations will improve latency at the cost of decreasing throughput. For throughput sensitive applications: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - The input batch size can be increased to improve the NeuronCore utilization. .. code:: ipython3 import time import os import urllib.request from typing import Any, Union, Callable import cv2 import numpy as np from concurrent.futures import ThreadPoolExecutor import torch import torch_neuron from detectron2 import model_zoo from detectron2.engine import DefaultPredictor from detectron2.config import get_cfg from detectron2.modeling.meta_arch.rcnn import GeneralizedRCNN # ----------------------------------------------------------------------------- # Helper functions # ----------------------------------------------------------------------------- def get_model(): # Configure the R-CNN model CONFIG_FILE = "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml" WEIGHTS_FILE = "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml" cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file(CONFIG_FILE)) cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(WEIGHTS_FILE) cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 cfg.MODEL.DEVICE = 'cpu' # Send to CPU for Neuron Tracing # Create the R-CNN predictor wrapper predictor = DefaultPredictor(cfg) return predictor def get_image(): # Get a sample image 
filename = 'input.jpg' if not os.path.exists(filename): url = "http://images.cocodataset.org/val2017/000000439715.jpg" urllib.request.urlretrieve(url, filename) return filename def preprocess(original_image, predictor): """ A basic preprocessing function that sets the input height=800 and input width=800. The function is derived from the preprocessing steps in the Detectron2 `DefaultPredictor` module. """ height, width = original_image.shape[:2] resize_func = predictor.aug.get_transform(original_image) resize_func.new_h = 800 # Override height resize_func.new_w = 800 # Override width image = resize_func.apply_image(original_image) image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)) inputs = {"image": image, "height": height, "width": width} return inputs # ----------------------------------------------------------------------------- # Neuron modules # ----------------------------------------------------------------------------- class NeuronFusedBackboneRPNHead(torch.nn.Module): """ Wrapper to compile the fused ResNet backbone and RPN Head. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.backbone = model.backbone self.rpn_head = model.proposal_generator.rpn_head self.in_features = model.proposal_generator.in_features def forward(self, x): features = self.backbone(x) features_ = [features[f] for f in self.in_features] return self.rpn_head(features_), features class BackboneRPN(torch.nn.Module): """ Wrapper that uses the compiled `neuron_backbone_rpn` instead of the original backbone and RPN Head. We copy the remainder of the RPN `forward` code (`predictor.model.proposal_generator.forward`) to create a "fused" backbone + RPN module. 
""" def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.backbone_rpn_head = NeuronFusedBackboneRPNHead(model) self._rpn = model.proposal_generator self.in_features = model.proposal_generator.in_features def forward(self, images): preds, features = self.backbone_rpn_head(images.tensor) features_ = [features[f] for f in self.in_features] pred_objectness_logits, pred_anchor_deltas = preds anchors = self._rpn.anchor_generator(features_) # Transpose the Hi*Wi*A dimension to the middle: pred_objectness_logits = [ # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A) score.permute(0, 2, 3, 1).flatten(1) for score in pred_objectness_logits ] pred_anchor_deltas = [ # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B) x.view(x.shape[0], -1, self._rpn.anchor_generator.box_dim, x.shape[-2], x.shape[-1]) .permute(0, 3, 4, 1, 2) .flatten(1, -2) for x in pred_anchor_deltas ] proposals = self._rpn.predict_proposals( anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes ) return proposals, features class NeuronBoxHeadBoxPredictor(torch.nn.Module): """ Wrapper that extracts the RoI Box Head and Box Predictor for compilation. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.roi_heads = model.roi_heads def forward(self, box_features): box_features = self.roi_heads.box_head(box_features) predictions = self.roi_heads.box_predictor(box_features) return predictions class ROIHead(torch.nn.Module): """ Wrapper that combines the compiled `roi_heads` into the rest of the RoI module. The `_forward_box` and `forward` functions are from the `predictor.model.roi_heads` module. 
""" def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.roi_heads = model.roi_heads self.neuron_box_head_predictor = NeuronBoxHeadBoxPredictor(model) def _forward_box(self, features, proposals): features = [features[f] for f in self.roi_heads.box_in_features] box_features = self.roi_heads.box_pooler( features, [x.proposal_boxes for x in proposals]) predictions = self.neuron_box_head_predictor(box_features) pred_instances, _ = self.roi_heads.box_predictor.inference( predictions, proposals) return pred_instances def forward(self, images, features, proposals, targets=None): pred_instances = self._forward_box(features, proposals) pred_instances = self.roi_heads.forward_with_given_boxes( features, pred_instances) return pred_instances, {} class NeuronRCNN(torch.nn.Module): """ Wrapper that uses the fused backbone + RPN module and the optimized RoI Heads wrapper """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() # Create fused Backbone + RPN self.backbone_rpn = BackboneRPN(model) # Create Neuron RoI Head self.roi_heads = ROIHead(model) # Define pre and post-processing functions self.preprocess_image = model.preprocess_image self._postprocess = model._postprocess def forward(self, batched_inputs): images = self.preprocess_image(batched_inputs) proposals, features = self.backbone_rpn(images) results, _ = self.roi_heads(images, features, proposals, None) return self._postprocess(results, batched_inputs, images.image_sizes) # ----------------------------------------------------------------------------- # Compilation functions # ----------------------------------------------------------------------------- def compile( model: Union[Callable, torch.nn.Module], example_inputs: Any, filename: str, **kwargs ) -> torch.nn.Module: """ Compiles the model for Inf1 if it doesn't already exist and saves it as the provided filename. model: A module or function which defines a torch model or computation. 
example_inputs: An example set of inputs which will be passed to the `model` during compilation. filename: Name of the compiled model kwargs: Extra `torch_neuron.trace` kwargs """ if not os.path.exists(filename): with torch.no_grad(): compiled_model = torch_neuron.trace(model, example_inputs, **kwargs) torch.jit.save(compiled_model, filename) # ----------------------------------------------------------------------------- # Benchmarking function # ----------------------------------------------------------------------------- def benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=4, batch_size=1, n_threads=4, iterations=200): """ A simple benchmarking function that loads `n_models` optimized models onto separate NeuronCores, runs multithreaded inference, and calculates the corresponding latency and throughput. """ # Load models models = list() for i in range(n_models): with torch_neuron.experimental.neuron_cores_context(i): # Create the RCNN with the fused backbone + RPN Head and compiled RoI Heads # Initialize an R-CNN on CPU predictor = get_model() # Create the Neuron R-CNN on CPU neuron_rcnn = NeuronRCNN(predictor.model) neuron_rcnn.eval() # Inject the Neuron compiled models neuron_rcnn.backbone_rpn.backbone_rpn_head = torch.jit.load(backbone_rpn_filename) neuron_rcnn.roi_heads.neuron_box_head_predictor = torch.jit.load(roi_head_filename) models.append(neuron_rcnn) # Warmup for _ in range(8): for model in models: model([inputs]) latencies = [] # Thread task def task(i): start = time.time() models[i]([inputs]) finish = time.time() latencies.append((finish - start) * 1000) begin = time.time() with ThreadPoolExecutor(max_workers=n_threads) as pool: for i in range(iterations): pool.submit(task, i % n_models) end = time.time() # Compute metrics boundaries = [50, 95, 99] names = [f'Latency P{i} (ms)' for i in boundaries] percentiles = np.percentile(latencies, boundaries) duration = end - begin # Display metrics results = { 'Samples': iterations, 'Batch 
Size': batch_size, 'Models': n_models, 'Threads': n_threads, 'Duration (s)': end - begin, 'Throughput (inf/s)': (batch_size * iterations) / duration, **dict(zip(names, percentiles)), } print('-' * 80) pad = max(map(len, results)) for key, value in results.items(): if isinstance(value, float): print(f'{key + ":" :<{pad + 1}} {value:0.3f}') else: print(f'{key + ":" :<{pad + 1}} {value}') print() if __name__ == "__main__": # Create and compile the combined backbone and RPN Head wrapper backbone_rpn_filename = 'backbone_rpn.pt' predictor = get_model() backbone_rpn_wrapper = NeuronFusedBackboneRPNHead(predictor.model) backbone_rpn_wrapper.eval() example = torch.rand([1, 3, 800, 800]) compile(backbone_rpn_wrapper, example, backbone_rpn_filename, strict=False) # Create and compile the RoI Head wrapper roi_head_filename = 'box_head_predictor.pt' predictor = get_model() box_head_predictor = NeuronBoxHeadBoxPredictor(predictor.model) box_head_predictor.eval() example = torch.rand([1000, 256, 7, 7]) compile(box_head_predictor, example, roi_head_filename) # Download a sample image from the COCO dataset and read it image_filename = get_image() image = cv2.imread(image_filename) inputs = preprocess(image, get_model()) # Benchmark the Neuron R-CNN model for various numbers of loaded models benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=1, n_threads=1) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=1, n_threads=2) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=2, n_threads=2) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=2, n_threads=4) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=4, n_threads=4) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=4, n_threads=8) ```
.. _torch-neuron-r-cnn-app-note: Running R-CNNs on Inf1 ====================== This application note demonstrates how to compile and run `Detectron2 <https://github.com/facebookresearch/detectron2>`__-based R-CNNs on Inf1. It also provides guidance on how to use profiling to improve the performance of R-CNN models on Inf1. .. contents:: Table of contents :local: R-CNN Model Overview -------------------- Region-based CNN (R-CNN) models are commonly used for object detection and image segmentation tasks. A typical R-CNN architecture is composed of the following components: - **Backbone:** The backbone extracts features from input images. In some models, the backbone is a Feature Pyramid Network (FPN), which uses a top-down architecture with lateral connections to build an in-network feature pyramid from a single-scale input. The backbone is commonly a ResNet or Vision Transformer based network. - **Region Proposal Network (RPN):** The RPN predicts region proposals with a wide range of scales and aspect ratios. RPNs are constructed using convolutional layers and anchor boxes that serve as references for multiple scales and aspect ratios. - **Region of Interest (RoI):** The RoI component resizes the extracted features, which vary in size, to a fixed size so that they can be consumed by a fully connected layer. RoI Align is typically used instead of RoI Pooling because RoI Align provides better alignment. The `Detectron2 <https://github.com/facebookresearch/detectron2>`__ library provides many popular PyTorch R-CNN implementations, including R-CNN, Fast R-CNN, Faster R-CNN, and Mask R-CNN. This application note will focus on the Detectron2 R-CNN models.
R-CNN Limitations and Considerations on Inferentia (NeuronCore-v1) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ R-CNN models can have a few limitations and considerations on Inferentia (NeuronCore-v1). See the :ref:`Model Architecture Fit Guidelines <rcnn_limitations_inf1>` for more information. These limitations are not applicable to NeuronCore-v2. Requirements ------------ This application note is intended to be run on an ``inf1.2xlarge``. In practice, R-CNN models can be run on any Inf1 instance size. Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the `PyTorch Installation Guide <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuron/setup/pytorch-install.html>`__. You can select the kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page. Installation ------------ This application note requires the following pip packages: 1. ``torch==1.11.0`` 2. ``torch-neuron`` 3. ``neuron-cc`` 4. ``opencv-python`` 5. ``pycocotools`` 6. ``torchvision==0.12.0`` 7. ``detectron2==0.6`` The following section builds ``torchvision`` from source and installs the ``Detectron2`` package. It also reinstalls the Neuron packages to ensure version compatibility. The ``torchvision`` ``roi_align_kernel.cpp`` kernel is modified to use OMP threading for multithreaded inference on CPU. This significantly improves the performance of RoI Align kernels on Inf1: OMP threading leads to a 2 - 3x RoI Align latency reduction compared to the default ``roi_align_kernel.cpp`` kernel configuration. .. 
code:: ipython3 # Install python3.7-dev for pycocotools (a Detectron2 dependency) !sudo apt install python3.7-dev -y # Install Neuron packages !pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com !pip uninstall -y torchvision !pip install --force-reinstall torch-neuron==1.11.0.* neuron-cc[tensorflow] "protobuf==3.20.1" ninja opencv-python # Change cuda to 10.2 for Detectron2 !sudo rm /usr/local/cuda !sudo ln -s /usr/local/cuda-10.2 /usr/local/cuda # Install Torchvision 0.12.0 from source !git clone -b release/0.12 https://github.com/pytorch/vision.git # Update the RoI Align kernel to use OMP multithreading with open('vision/torchvision/csrc/ops/cpu/roi_align_kernel.cpp', 'r') as file: content = file.read() # Enable OMP Multithreading and set the number of threads to 4 old = "// #pragma omp parallel for num_threads(32)" new = "#pragma omp parallel for num_threads(4)" content = content.replace(old, new) # Re-write the file with open('vision/torchvision/csrc/ops/cpu/roi_align_kernel.cpp', 'w') as file: file.write(content) # Build Torchvision with OMP threading !cd vision && CFLAGS="-fopenmp" python setup.py bdist_wheel %pip install vision/dist/*.whl # Install Detectron2 release v0.6 !python -m pip install 'git+https://github.com/facebookresearch/[email protected]' Compiling an R-CNN for Inf1 --------------------------- By default, R-CNN models are not compilable on Inf1 because they cannot be traced with ``torch.jit.trace``, which is a prerequisite for inference on Inf1. The following section demonstrates techniques for compiling a Detectron2 R-CNN model for inference on Inf1. Specifically, this section creates a standard Detectron2 R-CNN model using a ResNet-101 backbone. It demonstrates how to use profiling to identify the most compute-intensive parts of the R-CNN that should be compiled for accelerated inference on Inf1.
It then explains how to manually extract and compile the ResNet backbone (the dominant compute component) and inject the compiled backbone back into the full model for improved performance. Create a Detectron2 R-CNN Model ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We create a Detectron2 R-CNN model using the ``COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml`` pretrained weights and config file. We also download a sample image from the COCO dataset and run an example inference. .. code:: ipython3 from detectron2 import model_zoo from detectron2.engine import DefaultPredictor from detectron2.config import get_cfg def get_model(): # Configure the R-CNN model CONFIG_FILE = "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml" WEIGHTS_FILE = "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml" cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file(CONFIG_FILE)) cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(WEIGHTS_FILE) cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 cfg.MODEL.DEVICE = 'cpu' # Send to CPU for Neuron Tracing # Create the R-CNN predictor wrapper predictor = DefaultPredictor(cfg) return predictor .. code:: ipython3 import os import urllib.request # Define a function to get a sample image def get_image(): filename = 'input.jpg' if not os.path.exists(filename): url = "http://images.cocodataset.org/val2017/000000439715.jpg" urllib.request.urlretrieve(url, filename) return filename .. code:: ipython3 import time import cv2 # Create an R-CNN model predictor = get_model() # Get a sample image from the COCO dataset image_filename = get_image() image = cv2.imread(image_filename) # Run inference and print inference latency start = time.time() outputs = predictor(image) print(f'Inference time: {(time.time() - start):0.3f} s') Profile the Model ~~~~~~~~~~~~~~~~~ We can use the `PyTorch Profiler <https://pytorch.org/docs/stable/profiler.html>`__ to identify which operators contribute the most to the model’s runtime on CPU.
Ideally, we can compile these compute-intensive operators onto Inf1 for accelerated inference. .. code:: ipython3 import torch.autograd.profiler as profiler with profiler.profile(record_shapes=True) as prof: with profiler.record_function("model_inference"): predictor(image) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30)) We see that convolution operators (``aten::convolution``) contribute the most to the inference time. By compiling these convolution operators to Inf1, we can improve performance of the R-CNN model. We can print the R-CNN model architecture to see which layers contain the ``aten::convolution`` operators: .. code:: ipython3 print(predictor.model) We observe that the ResNet FPN backbone (`predictor.model.backbone <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/backbone/fpn.py>`__ L17-L162) contains the majority of convolution operators in the model. The RPN (`predictor.model.proposal_generator <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/proposal_generator/rpn.py>`__ L181-L533) also contains a few convolutions. Based on this, we should try to compile the ResNet backbone and RPN onto Inf1 to maximize performance. Compiling the ResNet backbone to Inf1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In this section we demonstrate how to compile the ResNet backbone to Inf1 and use it for inference. We “extract” the backbone by accessing it using ``predictor.model.backbone``. We compile the backbone using ``strict=False`` because the backbone outputs a dictionary. We use a fixed input shape (``800 x 800``) for compilation. This will be the input shape that all inputs will be resized to during inference.
This section also defines a basic preprocessing function (mostly derived from the Detectron2 R-CNN `DefaultPredictor <https://github.com/facebookresearch/detectron2/blob/45b3fcea6e76bf7a351e54e01c7d6e1a3a0100a5/detectron2/engine/defaults.py>`__ module L308-L318) that reshapes inputs to ``800 x 800``. We also create a ``NeuronRCNN`` wrapper that we use to inject the compiled backbone back into the model by dynamically replacing the ``predictor.model.backbone`` attribute with the compiled model. .. code:: ipython3 import torch import torch_neuron example = torch.rand([1, 3, 800, 800]) # Use `with torch.no_grad():` to avoid a jit tracing issue in the ResNet backbone with torch.no_grad(): neuron_backbone = torch_neuron.trace(predictor.model.backbone, example, strict=False) backbone_filename = 'backbone.pt' torch.jit.save(neuron_backbone, backbone_filename) .. code:: ipython3 from detectron2.modeling.meta_arch.rcnn import GeneralizedRCNN from torch.jit import ScriptModule class NeuronRCNN(torch.nn.Module): """ Creates a `NeuronRCNN` wrapper that injects the compiled backbone into the R-CNN model. It also stores the `size_divisibility` attribute from the original backbone. """ def __init__(self, model: GeneralizedRCNN, neuron_backbone: ScriptModule) -> None: super().__init__() # Keep track of the backbone variables size_divisibility = model.backbone.size_divisibility # Load and inject the compiled backbone model.backbone = neuron_backbone # Set backbone variables setattr(model.backbone, 'size_divisibility', size_divisibility) self.model = model def forward(self, x): return self.model(x) .. code:: ipython3 # Create the R-CNN with the compiled backbone neuron_rcnn = NeuronRCNN(predictor.model, neuron_backbone) neuron_rcnn.eval() # Print the R-CNN architecture to verify the backbone is now the # `neuron_backbone` (shows up as `RecursiveScriptModule`) print(neuron_rcnn) ..
code:: ipython3 def preprocess(original_image, predictor): """ A basic preprocessing function that sets the input height=800 and input width=800. The function is derived from the preprocessing steps in the Detectron2 `DefaultPredictor` module. """ height, width = original_image.shape[:2] resize_func = predictor.aug.get_transform(original_image) resize_func.new_h = 800 # Override height resize_func.new_w = 800 # Override width image = resize_func.apply_image(original_image) image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)) inputs = {"image": image, "height": height, "width": width} return inputs .. code:: ipython3 # Get a resized input using the sample image inputs = preprocess(image, get_model()) # Run inference and print inference latency start = time.time() for _ in range(10): outputs = neuron_rcnn([inputs])[0] print(f'Inference time: {((time.time() - start)/10):0.3f} s') .. code:: ipython3 with profiler.profile(record_shapes=True) as prof: with profiler.record_function("model_inference"): neuron_rcnn([inputs]) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30)) By running the backbone on Inf1, the overall runtime is already significantly improved. The count and runtime of ``aten::convolution`` operators are also decreased. We now see a ``neuron::forward_v2`` operator, which is the compiled backbone. Optimize the R-CNN model ------------------------ Compiling the RPN ~~~~~~~~~~~~~~~~~ Looking at the profiling, we see that there are still several ``aten::convolution``, ``aten::linear``, and ``aten::addmm`` operators that significantly contribute to the model’s overall latency. By inspecting the model architecture and code, we can determine that the majority of these operators are contained in the RPN module (`predictor.model.proposal_generator <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/proposal_generator/rpn.py>`__ L181-L533).
To improve the model performance, we will extract the RPN Head and compile it on Inf1 to increase the number of operators that are running on Inf1. We only compile the RPN Head because the RPN Anchor Generator contains objects that are not traceable with ``torch.jit.trace``. The RPN Head contains five layers that run inference on multiple resized inputs. In order to compile the RPN Head, we create a list of tensors that contain the input (“``features``”) shapes that the RPN Head uses at each layer. These tensor shapes can be determined by printing the input shapes in the RPN Head ``forward`` function (``predictor.model.proposal_generator.rpn_head.forward``). We also create a new ``NeuronRCNN`` wrapper that injects both the compiled backbone and RPN Head into the R-CNN model. .. code:: ipython3 import math input_shape = [1, 3, 800, 800] # Overall input shape at inference time # Create the example list of RPN inputs using the resizing logic from the RPN Head features = list() for i in [0, 1, 2, 3, 4]: ratio = 1 / (4 * 2**i) x_i_h = math.ceil(input_shape[2] * ratio) x_i_w = math.ceil(input_shape[3] * ratio) feature = torch.zeros(1, 256, x_i_h, x_i_w) features.append(feature) .. code:: ipython3 # Extract and compile the RPN Head neuron_rpn_head = torch_neuron.trace(predictor.model.proposal_generator.rpn_head, [features]) rpn_head_filename = 'rpn_head.pt' torch.jit.save(neuron_rpn_head, rpn_head_filename) .. code:: ipython3 class NeuronRCNN(torch.nn.Module): """ Creates a wrapper that injects the compiled backbone and RPN Head into the R-CNN model.
""" def __init__(self, model: GeneralizedRCNN, neuron_backbone: ScriptModule, neuron_rpn_head: ScriptModule) -&gt; None: super().__init__() # Keep track of the backbone variables size_divisibility = model.backbone.size_divisibility # Inject the compiled backbone model.backbone = neuron_backbone # Set backbone variables setattr(model.backbone, 'size_divisibility', size_divisibility) # Inject the compiled RPN Head model.proposal_generator.rpn_head = neuron_rpn_head self.model = model def forward(self, x): return self.model(x) .. code:: ipython3 # Create the R-CNN with the compiled backbone and RPN Head predictor = get_model() neuron_rcnn = NeuronRCNN(predictor.model, neuron_backbone, neuron_rpn_head) neuron_rcnn.eval() # Print the R-CNN architecture to verify the compiled modules show up print(neuron_rcnn) .. code:: ipython3 # Run inference and print inference latency start = time.time() for _ in range(10): outputs = neuron_rcnn([inputs])[0] print(f'Inference time: {((time.time() - start)/10):0.3f} s') .. code:: ipython3 with profiler.profile(record_shapes=True) as prof: with profiler.record_function("model_inference"): neuron_rcnn([inputs]) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30)) By running the compiled backbone and RPN Head on Inf1, the overall runtime is improved. Once again, the number and runtime of ``aten::convolution`` operators is also decreased. We now see two ``neuron::forward_v2`` operators which correspond to the compiled backbone and RPN Head. Fusing the Backbone and RPN Head ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ It is typically preferable to compile fewer independent models (“subgraphs”) on Inf1. Combining models and compiling them as a single subgraph enables the Neuron compiler to perform additional optimizations and reduces the I/O data transfer between CPU and NeuronCores between each subgraph. In this section, we “fuse” the ResNet backbone and RPN Head into a single model that we compile on Inf1. 
We create the ``NeuronFusedBackboneRPNHead`` wrapper to create a compilable model that contains both the ResNet backbone (`predictor.model.backbone <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/backbone/fpn.py>`__ L17-L162) and RPN Head (`predictor.model.proposal_generator <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/proposal_generator/rpn.py>`__ L181-L533). We also output the ``features`` because they are used downstream by the RoI Heads. We compile this ``NeuronFusedBackboneRPNHead`` wrapper as ``neuron_backbone_rpn``. We then create a separate ``BackboneRPN`` wrapper that we use to inject the ``neuron_backbone_rpn`` in place of the original backbone and RPN Head. We also copy the remainder of the RPN ``forward`` code (`predictor.model.proposal_generator.forward <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/proposal_generator/rpn.py>`__ L431-L480) to create a “fused” backbone + RPN module. Lastly, we re-write the ``NeuronRCNN`` wrapper to use the fused backbone + RPN module. The ``NeuronRCNN`` wrapper also uses the ``predictor.model`` ``forward`` code to re-write the rest of the R-CNN model forward function. .. code:: ipython3 class NeuronFusedBackboneRPNHead(torch.nn.Module): """ Wrapper to compile the fused ResNet backbone and RPN Head. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.backbone = model.backbone self.rpn_head = model.proposal_generator.rpn_head self.in_features = model.proposal_generator.in_features def forward(self, x): features = self.backbone(x) features_ = [features[f] for f in self.in_features] return self.rpn_head(features_), features ..
code:: ipython3 # Create the wrapper with the combined backbone and RPN Head predictor = get_model() backbone_rpn_wrapper = NeuronFusedBackboneRPNHead(predictor.model) backbone_rpn_wrapper.eval() # Compile the wrapper example = torch.rand([1, 3, 800, 800]) with torch.no_grad(): neuron_backbone_rpn_head = torch_neuron.trace( backbone_rpn_wrapper, example, strict=False) backbone_rpn_filename = 'backbone_rpn.pt' torch.jit.save(neuron_backbone_rpn_head, backbone_rpn_filename) .. code:: ipython3 class BackboneRPN(torch.nn.Module): """ Wrapper that uses the compiled `neuron_backbone_rpn` instead of the original backbone and RPN Head. We copy the remainder of the RPN `forward` code (`predictor.model.proposal_generator.forward`) to create a "fused" backbone + RPN module. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.backbone_rpn_head = NeuronFusedBackboneRPNHead(model) self._rpn = model.proposal_generator self.in_features = model.proposal_generator.in_features def forward(self, images): preds, features = self.backbone_rpn_head(images.tensor) features_ = [features[f] for f in self.in_features] pred_objectness_logits, pred_anchor_deltas = preds anchors = self._rpn.anchor_generator(features_) # Transpose the Hi*Wi*A dimension to the middle: pred_objectness_logits = [ # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A) score.permute(0, 2, 3, 1).flatten(1) for score in pred_objectness_logits ] pred_anchor_deltas = [ # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B) x.view(x.shape[0], -1, self._rpn.anchor_generator.box_dim, x.shape[-2], x.shape[-1]) .permute(0, 3, 4, 1, 2) .flatten(1, -2) for x in pred_anchor_deltas ] proposals = self._rpn.predict_proposals( anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes ) return proposals, features ..
code:: ipython3 class NeuronRCNN(torch.nn.Module): """ Wrapper that uses the fused backbone + RPN module and re-writes the rest of the R-CNN `model` `forward` function. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() # Use the fused Backbone + RPN self.backbone_rpn = BackboneRPN(model) self.roi_heads = model.roi_heads self.preprocess_image = model.preprocess_image self._postprocess = model._postprocess def forward(self, batched_inputs): images = self.preprocess_image(batched_inputs) proposals, features = self.backbone_rpn(images) results, _ = self.roi_heads(images, features, proposals, None) return self._postprocess(results, batched_inputs, images.image_sizes) .. code:: ipython3 # Create the new NeuronRCNN wrapper with the combined backbone and RPN Head predictor = get_model() neuron_rcnn = NeuronRCNN(predictor.model) neuron_rcnn.eval() # Inject the Neuron compiled models neuron_rcnn.backbone_rpn.backbone_rpn_head = neuron_backbone_rpn_head # Print the R-CNN architecture to verify the compiled modules show up print(neuron_rcnn) .. code:: ipython3 # Run inference and print inference latency start = time.time() for _ in range(10): outputs = neuron_rcnn([inputs])[0] print(f'Inference time: {((time.time() - start)/10):0.3f} s') .. code:: ipython3 with profiler.profile(record_shapes=True) as prof: with profiler.record_function("model_inference"): neuron_rcnn([inputs]) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30)) By running the fused backbone + RPN Head on Inf1, the overall runtime is improved again. We now see a single ``neuron::forward_v2`` operator with a lower runtime than the previous combined runtime of the two separate ``neuron::forward_v2`` operators.
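Stripped of the framework specifics, the fusion above amounts to composing two callables behind one wrapper so that a single compiled artifact (one subgraph) replaces two separate ones. The following minimal, framework-free sketch illustrates that pattern; ``backbone`` and ``rpn_head`` are toy stand-ins, not Detectron2 or Neuron APIs.

```python
class Fused:
    """Compose two callables so they can be compiled (or traced) as one unit."""

    def __init__(self, backbone, rpn_head):
        self.backbone = backbone
        self.rpn_head = rpn_head

    def __call__(self, x):
        # Mirror NeuronFusedBackboneRPNHead: also return the intermediate
        # features, since downstream stages (the RoI Heads) still need them.
        features = self.backbone(x)
        return self.rpn_head(features), features


backbone = lambda x: [v * 2 for v in x]  # stand-in for the ResNet backbone
rpn_head = lambda feats: sum(feats)      # stand-in for the RPN Head

fused = Fused(backbone, rpn_head)
preds, features = fused([1, 2, 3])
print(preds, features)  # 12 [2, 4, 6]
```

A single wrapper like this is what gets handed to the tracer, so the compiler sees one graph instead of two and no intermediate tensors cross the CPU/accelerator boundary between the two stages.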
Compiling the RoI Heads ~~~~~~~~~~~~~~~~~~~~~~~ In this section, we extract and compile part of the RoI Heads module (`predictor.model.roi_heads <https://github.com/facebookresearch/detectron2/blob/v0.6/detectron2/modeling/roi_heads/roi_heads.py>`__ L530-L778). This will run most of the remaining ``aten::linear`` and ``aten::addmm`` operators on Inf1. We cannot extract the entire RoI Heads module because it contains unsupported operators. Thus, we create a ``NeuronBoxHeadBoxPredictor`` wrapper that extracts specific parts of the ``roi_heads`` for compilation. The example input for compilation is the shape of the input into the ``self.roi_heads.box_head.forward`` function. We write another wrapper, ``ROIHead``, that combines the compiled ``roi_heads`` into the rest of the RoI module. The ``_forward_box`` and ``forward`` functions are from the ``predictor.model.roi_heads`` module. We re-write the ``NeuronRCNN`` wrapper to use the optimized RoI Heads wrapper as well as the fused backbone + RPN module. .. code:: ipython3 class NeuronBoxHeadBoxPredictor(torch.nn.Module): """ Wrapper that extracts the RoI Box Head and Box Predictor for compilation. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.roi_heads = model.roi_heads def forward(self, box_features): box_features = self.roi_heads.box_head(box_features) predictions = self.roi_heads.box_predictor(box_features) return predictions .. code:: ipython3 # Create the NeuronBoxHeadBoxPredictor wrapper predictor = get_model() box_head_predictor = NeuronBoxHeadBoxPredictor(predictor.model) box_head_predictor.eval() # Compile the wrapper example = torch.rand([1000, 256, 7, 7]) neuron_box_head_predictor = torch_neuron.trace(box_head_predictor, example) roi_head_filename = 'box_head_predictor.pt' torch.jit.save(neuron_box_head_predictor, roi_head_filename) .. code:: ipython3 class ROIHead(torch.nn.Module): """ Wrapper that combines the compiled `roi_heads` into the rest of the RoI module.
The `_forward_box` and `forward` functions are from the `predictor.model.roi_heads` module. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.roi_heads = model.roi_heads self.neuron_box_head_predictor = NeuronBoxHeadBoxPredictor(model) def _forward_box(self, features, proposals): features = [features[f] for f in self.roi_heads.box_in_features] box_features = self.roi_heads.box_pooler( features, [x.proposal_boxes for x in proposals]) predictions = self.neuron_box_head_predictor(box_features) pred_instances, _ = self.roi_heads.box_predictor.inference( predictions, proposals) return pred_instances def forward(self, images, features, proposals, targets=None): pred_instances = self._forward_box(features, proposals) pred_instances = self.roi_heads.forward_with_given_boxes( features, pred_instances) return pred_instances, {} .. code:: ipython3 class NeuronRCNN(torch.nn.Module): """ Wrapper that uses the fused backbone + RPN module and the optimized RoI Heads wrapper """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() # Create fused Backbone + RPN self.backbone_rpn = BackboneRPN(model) # Create Neuron RoI Head self.roi_heads = ROIHead(model) # Define pre and post-processing functions self.preprocess_image = model.preprocess_image self._postprocess = model._postprocess def forward(self, batched_inputs): images = self.preprocess_image(batched_inputs) proposals, features = self.backbone_rpn(images) results, _ = self.roi_heads(images, features, proposals, None) return self._postprocess(results, batched_inputs, images.image_sizes) .. code:: ipython3 # Initialize an R-CNN on CPU predictor = get_model() # Create the Neuron R-CNN on CPU neuron_rcnn = NeuronRCNN(predictor.model) neuron_rcnn.eval() # Inject the Neuron compiled models neuron_rcnn.backbone_rpn.backbone_rpn_head = neuron_backbone_rpn_head neuron_rcnn.roi_heads.neuron_box_head_predictor = neuron_box_head_predictor ..
code:: ipython3 # Run inference and print inference latency start = time.time() for _ in range(10): outputs = neuron_rcnn([inputs])[0] print(f'CPU Inference time: {((time.time() - start)/10):0.3f} s') .. code:: ipython3 with profiler.profile(record_shapes=True) as prof: with profiler.record_function("model_inference"): neuron_rcnn([inputs]) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=30)) Although the overall latency didn’t change significantly, running more of the model on Inf1 instead of CPU will free up CPU resources when multiple models are running in parallel. End-to-end Compilation and Inference ------------------------------------ In this section we provide standalone code that compiles and runs an optimized Detectron2 R-CNN on Inf1. Most of the code in this section is from the previous sections in this application note and it’s consolidated here for easy deployment. This section has the following main components: 1. Preprocessing and compilation functions 2. Wrappers that extract the R-CNN ResNet backbone, RPN Head, and RoI Head for compilation on Inf1. 3. A ``NeuronRCNN`` wrapper that creates an optimized end-to-end Detectron2 R-CNN model for inference on Inf1 4. Benchmarking code that runs parallelized inference for optimized throughput on Inf1 Benchmarking ~~~~~~~~~~~~ In the benchmarking section, we load multiple optimized R-CNN models and run them in parallel to maximize throughput. We use the experimental NeuronCore placement API, ``torch_neuron.experimental.neuron_cores_context()``, to ensure all compiled models in an optimized R-CNN model are loaded onto the same NeuronCore. Please note that the functionality and API of ``torch_neuron.experimental.neuron_cores_context()`` might change in future releases. We define a simple benchmark function that loads four optimized R-CNN models onto four separate NeuronCores, runs multithreaded inference, and calculates the corresponding latency and throughput.
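The latency/throughput arithmetic that the benchmark function performs can be illustrated with a minimal, framework-free sketch. The timing numbers below are synthetic, and a nearest-rank percentile stands in for the ``np.percentile`` call used by the full benchmark.

```python
# Synthetic per-request latencies (milliseconds) collected by worker threads
latencies_ms = [50, 55, 60, 52, 58, 90, 51, 53]
duration_s = 0.5   # wall-clock time for the whole run
batch_size = 1

# Throughput counts completed inferences per second of wall-clock time
throughput = (batch_size * len(latencies_ms)) / duration_s

# P50 latency via a simple nearest-rank percentile (no numpy required)
ordered = sorted(latencies_ms)
p50 = ordered[len(ordered) // 2]

print(throughput, p50)  # 16.0 55
```

Note that with parallel workers, throughput is derived from wall-clock duration, not from summing per-request latencies: overlapping requests make the sum of latencies exceed the elapsed time.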
We benchmark various numbers of loaded models to show the impact of parallelism. We observe that throughput increases (at the cost of latency) when more models are run in parallel on Inf1. Increasing the number of worker threads also improves throughput. Other improvements ~~~~~~~~~~~~~~~~~~ There are many additional optimizations that can be applied to RCNN models on Inf1 depending on the application: For latency sensitive applications: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Each of the five layers in the RPN head can be parallelized to decrease the overall latency. - The number of OMP Threads can be increased in the ROI Align kernel. Both of these optimizations will improve latency at the cost of decreasing throughput. For throughput sensitive applications: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - The input batch size can be increased to improve the NeuronCore utilization. .. code:: ipython3 import time import os import urllib.request from typing import Any, Union, Callable import cv2 import numpy as np from concurrent.futures import ThreadPoolExecutor import torch import torch_neuron from detectron2 import model_zoo from detectron2.engine import DefaultPredictor from detectron2.config import get_cfg from detectron2.modeling.meta_arch.rcnn import GeneralizedRCNN # ----------------------------------------------------------------------------- # Helper functions # ----------------------------------------------------------------------------- def get_model(): # Configure the R-CNN model CONFIG_FILE = "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml" WEIGHTS_FILE = "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml" cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file(CONFIG_FILE)) cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(WEIGHTS_FILE) cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 cfg.MODEL.DEVICE = 'cpu' # Send to CPU for Neuron Tracing # Create the R-CNN predictor wrapper predictor = DefaultPredictor(cfg) return predictor def get_image(): # Get a sample image 
filename = 'input.jpg' if not os.path.exists(filename): url = "http://images.cocodataset.org/val2017/000000439715.jpg" urllib.request.urlretrieve(url, filename) return filename def preprocess(original_image, predictor): """ A basic preprocessing function that sets the input height=800 and input width=800. The function is derived from the preprocessing steps in the Detectron2 `DefaultPredictor` module. """ height, width = original_image.shape[:2] resize_func = predictor.aug.get_transform(original_image) resize_func.new_h = 800 # Override height resize_func.new_w = 800 # Override width image = resize_func.apply_image(original_image) image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)) inputs = {"image": image, "height": height, "width": width} return inputs # ----------------------------------------------------------------------------- # Neuron modules # ----------------------------------------------------------------------------- class NeuronFusedBackboneRPNHead(torch.nn.Module): """ Wrapper to compile the fused ResNet backbone and RPN Head. """ def __init__(self, model: GeneralizedRCNN) -> None: super().__init__() self.backbone = model.backbone self.rpn_head = model.proposal_generator.rpn_head self.in_features = model.proposal_generator.in_features def forward(self, x): features = self.backbone(x) features_ = [features[f] for f in self.in_features] return self.rpn_head(features_), features class BackboneRPN(torch.nn.Module): """ Wrapper that uses the compiled `neuron_backbone_rpn` instead of the original backbone and RPN Head. We copy the remainder of the RPN `forward` code (`predictor.model.proposal_generator.forward`) to create a "fused" backbone + RPN module.
""" def __init__(self, model: GeneralizedRCNN) -&gt; None: super().__init__() self.backbone_rpn_head = NeuronFusedBackboneRPNHead(model) self._rpn = model.proposal_generator self.in_features = model.proposal_generator.in_features def forward(self, images): preds, features = self.backbone_rpn_head(images.tensor) features_ = [features[f] for f in self.in_features] pred_objectness_logits, pred_anchor_deltas = preds anchors = self._rpn.anchor_generator(features_) # Transpose the Hi*Wi*A dimension to the middle: pred_objectness_logits = [ # (N, A, Hi, Wi) -&gt; (N, Hi, Wi, A) -&gt; (N, Hi*Wi*A) score.permute(0, 2, 3, 1).flatten(1) for score in pred_objectness_logits ] pred_anchor_deltas = [ # (N, A*B, Hi, Wi) -&gt; (N, A, B, Hi, Wi) -&gt; (N, Hi, Wi, A, B) -&gt; (N, Hi*Wi*A, B) x.view(x.shape[0], -1, self._rpn.anchor_generator.box_dim, x.shape[-2], x.shape[-1]) .permute(0, 3, 4, 1, 2) .flatten(1, -2) for x in pred_anchor_deltas ] proposals = self._rpn.predict_proposals( anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes ) return proposals, features class NeuronBoxHeadBoxPredictor(torch.nn.Module): """ Wrapper that extracts the RoI Box Head and Box Predictor for compilation. """ def __init__(self, model: GeneralizedRCNN) -&gt; None: super().__init__() self.roi_heads = model.roi_heads def forward(self, box_features): box_features = self.roi_heads.box_head(box_features) predictions = self.roi_heads.box_predictor(box_features) return predictions class ROIHead(torch.nn.Module): """ Wrapper that combines the compiled `roi_heads` into the rest of the RoI module. The `_forward_box` and `forward` functions are from the `predictor.model.roi_heads` module. 
""" def __init__(self, model: GeneralizedRCNN) -&gt; None: super().__init__() self.roi_heads = model.roi_heads self.neuron_box_head_predictor = NeuronBoxHeadBoxPredictor(model) def _forward_box(self, features, proposals): features = [features[f] for f in self.roi_heads.box_in_features] box_features = self.roi_heads.box_pooler( features, [x.proposal_boxes for x in proposals]) predictions = self.neuron_box_head_predictor(box_features) pred_instances, _ = self.roi_heads.box_predictor.inference( predictions, proposals) return pred_instances def forward(self, images, features, proposals, targets=None): pred_instances = self._forward_box(features, proposals) pred_instances = self.roi_heads.forward_with_given_boxes( features, pred_instances) return pred_instances, {} class NeuronRCNN(torch.nn.Module): """ Wrapper that uses the fused backbone + RPN module and the optimized RoI Heads wrapper """ def __init__(self, model: GeneralizedRCNN) -&gt; None: super().__init__() # Create fused Backbone + RPN self.backbone_rpn = BackboneRPN(model) # Create Neuron RoI Head self.roi_heads = ROIHead(model) # Define pre and post-processing functions self.preprocess_image = model.preprocess_image self._postprocess = model._postprocess def forward(self, batched_inputs): images = self.preprocess_image(batched_inputs) proposals, features = self.backbone_rpn(images) results, _ = self.roi_heads(images, features, proposals, None) return self._postprocess(results, batched_inputs, images.image_sizes) # ----------------------------------------------------------------------------- # Compilation functions # ----------------------------------------------------------------------------- def compile( model: Union[Callable, torch.nn.Module], example_inputs: Any, filename: str, **kwargs ) -&gt; torch.nn.Module: """ Compiles the model for Inf1 if it doesn't already exist and saves it as the provided filename. model: A module or function which defines a torch model or computation. 
example_inputs: An example set of inputs which will be passed to the `model` during compilation. filename: Name of the compiled model kwargs: Extra `torch_neuron.trace` kwargs """ if not os.path.exists(filename): with torch.no_grad(): compiled_model = torch_neuron.trace(model, example_inputs, **kwargs) torch.jit.save(compiled_model, filename) # ----------------------------------------------------------------------------- # Benchmarking function # ----------------------------------------------------------------------------- def benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=4, batch_size=1, n_threads=4, iterations=200): """ A simple benchmarking function that loads `n_models` optimized models onto separate NeuronCores, runs multithreaded inference, and calculates the corresponding latency and throughput. """ # Load models models = list() for i in range(n_models): with torch_neuron.experimental.neuron_cores_context(i): # Create the RCNN with the fused backbone + RPN Head and compiled RoI Heads # Initialize an R-CNN on CPU predictor = get_model() # Create the Neuron R-CNN on CPU neuron_rcnn = NeuronRCNN(predictor.model) neuron_rcnn.eval() # Inject the Neuron compiled models neuron_rcnn.backbone_rpn.backbone_rpn_head = torch.jit.load(backbone_rpn_filename) neuron_rcnn.roi_heads.neuron_box_head_predictor = torch.jit.load(roi_head_filename) models.append(neuron_rcnn) # Warmup for _ in range(8): for model in models: model([inputs]) latencies = [] # Thread task def task(i): start = time.time() models[i]([inputs]) finish = time.time() latencies.append((finish - start) * 1000) begin = time.time() with ThreadPoolExecutor(max_workers=n_threads) as pool: for i in range(iterations): pool.submit(task, i % n_models) end = time.time() # Compute metrics boundaries = [50, 95, 99] names = [f'Latency P{i} (ms)' for i in boundaries] percentiles = np.percentile(latencies, boundaries) duration = end - begin # Display metrics results = { 'Samples': iterations, 'Batch 
Size': batch_size, 'Models': n_models, 'Threads': n_threads, 'Duration (s)': end - begin, 'Throughput (inf/s)': (batch_size * iterations) / duration, **dict(zip(names, percentiles)), } print('-' * 80) pad = max(map(len, results)) for key, value in results.items(): if isinstance(value, float): print(f'{key + ":" :<{pad + 1}} {value:0.3f}') else: print(f'{key + ":" :<{pad + 1}} {value}') print() if __name__ == "__main__": # Create and compile the combined backbone and RPN Head wrapper backbone_rpn_filename = 'backbone_rpn.pt' predictor = get_model() backbone_rpn_wrapper = NeuronFusedBackboneRPNHead(predictor.model) backbone_rpn_wrapper.eval() example = torch.rand([1, 3, 800, 800]) compile(backbone_rpn_wrapper, example, backbone_rpn_filename, strict=False) # Create and compile the RoI Head wrapper roi_head_filename = 'box_head_predictor.pt' predictor = get_model() box_head_predictor = NeuronBoxHeadBoxPredictor(predictor.model) box_head_predictor.eval() example = torch.rand([1000, 256, 7, 7]) compile(box_head_predictor, example, roi_head_filename) # Download a sample image from the COCO dataset and read it image_filename = get_image() image = cv2.imread(image_filename) inputs = preprocess(image, get_model()) # Benchmark the Neuron R-CNN model for various numbers of loaded models benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=1, n_threads=1) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=1, n_threads=2) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=2, n_threads=2) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=2, n_threads=4) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=4, n_threads=4) benchmark(backbone_rpn_filename, roi_head_filename, inputs, n_models=4, n_threads=8) </pre></body></html>
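The RPN-head parallelization suggested under “Other improvements” can be sketched with a thread pool that evaluates the five per-level computations concurrently. ``level_forward`` below is a placeholder for the real per-level RPN Head work, not a Detectron2 function; with compiled submodules, each call would dispatch to the accelerator while the Python threads overlap their waits.

```python
from concurrent.futures import ThreadPoolExecutor

def level_forward(feature):
    # Placeholder for the per-level RPN Head computation
    return sum(feature)

# Stand-ins for the five FPN feature levels
features = [[i] * 4 for i in range(5)]

# Evaluate all five levels in parallel; pool.map preserves input order
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(level_forward, features))

print(results)  # [0, 4, 8, 12, 16]
```

This trades throughput for latency: the five in-flight requests occupy the accelerator simultaneously, so fewer independent inference requests can be served concurrently.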
2023-09-29T20:55:19.973Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/faq.rst.txt
``` .. _neuron_faq: Neuron FAQ ========== .. contents:: Table of contents :local: :depth: 1 Neuron 2.x FAQ -------------- * :ref:`neuron2-intro-faq` Training Only FAQ ----------------- * :ref:`neuron-training-faq` Inference Only FAQ ------------------ * :ref:`neuron-f1-faq` * :ref:`trouble-shooting-inf1-faq` * :ref:`tf1_faq` * :ref:`tf2_faq` * :ref:`NeuronPerf <neuronperf_faq>` Runtime FAQ ----------- * :ref:`Neuron Runtime FAQ <neuron-runtime-faq>` Compiler FAQ ------------ * :ref:`neuronx_compiler_faq` * :ref:`neuron_compiler_faq` Neuron Containers ----------------- * :ref:`Neuron Containers FAQ <container-faq>` ONNX FAQ -------- * :ref:`onnx-faq` Support ------- * :ref:`neuron_roadmap_faq` * :ref:`contribute-faq` ```
2023-09-29T20:55:19.997Z
Neuron 2.x Introduction at Trn1 GA - FAQ — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/faq/neuron2-intro-faq.html#neuron2-intro-faq
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`

# Neuron 2.x Introduction at Trn1 GA - FAQ[#](#neuron-2-x-introduction-at-trn1-ga-faq "Permalink to this headline")

Table of contents

- [What Instances are supported with this release?](#what-instances-are-supported-with-this-release)
- [What ML frameworks support Trn1 in this release?](#what-ml-frameworks-support-trn1-in-this-release)
- [What ML frameworks support Inf1 in this release?](#what-ml-frameworks-support-inf1-in-this-release)
- [What are the common Neuron packages that are shared between Trn1 and Inf1?](#what-are-the-common-neuron-packages-that-are-shared-between-trn1-and-inf1)
- [What additional Neuron packages support Trn1 only?](#what-additional-neuron-packages-support-trn1-only)
- [What additional Neuron packages support Inf1 only?](#what-additional-neuron-packages-support-inf1-only)
- [What are the changes in Neuron packages and installation instructions introduced in this release?](#what-are-the-changes-in-neuron-packages-and-installation-instructions-introduced-in-this-release)
- [If I have trained a model on Trn1, can I load the model (from a checkpoint) and deploy it on Inf1?](#if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-deploy-it-on-inf1)
- [Can a Neuron model binary (NEFF) that was compiled on Trn1, run on Inf1?](#can-a-neuron-model-binary-neff-that-was-compiled-on-trn1-run-on-inf1)
- [Can a Neuron model binary (NEFF) that was compiled on Inf1, run on Trn1?](#can-a-neuron-model-binary-neff-that-was-compiled-on-inf1-run-on-trn1)
- [If I have trained a model on Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on CPU, GPU or other platforms?](#if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-cpu-gpu-or-other-platforms)
- [If I have trained a model on a platform other than Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on Trn1?](#if-i-have-trained-a-model-on-a-platform-other-than-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-trn1)
- [What distributed ML frameworks/libraries are supported by Neuron?](#what-distributed-ml-frameworks-libraries-are-be-supported-by-neuron)
- [What happened to releases 2.0-2.2?](#what-happened-to-releases-2-0-2-2)

## [What Instances are supported with this release?](#id4)[#](#what-instances-are-supported-with-this-release "Permalink to this headline")

This release supports Trn1 and Inf1.

## [What ML frameworks support Trn1 in this release?](#id5)[#](#what-ml-frameworks-support-trn1-in-this-release "Permalink to this headline")

In this release, PyTorch Neuron (`torch-neuronx`) supports Trn1. Future Neuron releases will add support for additional ML frameworks to Trn1.

## [What ML frameworks support Inf1 in this release?](#id6)[#](#what-ml-frameworks-support-inf1-in-this-release "Permalink to this headline")

In this release, the following ML frameworks support Inf1:

- PyTorch Neuron (`torch-neuron`) - the same version as in Neuron 1.19.2.
- TensorFlow Neuron (`tensorflow-neuron`) - the same version as released in Neuron 1.19.2.
- MXNet Neuron (`mxnet-neuron`) - the same version as in Neuron 1.19.2.

Note: Inf1 supports inference only.

## [What additional Neuron packages support Trn1 only?](#id8)[#](#what-additional-neuron-packages-support-trn1-only "Permalink to this headline")

Neuron packages supporting Trn1 only[#](#id2 "Permalink to this table")

| Package | Description |
| --- | --- |
| `neuronx-cc` | Neuron Compiler with XLA frontend |
| `torch-neuronx` | Neuron PyTorch with PyTorch XLA backend |
| `aws-neuronx-collective` | Collective Communication Operation library |
| `aws-neuronx-tools` | Neuron System Tools |
| `aws-neuronx-runtime-lib` | Neuron Runtime |

Note: In upcoming releases, `aws-neuronx-tools` and `aws-neuronx-runtime-lib` will also support Inf1.
## [What additional Neuron packages support Inf1 only?](#id9)[#](#what-additional-neuron-packages-support-inf1-only "Permalink to this headline")

Neuron packages supporting Inf1 only[#](#id3 "Permalink to this table")

| Package | Description |
| --- | --- |
| `neuron-cc` | Neuron Compiler (Inference only) |
| `torch-neuron` | Neuron PyTorch (Inference only) |
| `tensorflow-neuron` | TensorFlow Neuron (Inference only) |
| `mxnet-neuron` | MXNet Neuron (Inference only) |
| `neuronperf` | NeuronPerf |

## [If I have trained a model on Trn1, can I load the model (from a checkpoint) and deploy it on Inf1?](#id11)[#](#if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-deploy-it-on-inf1 "Permalink to this headline")

You can deploy the model on Inf1 or any other platform such as CPU, GPU or others, as long as the operators and data-types supported by the source platform are also supported by the target platform.

## [Can a Neuron model binary (NEFF) that was compiled on Trn1, run on Inf1?](#id12)[#](#can-a-neuron-model-binary-neff-that-was-compiled-on-trn1-run-on-inf1 "Permalink to this headline")

No, the model must be re-compiled for Inf1. This can be done directly using our [CLI](../../compiler/neuron-cc/command-line-reference.html#neuron-compiler-cli-reference) or via a framework such as PyTorch.

## [Can a Neuron model binary (NEFF) that was compiled on Inf1, run on Trn1?](#id13)[#](#can-a-neuron-model-binary-neff-that-was-compiled-on-inf1-run-on-trn1 "Permalink to this headline")

No. The model must be re-compiled for Trn1 using PyTorch.
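As a rough sketch of that re-compilation path on an Inf1 instance (assuming `torch-neuron` is installed; `MyModel` and the checkpoint filename are placeholders, not names from this FAQ):

```python
import torch
import torch_neuron  # noqa: F401 -- registers the torch.neuron namespace on Inf1

# Placeholder model class and checkpoint path -- substitute your own.
model = MyModel()
model.load_state_dict(torch.load('trn1_checkpoint.pt'))  # weights trained on Trn1
model.eval()

# Re-compile for Inf1: tracing produces a new NEFF for this target,
# since a NEFF compiled on Trn1 cannot run on Inf1.
example = torch.rand([1, 3, 224, 224])
model_inf1 = torch.neuron.trace(model, example_inputs=[example])
model_inf1.save('model_inf1.pt')
```

The checkpoint itself is a plain PyTorch state dict, which is why it moves across platforms; only the compiled binary is target-specific.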
## [If I have trained a model on Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on CPU, GPU or other platforms?](#id14)[#](#if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-cpu-gpu-or-other-platforms "Permalink to this headline")

Yes, as long as the operators and data-types supported by the source platform are also supported by the target platform. XLA operators supported by Trn1 can be found [here](../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.html#neuron-cc-ops-xla).

## [If I have trained a model on a platform other than Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on Trn1?](#id15)[#](#if-i-have-trained-a-model-on-a-platform-other-than-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-trn1 "Permalink to this headline")

Yes, as long as the operators and data-types supported by the source platform are also supported by the target platform. XLA operators supported by Trn1 can be found [here](../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.html#neuron-cc-ops-xla).

## [What distributed ML frameworks/libraries are supported by Neuron?](#id16)[#](#what-distributed-ml-frameworks-libraries-are-be-supported-by-neuron "Permalink to this headline")

PyTorch Neuron provides support for distributed training. See <Megatron-LM GPT Pretraining Tutorial> for an example.

## [What happened to releases 2.0-2.2?](#id17)[#](#what-happened-to-releases-2-0-2-2 "Permalink to this headline")

These releases correspond to prior, private-preview releases.

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal 
notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub 
Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../../frameworks/mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label 
for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> 
<label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/api_guide.html"> 
API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism 
</a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label 
for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> 
<a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i 
class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox"> <label for="toctree-checkbox-58"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../compiler/neuron-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html"> Neuron Supported operators </a> </li> </ul> </li> </ul> 
</a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#if-i-have-trained-a-model-on-a-platform-other-than-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-trn1"> If I have trained a model on a platform other than Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on Trn1? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-distributed-ml-frameworks-libraries-are-be-supported-by-neuron"> What distributed ML frameworks/libraries are be supported by Neuron? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-happened-to-releases-2-0-2-2"> What happened to releases 2.0-2.2? </a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> <div class="section" id="neuron-2-x-introduction-at-trn1-ga-faq"> <span id="neuron2-intro-faq"></span><h1>Neuron 2.x Introduction at Trn1 GA - FAQ<a class="headerlink" href="#neuron-2-x-introduction-at-trn1-ga-faq" title="Permalink to this headline">#</a></h1> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#what-instances-are-supported-with-this-release" id="id4">What Instances are supported with this release?</a></p></li> <li><p><a class="reference internal" href="#what-ml-frameworks-support-trn1-in-this-release" id="id5">What ML frameworks support Trn1 in this release?</a></p></li> <li><p><a class="reference internal" 
href="#what-ml-frameworks-support-inf1-in-this-release" id="id6">What ML frameworks support Inf1 in this release?</a></p></li> <li><p><a class="reference internal" href="#what-are-the-common-neuron-packages-that-are-shared-between-trn1-and-inf1" id="id7">What are the common Neuron packages that are shared between Trn1 and Inf1?</a></p></li> <li><p><a class="reference internal" href="#what-additional-neuron-packages-support-trn1-only" id="id8">What additional Neuron packages support Trn1 only?</a></p></li> <li><p><a class="reference internal" href="#what-additional-neuron-packages-support-inf1-only" id="id9">What additional Neuron packages support Inf1 only?</a></p></li> <li><p><a class="reference internal" href="#what-are-the-changes-in-neuron-packages-and-installation-instructions-introduced-in-this-release" id="id10">What are the changes in Neuron packages and installation instructions introduced in this release?</a></p></li> <li><p><a class="reference internal" href="#if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-deploy-it-on-inf1" id="id11">If I have trained a model on Trn1, can I load the model (from a checkpoint) and deploy it on Inf1?</a></p></li> <li><p><a class="reference internal" href="#can-a-neuron-model-binary-neff-that-was-compiled-on-trn1-run-on-inf1" id="id12">Can a Neuron model binary (NEFF) that was compiled on Trn1, run on Inf1?</a></p></li> <li><p><a class="reference internal" href="#can-a-neuron-model-binary-neff-that-was-compiled-on-inf1-run-on-trn1" id="id13">Can a Neuron model binary (NEFF) that was compiled on Inf1, run on Trn1?</a></p></li> <li><p><a class="reference internal" href="#if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-cpu-gpu-or-other-platforms" id="id14">If I have trained a model on Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on CPU, GPU or other platforms?</a></p></li> <li><p><a class="reference internal" 
href="#if-i-have-trained-a-model-on-a-platform-other-than-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-trn1" id="id15">If I have trained a model on a platform other than Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on Trn1?</a></p></li> <li><p><a class="reference internal" href="#what-distributed-ml-frameworks-libraries-are-be-supported-by-neuron" id="id16">What distributed ML frameworks/libraries are be supported by Neuron?</a></p></li> <li><p><a class="reference internal" href="#what-happened-to-releases-2-0-2-2" id="id17">What happened to releases 2.0-2.2?</a></p></li> </ul> </div> <div class="section" id="what-instances-are-supported-with-this-release"> <h2><a class="toc-backref" href="#id4">What Instances are supported with this release?</a><a class="headerlink" href="#what-instances-are-supported-with-this-release" title="Permalink to this headline">#</a></h2> <p>This release supports Trn1 and Inf1.</p> </div> <div class="section" id="what-ml-frameworks-support-trn1-in-this-release"> <h2><a class="toc-backref" href="#id5">What ML frameworks support Trn1 in this release?</a><a class="headerlink" href="#what-ml-frameworks-support-trn1-in-this-release" title="Permalink to this headline">#</a></h2> <p>In this release, PyTorch Neuron (<code class="docutils literal notranslate"><span class="pre">torch-neuronx</span></code>) supports Trn1. 
Future Neuron releases will add support for additional ML frameworks to Trn1.</p> </div> <div class="section" id="what-ml-frameworks-support-inf1-in-this-release"> <h2><a class="toc-backref" href="#id6">What ML frameworks support Inf1 in this release?</a><a class="headerlink" href="#what-ml-frameworks-support-inf1-in-this-release" title="Permalink to this headline">#</a></h2> <p>In this release, the following ML frameworks support Inf1:</p> <ul class="simple"> <li><p>PyTorch Neuron (<code class="docutils literal notranslate"><span class="pre">torch-neuron</span></code>) - the same version as in Neuron 1.19.2.</p></li> <li><p>TensorFlow Neuron (<code class="docutils literal notranslate"><span class="pre">tensorflow-neuron</span></code>) - the same version as in released in Neuron 1.19.2.</p></li> <li><p>MXNet Neuron (<code class="docutils literal notranslate"><span class="pre">mxnet-neuron</span></code>) - the same version as in Neuron 1.19.2.</p></li> </ul> <div class="admonition note"> <p class="admonition-title">Note</p> <p>Inf1 support Inference only.</p> </div> </div> <div class="section" id="what-are-the-common-neuron-packages-that-are-shared-between-trn1-and-inf1"> <h2><a class="toc-backref" href="#id7">What are the common Neuron packages that are shared between Trn1 and Inf1?</a><a class="headerlink" href="#what-are-the-common-neuron-packages-that-are-shared-between-trn1-and-inf1" title="Permalink to this headline">#</a></h2> <table class="colwidths-auto table-smaller-font-size table" id="id1"> <caption><span class="caption-text">Common Neuron packages between Inf1 and Trn1</span><a class="headerlink" href="#id1" title="Permalink to this table">#</a></caption> <thead> <tr class="row-odd"><th class="head"><p>Package</p></th> <th class="head"><p>Description</p></th> </tr> </thead> <tbody> <tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-dkms</span></code></p></td> <td><p>Neuron Driver</p></td> </tr> <tr 
class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-k8-plugin</span></code></p></td> <td><p>Neuron Plugin for Kubernetes</p></td> </tr> <tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-k8-scheduler</span></code></p></td> <td><p>Neuron Scheduler for Kubernetes</p></td> </tr> <tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-oci-hooks</span></code></p></td> <td><p>Neuron OCI Hooks support</p></td> </tr> </tbody> </table> </div> <div class="section" id="what-additional-neuron-packages-support-trn1-only"> <h2><a class="toc-backref" href="#id8">What additional Neuron packages support Trn1 only?</a><a class="headerlink" href="#what-additional-neuron-packages-support-trn1-only" title="Permalink to this headline">#</a></h2> <table class="colwidths-auto table-smaller-font-size table" id="id2"> <caption><span class="caption-text">Neuron packages supporting Trn1 only</span><a class="headerlink" href="#id2" title="Permalink to this table">#</a></caption> <thead> <tr class="row-odd"><th class="head"><p>Package</p></th> <th class="head"><p>Description</p></th> </tr> </thead> <tbody> <tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">neuronx-cc</span></code></p></td> <td><p>Neuron Compiler with XLA frontend</p></td> </tr> <tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">torch-neuronx</span></code></p></td> <td><p>Neuron PyTorch with PyTorch XLA backend</p></td> </tr> <tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-collective</span></code></p></td> <td><p>Collective Communication Operation library</p></td> </tr> <tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-tools</span></code></p></td> <td><p>Neuron System Tools</p></td> </tr> <tr 
class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-runtime-lib</span></code></p></td> <td><p>Neuron Runtime</p></td> </tr> </tbody> </table> <div class="admonition note"> <p class="admonition-title">Note</p> <p>In next releases <code class="docutils literal notranslate"><span class="pre">aws-neuronx-tools</span></code> and <code class="docutils literal notranslate"><span class="pre">aws-neuronx-runtime-lib</span></code> will support Inf1 also.</p> </div> </div> <div class="section" id="what-additional-neuron-packages-support-inf1-only"> <h2><a class="toc-backref" href="#id9">What additional Neuron packages support Inf1 only?</a><a class="headerlink" href="#what-additional-neuron-packages-support-inf1-only" title="Permalink to this headline">#</a></h2> <table class="colwidths-auto table-smaller-font-size table" id="id3"> <caption><span class="caption-text">Neuron packages supporting Inf1 only</span><a class="headerlink" href="#id3" title="Permalink to this table">#</a></caption> <thead> <tr class="row-odd"><th class="head"><p>Package</p></th> <th class="head"><p>Description</p></th> </tr> </thead> <tbody> <tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">neuron-cc</span></code></p></td> <td><p>Neuron Compiler (Inference only)</p></td> </tr> <tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">torch-neuron</span></code></p></td> <td><p>Neuron PyTorch (Inference only)</p></td> </tr> <tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">tensorflow-neuron</span></code></p></td> <td><p>TensorFlow Neuron (Inference only)</p></td> </tr> <tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">mxnet-neuron</span></code></p></td> <td><p>MXNet Neuron (Inference only)</p></td> </tr> <tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">aneuronperf</span></code></p></td> <td><p>NeuronPerf</p></td> </tr> </tbody> </table> </div> <div class="section" id="what-are-the-changes-in-neuron-packages-and-installation-instructions-introduced-in-this-release"> <h2><a class="toc-backref" href="#id10">What are the changes in Neuron packages and installation instructions introduced in this release?</a><a class="headerlink" href="#what-are-the-changes-in-neuron-packages-and-installation-instructions-introduced-in-this-release" title="Permalink to this headline">#</a></h2> <p>For full details please see:</p> <ul class="simple"> <li><p><a class="reference internal" href="../announcements/neuron2.x/neuron230-packages-changes.html#neuron-packages-changes"><span class="std std-ref">Introducing Packaging and installation changes</span></a> application note.</p></li> </ul> </div> <div class="section" id="if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-deploy-it-on-inf1"> <h2><a class="toc-backref" href="#id11">If I have trained a model on Trn1, can I load the model (from a checkpoint) and deploy it on Inf1?</a><a class="headerlink" href="#if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-deploy-it-on-inf1" title="Permalink to this headline">#</a></h2> <p>You can deploy the model on Inf1 or any other platform such as CPU, GPU or others, as long as the operators and data-types supported by the source platform are also supported by the target platform.</p> </div> <div class="section" id="can-a-neuron-model-binary-neff-that-was-compiled-on-trn1-run-on-inf1"> <h2><a class="toc-backref" href="#id12">Can a Neuron model binary (NEFF) that was compiled on Trn1, run on Inf1?</a><a class="headerlink" href="#can-a-neuron-model-binary-neff-that-was-compiled-on-trn1-run-on-inf1" title="Permalink to this headline">#</a></h2> <p>No, the model must be re-compiled for Inf1. 
This can be done directly using our <a class="reference internal" href="../../compiler/neuron-cc/command-line-reference.html#neuron-compiler-cli-reference"><span class="std std-ref">CLI</span></a> or via a framework such as <span class="xref std std-ref">PyTorch</span>.</p> </div> <div class="section" id="can-a-neuron-model-binary-neff-that-was-compiled-on-inf1-run-on-trn1"> <h2><a class="toc-backref" href="#id13">Can a Neuron model binary (NEFF) that was compiled on Inf1, run on Trn1?</a><a class="headerlink" href="#can-a-neuron-model-binary-neff-that-was-compiled-on-inf1-run-on-trn1" title="Permalink to this headline">#</a></h2> <p>No. The model must be re-compiled for Trn1 using <span class="xref std std-ref">PyTorch</span>.</p> </div> <div class="section" id="if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-cpu-gpu-or-other-platforms"> <h2><a class="toc-backref" href="#id14">If I have trained a model on Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on CPU, GPU or other platforms?</a><a class="headerlink" href="#if-i-have-trained-a-model-on-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-cpu-gpu-or-other-platforms" title="Permalink to this headline">#</a></h2> <p>Yes, as long as the operators and data-types supported by the source platform are also supported by the target platform.</p> <p>XLA operators supported by Trn1 can be found <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.html#neuron-cc-ops-xla"><span class="std std-ref">here</span></a>.</p> </div> <div class="section" id="if-i-have-trained-a-model-on-a-platform-other-than-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-trn1"> <h2><a class="toc-backref" href="#id15">If I have trained a model on a platform other than Trn1, can I load the model (from a checkpoint) and fine-tune it or deploy it on 
Trn1?</a><a class="headerlink" href="#if-i-have-trained-a-model-on-a-platform-other-than-trn1-can-i-load-the-model-from-a-checkpoint-and-fine-tune-it-or-deploy-it-on-trn1" title="Permalink to this headline">#</a></h2> <p>Yes, as long as the operators and data-types supported by the source platform are also supported by the target platform.</p> <p>XLA operators supported by Trn1 can be found <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.html#neuron-cc-ops-xla"><span class="std std-ref">here</span></a>.</p> </div> <div class="section" id="what-distributed-ml-frameworks-libraries-are-be-supported-by-neuron"> <h2><a class="toc-backref" href="#id16">What distributed ML frameworks/libraries are be supported by Neuron?</a><a class="headerlink" href="#what-distributed-ml-frameworks-libraries-are-be-supported-by-neuron" title="Permalink to this headline">#</a></h2> <p>PyTorch Neuron provides support for distributed training. See <span class="xref std std-ref">&lt;Megatron-LM GPT Pretraining Tutorial&gt;</span> for an example.</p> </div> <div class="section" id="what-happened-to-releases-2-0-2-2"> <h2><a class="toc-backref" href="#id17">What happened to releases 2.0-2.2?</a><a class="headerlink" href="#what-happened-to-releases-2-0-2-2" title="Permalink to this headline">#</a></h2> <p>These releases correspond to prior, private-preview releases.</p> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div 
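The portability rules in the FAQ above (framework checkpoints move across platforms; compiled NEFF binaries do not) can be summarized in a small helper. This is purely illustrative — the function name and platform strings are ours, not part of any Neuron API:

```python
# Illustrative summary of the FAQ's portability rules.
# A framework checkpoint can move between platforms (subject to
# operator/data-type support, checked elsewhere); a compiled NEFF
# binary only runs on the platform it was compiled for.

def is_portable(artifact: str, source: str, target: str) -> bool:
    if artifact == "checkpoint":
        return True
    if artifact == "neff":
        return source == target
    raise ValueError(f"unknown artifact: {artifact!r}")

print(is_portable("checkpoint", "trn1", "inf1"))  # True: re-compile, then deploy
print(is_portable("neff", "trn1", "inf1"))        # False: NEFFs are not cross-compatible
```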
By AWS © Copyright 2023, Amazon.com.
2023-09-29T20:55:20.145Z
Update to latest MXNet Neuron — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/mxnet-neuron/setup/mxnet-update.html#update-neuron-mxnet
# Update to latest MXNet Neuron — AWS Neuron Documentation

_This document is relevant for_: `Inf1`

## Update to latest MXNet Neuron[#](#update-to-latest-mxnet-neuron "Permalink to this headline")

Note

- Instructions in this page only apply to setting up Neuron components on a Linux host running Ubuntu or Amazon Linux AMI.
- For an example of how to install Neuron components in a container, see [Tutorial Docker environment setup](../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup) and our neuron-containers documentation for more details.

Table of contents

- [Develop on AWS ML accelerator instance](#develop-on-aws-ml-accelerator-instance)
- [Compile on compute instance](#compile-on-compute-instance)
- [Deploy on AWS ML accelerator instance](#deploy-on-aws-ml-accelerator-instance)

## [Develop on AWS ML accelerator instance](#id1)[#](#develop-on-aws-ml-accelerator-instance "Permalink to this headline")

The simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance. This is the recommended workflow when first starting to work with a Neuron device or when optimizing a model.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the setup instructions for the corresponding Base DLAMI.

Important

For a successful installation or update to Neuron 1.20.0 and newer releases:

- Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.
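The instructions that follow use `python3.8` on the Ubuntu 20 Base DLAMI and `python3.7` on the Amazon Linux 2 Base DLAMI for the Jupyter kernel-install step. A small helper (purely illustrative, not part of the Neuron SDK) can make that mapping explicit when scripting the setup:

```python
# Interpreter used for the Jupyter kernel-install step on each
# Base DLAMI, per the instructions in this guide.
KERNEL_PYTHON = {
    "ubuntu20-dlami-base": "python3.8",
    "al2-dlami-base": "python3.7",
}

def kernel_python(ami: str) -> str:
    try:
        return KERNEL_PYTHON[ami]
    except KeyError:
        raise ValueError(f"no instructions for AMI {ami!r}") from None

print(kernel_python("ubuntu20-dlami-base"))  # python3.8
```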
**MXNet 1.8.0**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mx_neuron neuron-cc
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mx_neuron neuron-cc
```

**MXNet 1.5.1**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

## [Compile on compute instance](#id2)[#](#compile-on-compute-instance "Permalink to this headline")

If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance. This setup is helpful when compiling large, complex models that require a large amount of memory, or during a CI/CD process where models are compiled in a separate step prior to deployment.

**MXNet 1.8.0**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mx_neuron neuron-cc
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mx_neuron neuron-cc
```

**MXNet 1.5.1**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

## [Deploy on AWS ML accelerator instance](#id3)[#](#deploy-on-aws-ml-accelerator-instance "Permalink to this headline")

During deployment it can be beneficial to reduce the number of components installed in the system.
For use cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.

Note: If you are using a regular U18, U20, or AL2 AMI, follow the setup instructions for the corresponding Base DLAMI.

Important

For a successful installation or update to Neuron 1.20.0 and newer releases:

- Uninstall `aws-neuron-dkms` by running: `sudo apt remove aws-neuron-dkms` or `sudo yum remove aws-neuron-dkms`
- Install or upgrade to the latest Neuron driver (`aws-neuron-dkms`) by following the "Setup Guide" instructions.

**MXNet 1.8.0**

**Ubuntu 20 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mx_neuron neuron-cc
```

**Amazon Linux 2 DLAMI Base**

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mx_neuron neuron-cc
```

MXNet 1.5.1

Ubuntu 20 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.

```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

Amazon Linux 2 DLAMI Base

Note

For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
```
# Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0
```

_This document is relevant for_: `Inf1`
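After completing any of the update paths above, a quick sanity check can confirm what actually got installed. This is an optional sketch, not part of the official instructions: run it inside the activated venv, and expect version numbers matching the release you installed (the fallback messages are hypothetical additions for illustration).

```shell
# Optional post-update sanity check (run inside the activated venv).
# Print the installed compiler version, or a fallback message if it is absent.
python3 -m pip show neuron-cc 2>/dev/null | grep '^Version' || echo "neuron-cc not installed"
# Confirm the MXNet framework imports and report its version.
python3 -c "import mxnet as mx; print('MXNet', mx.__version__)" 2>/dev/null || echo "mxnet not importable"
```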
tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> 
<a class="reference internal" href="../tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p 
aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" 
href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" 
href="../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a 
class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> 
Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox"> <label for="toctree-checkbox-58"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html"> Neuron Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-customops/index.html"> Neuron C++ Custom Operators </a> <input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox"> <label for="toctree-checkbox-59"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox"> <label for="toctree-checkbox-60"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html"> Custom Operators API Reference Guide [Experimental] </a> </li> </ul> </li> <li 
class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox"> <label for="toctree-checkbox-61"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html"> Neuron Custom C++ Operators Developer Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox"> <label for="toctree-checkbox-62"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/misc-customops.html"> Misc (Neuron Custom C++ Operators) </a> <input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox"> <label for="toctree-checkbox-63"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html"> Neuron Custom C++ Tools Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html"> Neuron Custom C++ Library Release Notes </a> </li> </ul> </li> </ul> </li> <li 
class="toctree-l1 has-children"> <a class="reference internal" href="../../../tools/index.html"> Neuron Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox"> <label for="toctree-checkbox-64"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/neuron-sys-tools/index.html"> System Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox"> <label for="toctree-checkbox-65"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html"> Neuron-Monitor User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html"> Neuron-Top User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html"> Neuron-LS User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html"> Neuron Profile User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html"> Neuron-Sysfs User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html"> NCCOM-TEST User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html"> What's New </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/tensorboard/index.html"> TensorBoard </a> <input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox"> <label for="toctree-checkbox-66"> <i class="fas fa-chevron-down"> </i> 
</label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html"> Track Training Progress in TensorBoard using PyTorch Neuron </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html"> TensorBoard Plugin for Neuron (Trn1) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html"> What's New </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html"> TensorBoard Plugin for Neuron (Inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/helper-tools/index.html"> Helper Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox"> <label for="toctree-checkbox-67"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html"> Check Model </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html"> GatherInfo </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/neuronperf/index.html"> NeuronPerf (Beta) </a> <input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox"> <label for="toctree-checkbox-68"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html"> Overview </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html"> Terminology </a> </li> <li class="toctree-l3"> <a class="reference internal" 
<main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="update-to-latest-mxnet-neuron"> <span id="update-neuron-mxnet"></span><h1>Update to latest MXNet Neuron<a class="headerlink" href="#update-to-latest-mxnet-neuron" title="Permalink to this headline">#</a></h1> <div class="admonition note"> <p class="admonition-title">Note</p> <ul class="simple"> <li><p>The instructions on this page apply only to setting up Neuron components on a Linux host running Ubuntu or Amazon Linux.</p></li> <li><p>For an example of how to install Neuron components in a container, see <a class="reference internal" href="../../../containers/tutorials/tutorial-docker-env-setup.html#tutorial-docker-env-setup"><span class="std std-ref">Tutorial Docker environment setup</span></a> and our <span class="xref std std-ref">neuron-containers</span> documentation for more details.</p></li> </ul> </div> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#develop-on-aws-ml-accelerator-instance" id="id1">Develop on AWS ML accelerator instance</a></p></li> <li><p><a class="reference internal" href="#compile-on-compute-instance" id="id2">Compile on compute instance</a></p></li> <li><p><a class="reference internal" href="#deploy-on-aws-ml-accelerator-instance" id="id3">Deploy on AWS ML accelerator instance</a></p></li> </ul> </div> <div class="section" id="develop-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id1">Develop on AWS ML accelerator instance</a><a class="headerlink" href="#develop-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>The
simplest environment setup for model development installs all Neuron SDK components directly on an AWS ML accelerator instance: the Neuron framework extensions, compiler, runtime, and tools. This allows you to compile, execute, and performance-tune your model, all on the same instance. This is the recommended workflow when you are first starting to work with a Neuron device or when optimizing a model.</p> <p>Note: If you are using a regular Ubuntu 18, Ubuntu 20, or Amazon Linux 2 AMI, follow the same setup instructions as the corresponding Base DLAMI.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For a successful installation or update to newer releases (Neuron 1.20.0 and newer):</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running: <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code></p></li> <li><p>Install or upgrade to the latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-0" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-0"> MXNet 1.8.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-2" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-2"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful
installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mx_neuron neuron-cc
</pre></div> </div> </div> <input id="sd-tab-item-3" name="sd-tab-set-1" type="radio"> <label class="sd-tab-label" for="sd-tab-item-3"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mx_neuron neuron-cc
</pre></div> </div> </div> </div> </div> <input id="sd-tab-item-1" name="sd-tab-set-0" type="radio"> <label class="sd-tab-label" for="sd-tab-item-1"> MXNet 1.5.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-4" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-4"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv
source aws_neuron_venv_mxnet_inf1/bin/activate

# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)"
pip install jupyter notebook
pip install environment_kernels

# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com

# Update MXNet Neuron
wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl
python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0
</pre></div> </div> </div> <input id="sd-tab-item-5" name="sd-tab-set-2" type="radio"> <label class="sd-tab-label" for="sd-tab-item-5"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a
successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0 </pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="compile-on-compute-instance"> <h2><a class="toc-backref" href="#id2">Compile on compute instance</a><a class="headerlink" href="#compile-on-compute-instance" title="Permalink to this headline">#</a></h2> <p>If model compilation occurs outside the model deployment environment, you can install only the Neuron framework extensions and the compiler on any compute instance. 
This setup is helpful when compiling large complex models that require large amount of memory or during a CICD process where models are compiled in a separate step, prior to deployment.</p> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-6" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-6"> MXNet 1.8.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-8" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-8"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mx_neuron neuron-cc </pre></div> </div> </div> <input id="sd-tab-item-9" name="sd-tab-set-4" type="radio"> <label class="sd-tab-label" for="sd-tab-item-9"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each 
line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mx_neuron neuron-cc </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-7" name="sd-tab-set-3" type="radio"> <label class="sd-tab-label" for="sd-tab-item-7"> MXNet 1.5.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-10" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-10"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip 
repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0 </pre></div> </div> </div> <input id="sd-tab-item-11" name="sd-tab-set-5" type="radio"> <label class="sd-tab-label" for="sd-tab-item-11"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0 </pre></div> </div> </div> </div> </div> </div> </div> <div class="section" id="deploy-on-aws-ml-accelerator-instance"> <h2><a class="toc-backref" href="#id3">Deploy on AWS ML accelerator instance</a><a class="headerlink" href="#deploy-on-aws-ml-accelerator-instance" title="Permalink to this headline">#</a></h2> <p>During deployment it 
can be beneficial to reduce the number of components installed in the system. For use-cases where only inference is necessary (compilation is already complete), only the framework and runtime should be installed.</p> <p>Note: If you are using a regular U18, U20, or AL2 AMI, follow the same setup instructions as the Base DLAMIs respectively.</p> <div class="admonition important"> <p class="admonition-title">Important</p> <dl class="simple"> <dt>For successful installation or update to next releases (Neuron 1.20.0 and newer):</dt><dd><ul class="simple"> <li><p>Uninstall <code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code> by running: <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">apt</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code> or <code class="docutils literal notranslate"><span class="pre">sudo</span> <span class="pre">yum</span> <span class="pre">remove</span> <span class="pre">aws-neuron-dkms</span></code></p></li> <li><p>Install or upgrade to latest Neuron driver (<code class="docutils literal notranslate"><span class="pre">aws-neuron-dkms</span></code>) by following the “Setup Guide” instructions.</p></li> </ul> </dd> </dl> </div> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-12" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-12"> MXNet 1.8.0</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-14" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-14"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div 
class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mx_neuron neuron-cc </pre></div> </div> </div> <input id="sd-tab-item-15" name="sd-tab-set-7" type="radio"> <label class="sd-tab-label" for="sd-tab-item-15"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mx_neuron 
neuron-cc </pre></div> </div> </div> </div> </div> <input id="sd-tab-item-13" name="sd-tab-set-6" type="radio"> <label class="sd-tab-label" for="sd-tab-item-13"> MXNet 1.5.1</label><div class="sd-tab-content docutils"> <div class="sd-tab-set docutils"> <input checked="checked" id="sd-tab-item-16" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-16"> Ubuntu 20 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.8 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0 </pre></div> </div> </div> <input id="sd-tab-item-17" name="sd-tab-set-8" type="radio"> <label class="sd-tab-label" for="sd-tab-item-17"> Amazon Linux 2 DLAMI Base</label><div class="sd-tab-content docutils"> <div class="admonition note"> <p class="admonition-title">Note</p> <p>For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.</p> </div> <div class="highlight-text 
notranslate"><div class="highlight"><pre><span></span># Activate Python venv source aws_neuron_venv_mxnet_inf1/bin/activate # Install Jupyter notebook kernel pip install ipykernel python3.7 -m ipykernel install --user --name aws_neuron_venv_mxnet_inf1 --display-name "Python (mxnet_neuron)" pip install jupyter notebook pip install environment_kernels # Set pip repository pointing to the Neuron repository python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Update MXNet Neuron wget https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl pip install aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl python -m pip install --upgrade mxnet_neuron==1.5.1.* neuron-cc==1.15.0 </pre></div> </div> </div> </div> </div> </div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
2023-09-29T20:55:20.461Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/troubleshooting.rst.txt
``` .. _general-troubleshooting: Troubleshooting Guide ===================== .. contents:: Table of contents :local: :depth: 1 Training Only Troubleshooting ----------------------------- * :ref:`PyTorch Neuron for Training <pytorch-neuron-traning-troubleshooting>` Inference Only Troubleshooting ------------------------------ * :ref:`PyTorch Neuron for Inference <pytorch-neuron-inference-troubleshooting>` * :ref:`NeuronPerf <neuronperf_troubleshooting>` * :ref:`MXNet Neuron <mxnet_troubleshooting_guide>` Runtime Troubleshooting ------------------------------ * :ref:`Neuron Runtime Troubleshooting on Inf1 and Trn1 <nrt-troubleshooting>` Containers Troubleshooting -------------------------- * :ref:`Containers <container-troubleshooting>` Setup Troubleshooting --------------------- * :ref:`neuron-setup-troubleshooting` ```
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _general-troubleshooting: Troubleshooting Guide ===================== .. contents:: Table of contents :local: :depth: 1 Training Only Troubleshooting ----------------------------- * :ref:`PyTorch Neuron for Training &lt;pytorch-neuron-traning-troubleshooting&gt;` Inference Only Troubleshooting ------------------------------ * :ref:`PyTorch Neuron for Inference &lt;pytorch-neuron-inference-troubleshooting&gt;` * :ref:`NeuronPerf &lt;neuronperf_troubleshooting&gt;` * :ref:`MXNet Neuron &lt;mxnet_troubleshooting_guide&gt;` Runtime Troubleshooting ------------------------------ * :ref:`Neuron Runtime Troubleshooting on Inf1 and Trn1 &lt;nrt-troubleshooting&gt;` Containers Troubleshooting -------------------------- * :ref:`Containers &lt;container-troubleshooting&gt;` Setup Troubleshooting --------------------- * :ref:`neuron-setup-troubleshooting` </pre></body></html>
2023-09-29T20:55:20.641Z
Training with Neuron - FAQ — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/faq/training/neuron-training.html#neuron-training-faq
# Training with Neuron - FAQ — AWS Neuron Documentation

## Contents

- [Compute](#compute)
  - [How do I get started with training my model on Trn1?](#how-do-i-get-started-with-training-my-model-on-trn1)
  - [How do I setup EFA for multi-node training?](#how-do-i-setup-efa-for-multi-node-training)
  - [How do I know if I can train my models with Trainium?](#how-do-i-know-if-i-can-train-my-models-with-trainium)
  - [How should I size Trainium NeuronCores vs GPUs?](#how-should-i-size-trainium-neuroncores-vs-gpus)
  - [What are the time to train advantages of Trn1?](#what-are-the-time-to-train-advantages-of-trn1)
  - [What are some of the training performance results for Trn1?](#what-are-some-of-the-training-performance-results-for-trn1)
  - [Can I use CUDA libraries with AWS Trainium?](#can-i-use-cuda-libraries-with-aws-trainium)
- [Networking](#networking)
  - [What’s important to know about the networking in Trn1?](#whats-important-to-know-about-the-networking-in-trn1)
  - [How does Trainium accelerate collective communication operations?](#how-does-trainium-accelerates-collective-communication-operations)
  - [What does Strong/Weak Scaling mean?](#what-does-strong-weak-scaling-mean)
- [Usability](#usability)
  - [What has AWS done to improve usability of Trainium?](#what-have-aws-done-to-improve-usability-of-trainium)
  - [What other AWS services work with Trn1?](#what-other-aws-services-work-with-trn1)
  - [What tools are available to develop models with Trn1?](#what-tools-are-available-to-develop-models-with-trn1)
  - [How will compile time impact my work flow?](#how-will-compile-time-impact-my-work-flow)

_This document is relevant for_: `Trn1`, `Trn1n`

## Training with Neuron - FAQ[#](#training-with-neuron-faq "Permalink to this headline")

## [Compute](#id1)[#](#compute "Permalink to this headline")

### [How do I know if I can train my models with Trainium?](#id4)[#](#how-do-i-know-if-i-can-train-my-models-with-trainium "Permalink to this headline")

We aim to support a broad set of models and distribution libraries. We continuously add capabilities and enable new features via Neuron SDK releases, and we suggest that you follow our public roadmap and join our Slack and email lists.
### [How should I size Trainium NeuronCores vs GPUs?](#id5)[#](#how-should-i-size-trainium-neuroncores-vs-gpus "Permalink to this headline")

For simplicity, you should consider each NeuronCore within your instances as an independent deep learning compute engine, the equivalent of a GPU. As a point of comparison, a trn1.32xlarge has 32 NeuronCores, and their max performance is 40% higher than that of P4d for BF16/FP16/FP8, 2.5X faster for TF32, and 5X faster for FP32. Each NeuronCore is independent and connected to the rest of the NeuronCores within the instance via NeuronLink, and across instances with EFA. Each NeuronCore also has full access to the accelerator memory in the instance, which helps scale large models across NeuronCores using various collective compute techniques.

### [What are the time to train advantages of Trn1?](#id6)[#](#what-are-the-time-to-train-advantages-of-trn1 "Permalink to this headline")

While the answer is largely model dependent, training performance on Trn1 is fast thanks to multiple system-wide optimizations working in concert. Depending on the data type, you should expect between 1.4-5X higher throughput on Trn1 as compared to the latest GPU instances (P4d). For distributed workloads, 800Gbps EFA gives customers lower latency and 2x the throughput as compared to P4d (a Trn1n 1.6Tb option is coming soon). Each Trainium also has a dedicated collective compute (CC) engine, which enables running the CC ops in parallel to the NeuronCore compute. This enables another 10-15% acceleration of the overall workload. Finally, stochastic rounding enables running at half precision speeds (BF16) while maintaining accuracy at near full precision. This not only simplifies model development (no need for mixed precision), it also helps the loss function converge faster and reduces the memory footprint.
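The unbiased behavior of stochastic rounding can be illustrated with a small, framework-agnostic sketch in pure Python. This is an illustration of the rounding scheme itself, not Neuron code; the grid step of 1.0 is chosen only to make the effect obvious:

```python
import random

def stochastic_round(value, step):
    """Round `value` to the grid {k * step}, picking the upper neighbour with
    probability equal to the fractional position between the two neighbours.
    This makes the rounding unbiased: E[stochastic_round(x)] == x."""
    lower = (value // step) * step   # nearest grid point at or below value
    frac = (value - lower) / step    # fractional position in [0, 1)
    return lower + step if random.random() < frac else lower

random.seed(0)
n = 100_000
# Deterministic round-to-nearest would map 0.3 -> 0.0 every single time, so
# an average of many rounded 0.3s would collapse to 0. Stochastic rounding
# keeps the small contributions alive in expectation:
mean = sum(stochastic_round(0.3, 1.0) for _ in range(n)) / n
print(mean)  # close to 0.3
```

The same idea applied at BF16 granularity is what lets small gradient updates survive repeated rounding instead of being silently dropped.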
### [What are some of the training performance results for Trn1?](#id7)[#](#what-are-some-of-the-training-performance-results-for-trn1 "Permalink to this headline")

They are great! Please refer to the [Neuron Performance](../../benchmarks/index.html#benchmark) page for open-source model performance results. We encourage you to try it for your own models/applications.

### [Can I use CUDA libraries with AWS Trainium?](#id8)[#](#can-i-use-cuda-libraries-with-aws-trainium "Permalink to this headline")

AWS Trainium and Neuron plug into popular frameworks and automatically optimize model deployment on Neuron devices like Inferentia and Trainium. The Neuron SDK optimizes for Trainium without using closed-source dependencies like Nvidia CUDA, and does not require any application-level code changes to accelerate models. We believe this intentional approach allows developers freedom of choice with their code and models. If your applications have dependencies on CUDA (or other third-party closed-source artifacts), you will need to strip them out; from that point the Neuron compiler will take the model as-is and optimize it at the hardware level.

## [Networking](#id9)[#](#networking "Permalink to this headline")

### [What’s important to know about the networking in Trn1?](#id10)[#](#whats-important-to-know-about-the-networking-in-trn1 "Permalink to this headline")

Trn1 instances have the fastest EFA in AWS. Clocked at 800Gbps, they enable more collective communication than other training instances, which is important if your training job spans multiple servers. You should also expect lower latency, as we streamline the communication path between the dedicated collective communication engine on Trainium and the AWS Nitro EFA NICs.
### [How does Trainium accelerate collective communication operations?](#id11)[#](#how-does-trainium-accelerates-collective-communication-operations "Permalink to this headline")

Trainium introduces a dedicated collective compute engine that runs in parallel to the compute cores (aka NeuronCores). This improves convergence time of intermediate steps, as the communication happens in parallel to the compute. This capability, in addition to the faster and optimized EFA, results in better scalability and faster time to train as compared to other training instances in AWS.

### [What does Strong/Weak Scaling mean?](#id12)[#](#what-does-strong-weak-scaling-mean "Permalink to this headline")

To enable strong scaling, we optimized Trainium to be efficient at small batch sizes. Compared to GPUs, Trn1 maintains high efficiency even for small batch sizes. This allows you to scale out to thousands of devices without increasing the global mini-batch size at the same rate, which in turn leads to faster end-to-end training convergence.

In a weak-scaling setup, we show the optimal throughput with a sufficiently large batch size per Trainium. The large batch size is set to leverage the high core utilization so that the overall end-to-end training will be fast. This setup also enables a large global batch size, as it scales with the total number of nodes in the cluster.

## [Usability](#id13)[#](#usability "Permalink to this headline")

### [What has AWS done to improve usability of Trainium?](#id14)[#](#what-have-aws-done-to-improve-usability-of-trainium "Permalink to this headline")

Stochastic rounding enables running at half precision speeds (BF16) while maintaining accuracy at near full precision. This of course helps the loss function converge faster and reduces the memory footprint, but equally important, it simplifies model development: you can write your model in FP32, and Neuron/Trainium will auto-cast the model to BF16 and execute it with SR enabled.
There is no need to lose accuracy with pure BF16 runs and, more importantly, no need to experiment with mixed precision strategies to find the optimal settings. Eager debug mode provides a convenient utility to step through the code and evaluate operator correctness as part of your model creation/debug. For more details, please refer to the Neuron documentation.

### [What other AWS services work with Trn1?](#id15)[#](#what-other-aws-services-work-with-trn1 "Permalink to this headline")

Trn1, via its Neuron SDK, supports Amazon ECS, EKS, ParallelCluster, Batch, and Amazon SageMaker. Customers can also choose to run in a Neuron container within their self-managed container orchestration service (e.g., Kubernetes and Ray).

### [How will compile time impact my work flow?](#id17)[#](#how-will-compile-time-impact-my-work-flow "Permalink to this headline")

We understand compilation is a new step with Trainium, but as long as the overall time to train and cost to train are optimized, the compilation impact on these two metrics is minimized. To further reduce the impact of compilation time on usability, Neuron supports a persistent cache, where artifacts that have not changed since the last run can be reused, skipping compilation altogether. For developing and experimenting with new models, you can use the eager debug mode, which compiles (and caches) op-by-op, enabling quick evaluation without compiling large models. We are also working on a Neuron model analyzer (see the Neuron roadmap) that will recommend optimized hyperparameters, skipping full compilation per experiment.

_This document is relevant for_: `Trn1`, `Trn1n`
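The strong/weak scaling distinction discussed in the Networking section above can be sketched with a toy batch-size calculation. The per-device batch of 8 and global batch of 4096 are illustrative numbers only, not Neuron recommendations; the 32 NeuronCores per trn1.32xlarge figure comes from the text:

```python
def weak_scaling_global_batch(per_device_batch, devices):
    # Weak scaling: keep the per-device batch fixed, so the global
    # mini-batch grows linearly with the number of devices.
    return per_device_batch * devices

def strong_scaling_per_device_batch(global_batch, devices):
    # Strong scaling: keep the global mini-batch fixed, so each device
    # works on an ever-smaller share as the cluster grows.
    return global_batch // devices

cores_per_instance = 32  # NeuronCores in one trn1.32xlarge
for instances in (1, 4, 16):
    devices = cores_per_instance * instances
    print(devices,
          weak_scaling_global_batch(8, devices),           # grows: 256 -> 4096
          strong_scaling_per_device_batch(4096, devices))  # shrinks: 128 -> 8
```

A device that stays efficient as its per-device share shrinks (the strong-scaling column) is what lets you add nodes for faster convergence without inflating the global mini-batch.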
<!DOCTYPE html><html lang="en"><head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Training with Neuron - FAQ — AWS Neuron Documentation</title> <!-- Loaded before other Sphinx assets --> <link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet"> <link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet"> <link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css"> <link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2"> <link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2"> <link rel="stylesheet" type="text/css" href="../../../_static/pygments.css"> <link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css"> <link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css"> <link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css"> <link rel="stylesheet" type="text/css" href="../../../_static/contentui.css"> <link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css"> <link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css"> <!-- Pre-loaded scripts that we'll load fully later --> <link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"> <script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&amp;l=dataLayer&amp;cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" 
href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html"> API 
Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li 
class="toctree-l3"> <a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a 
class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li 
class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox"> <label for="toctree-checkbox-58"> <i class="fas fa-chevron-down"> </i> 
</label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html"> Neuron Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-customops/index.html"> Neuron C++ Custom Operators </a> <input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox"> <label for="toctree-checkbox-59"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox"> <label for="toctree-checkbox-60"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html"> Custom Operators API Reference Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox"> <label for="toctree-checkbox-61"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html"> Neuron Custom C++ Operators Developer Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a 
class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox"> <label for="toctree-checkbox-62"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/misc-customops.html"> Misc (Neuron Custom C++ Operators) </a> <input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox"> <label for="toctree-checkbox-63"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html"> Neuron Custom C++ Tools Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html"> Neuron Custom C++ Library Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../tools/index.html"> Neuron Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox"> <label for="toctree-checkbox-64"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/neuron-sys-tools/index.html"> System Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox"> <label for="toctree-checkbox-65"> <i class="fas fa-chevron-down"> </i> </label> 
<ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html"> Neuron-Monitor User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html"> Neuron-Top User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html"> Neuron-LS User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html"> Neuron Profile User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html"> Neuron-Sysfs User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html"> NCCOM-TEST User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html"> What's New </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/tensorboard/index.html"> TensorBoard </a> <input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox"> <label for="toctree-checkbox-66"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html"> Track Training Progress in TensorBoard using PyTorch Neuron </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html"> TensorBoard Plugin for Neuron (Trn1) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html"> What's New </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html"> TensorBoard Plugin for Neuron (Inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/helper-tools/index.html"> Helper Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox"> <label for="toctree-checkbox-67"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html"> Check Model </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html"> GatherInfo </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/neuronperf/index.html"> NeuronPerf (Beta) </a> <input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox"> <label for="toctree-checkbox-68"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html"> Overview </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html"> Terminology </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html"> Examples </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html"> Benchmark Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html"> Evaluate Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html"> Compile Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../tools/neuronperf/neuronperf_model_index_guide.html"> Model Index Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html"> API </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html"> Framework Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html"> Troubleshooting </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tools/neuronperf/rn.html"> What’s New </a> <input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox"> <label for="toctree-checkbox-69"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tools/neuronperf.html"> NeuronPerf 1.x Release Notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../calculator/neuron-calculator.html"> Neuron Calculator </a> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../setup/index.html"> Setup Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox"> <label for="toctree-checkbox-70"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../setup/torch-neuronx.html"> PyTorch Neuron (torch-neuronx) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../setup/torch-neuron.html"> PyTorch Neuron (torch-neuron) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../setup/tensorflow-neuronx.html"> Tensorflow Neuron (tensorflow-neuronx) </a> </li> <li 
class="toctree-l2"> <a class="reference internal" href="../../setup/tensorflow-neuron.html"> Tensorflow Neuron (tensorflow-neuron) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../setup/mxnet-neuron.html"> MxNet Neuron (mxnet-neuron) </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../containers/index.html"> Containers Deployment </a> <input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox"> <label for="toctree-checkbox-71"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html"> Locate Neuron DLC Image </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../containers/getting-started.html"> Getting Started </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../containers/kubernetes-getting-started.html"> Kubernetes Getting Started </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../containers/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox"> <label for="toctree-checkbox-72"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../containers/tutorials/inference/index.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox"> <label for="toctree-checkbox-73"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html"> Run inference in pytorch neuron container </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html"> Deploy a TensorFlow Resnet50 
model as a Kubernetes service </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../containers/tutorials/training/index.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox"> <label for="toctree-checkbox-74"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html"> Run training in Pytorch Neuron container </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html"> Deploy a simple mlp training script as a Kubernetes job </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../containers/developerflows.html"> Developer Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox"> <label for="toctree-checkbox-75"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html"> Deploy Neuron Container on EC2 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html"> Deploy Neuron Container on Elastic Container Service (ECS) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html"> Deploy Neuron Container on Elastic Kubernetes Service (EKS) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html"> Bring Your Own Neuron Container to Sagemaker Hosting (inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html"> FAQ, Troubleshooting and Release Note </a> <input 
<main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> <div class="section" id="training-with-neuron-faq"> <span id="neuron-training-faq"></span><h1>Training with Neuron - FAQ<a class="headerlink" href="#training-with-neuron-faq" title="Permalink to this headline">#</a></h1> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#compute" id="id1">Compute</a></p> <ul> <li><p><a class="reference internal" href="#how-do-i-get-started-with-training-my-model-on-trn1" id="id2">How do I get started with training my model on Trn1?</a></p></li> <li><p><a class="reference internal" href="#how-do-i-setup-efa-for-multi-node-training" id="id3">How do I setup EFA for multi-node training?</a></p></li> <li><p><a class="reference internal" href="#how-do-i-know-if-i-can-train-my-models-with-trainium" id="id4">How do I know if I can train my models with Trainium?</a></p></li> <li><p><a class="reference internal" href="#how-should-i-size-trainium-neuroncores-vs-gpus" id="id5">How should I size Trainium NeuronCores vs GPUs?</a></p></li> <li><p><a class="reference internal" href="#what-are-the-time-to-train-advantages-of-trn1" id="id6">What are the time to train advantages of Trn1?</a></p></li> <li><p><a class="reference internal" href="#what-are-some-of-the-training-performance-results-for-trn1" id="id7">What are some of the training performance results for Trn1?</a></p></li> <li><p><a class="reference internal" href="#can-i-use-cuda-libraries-with-aws-trainium" id="id8">Can I use CUDA libraries with AWS Trainium?</a></p></li> </ul> </li> <li><p><a class="reference internal" href="#networking" id="id9">Networking</a></p> <ul>
<li><p><a class="reference internal" href="#whats-important-to-know-about-the-networking-in-trn1" id="id10">What’s important to know about the networking in Trn1?</a></p></li> <li><p><a class="reference internal" href="#how-does-trainium-accelerates-collective-communication-operations" id="id11">How does Trainium accelerate collective communication operations?</a></p></li> <li><p><a class="reference internal" href="#what-does-strong-weak-scaling-mean" id="id12">What does Strong/Weak Scaling mean?</a></p></li> </ul> </li> <li><p><a class="reference internal" href="#usability" id="id13">Usability</a></p> <ul> <li><p><a class="reference internal" href="#what-have-aws-done-to-improve-usability-of-trainium" id="id14">What has AWS done to improve usability of Trainium?</a></p></li> <li><p><a class="reference internal" href="#what-other-aws-services-work-with-trn1" id="id15">What other AWS services work with Trn1?</a></p></li> <li><p><a class="reference internal" href="#what-tools-are-available-to-develop-models-with-trn1" id="id16">What tools are available to develop models with Trn1?</a></p></li> <li><p><a class="reference internal" href="#how-will-compile-time-impact-my-work-flow" id="id17">How will compile time impact my workflow?</a></p></li> </ul> </li> </ul> </div> <div class="section" id="compute"> <h2><a class="toc-backref" href="#id1">Compute</a><a class="headerlink" href="#compute" title="Permalink to this headline">#</a></h2> <div class="section" id="how-do-i-get-started-with-training-my-model-on-trn1"> <h3><a class="toc-backref" href="#id2">How do I get started with training my model on Trn1?</a><a class="headerlink" href="#how-do-i-get-started-with-training-my-model-on-trn1" title="Permalink to this headline">#</a></h3> <p>Once you select your machine learning framework, you can get started here: <a class="reference internal" href="../../quick-start/docs-quicklinks.html#docs-quick-links"><span class="std std-ref">Neuron Quick Links</span></a></p> </div>
<div class="section" id="how-do-i-setup-efa-for-multi-node-training"> <h3><a class="toc-backref" href="#id3">How do I setup EFA for multi-node training?</a><a class="headerlink" href="#how-do-i-setup-efa-for-multi-node-training" title="Permalink to this headline">#</a></h3> <p>To set up EFA, which is needed for multi-node training, see <a class="reference internal" href="../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html#setup-trn1-multi-node-execution"><span class="std std-ref">How to prepare trn1.32xlarge for multi-node execution</span></a></p> </div> <div class="section" id="how-do-i-know-if-i-can-train-my-models-with-trainium"> <h3><a class="toc-backref" href="#id4">How do I know if I can train my models with Trainium?</a><a class="headerlink" href="#how-do-i-know-if-i-can-train-my-models-with-trainium" title="Permalink to this headline">#</a></h3> <p>We aim to support a broad set of models and distribution libraries. We continuously add more capabilities and enable new features via Neuron SDK releases, and we suggest you follow our public roadmap and join our Slack and email lists.</p> </div> <div class="section" id="how-should-i-size-trainium-neuroncores-vs-gpus"> <h3><a class="toc-backref" href="#id5">How should I size Trainium NeuronCores vs GPUs?</a><a class="headerlink" href="#how-should-i-size-trainium-neuroncores-vs-gpus" title="Permalink to this headline">#</a></h3> <p>For simplicity, you should consider each NeuronCore within your instances as an independent deep learning compute engine, the equivalent of a GPU. As a point of comparison, a trn1.32xlarge has 32 NeuronCores, and their max performance is 40% higher than that of P4d for BF16/FP16/FP8, 2.5X faster for TF32, and 5X faster for FP32. Each NeuronCore is independent and connected to the rest of the NeuronCores within the instance via NeuronLink, and across instances with EFA.
Each NeuronCore also has full access to the accelerator memory in the instance, which helps scale large models across NeuronCores using various collective compute ops techniques.</p> </div> <div class="section" id="what-are-the-time-to-train-advantages-of-trn1"> <h3><a class="toc-backref" href="#id6">What are the time to train advantages of Trn1?</a><a class="headerlink" href="#what-are-the-time-to-train-advantages-of-trn1" title="Permalink to this headline">#</a></h3> <p>While the answer is largely model dependent, training performance on Trn1 is fast thanks to multiple system-wide optimizations working in concert. Depending on the data type, you should expect between 1.4-5X higher throughput on Trn1 as compared to the latest GPU instances (P4d). For distributed workloads, 800 Gbps EFA gives customers lower latency and 2x the throughput as compared to P4d (a Trn1n 1.6 Tbps option is coming soon). Each Trainium also has a dedicated collective compute (CC) engine, which enables running the CC ops in parallel to the NeuronCore compute. This enables a further 10-15% acceleration of the overall workload. Finally, stochastic rounding enables running at half-precision speeds (BF16) while maintaining accuracy at near full precision. This not only simplifies model development (no need for mixed precision), it also helps the loss function converge faster and reduces the memory footprint.</p> </div> <div class="section" id="what-are-some-of-the-training-performance-results-for-trn1"> <h3><a class="toc-backref" href="#id7">What are some of the training performance results for Trn1?</a><a class="headerlink" href="#what-are-some-of-the-training-performance-results-for-trn1" title="Permalink to this headline">#</a></h3> <p>They are great! Please refer to the <a class="reference internal" href="../../benchmarks/index.html#benchmark"><span class="std std-ref">Neuron Performance</span></a> page for open-source model performance results.
We encourage you to try it for your own models/applications.</p> </div> <div class="section" id="can-i-use-cuda-libraries-with-aws-trainium"> <h3><a class="toc-backref" href="#id8">Can I use CUDA libraries with AWS Trainium?</a><a class="headerlink" href="#can-i-use-cuda-libraries-with-aws-trainium" title="Permalink to this headline">#</a></h3> <p>AWS Trainium and Neuron plug into popular frameworks and automatically optimize model deployment on Neuron devices like Inferentia and Trainium. The Neuron SDK automatically optimizes for Trainium without using closed-source dependencies like NVIDIA CUDA, and requires no application-level code changes to accelerate models. We believe this intentional approach allows developers freedom of choice with their code and models. If your application has dependencies on CUDA (or other third-party closed-source artifacts), you will need to strip them out; from that point the Neuron compiler will take the model as is and optimize it at the hardware level.</p> </div> </div> <div class="section" id="networking"> <h2><a class="toc-backref" href="#id9">Networking</a><a class="headerlink" href="#networking" title="Permalink to this headline">#</a></h2> <div class="section" id="whats-important-to-know-about-the-networking-in-trn1"> <h3><a class="toc-backref" href="#id10">What’s important to know about the networking in Trn1?</a><a class="headerlink" href="#whats-important-to-know-about-the-networking-in-trn1" title="Permalink to this headline">#</a></h3> <p>Trn1 instances have the fastest EFA in AWS; clocked at 800 Gbps, they enable more collective communication than other training instances, which is important if your training job spans multiple servers. 
You should also expect lower latency, as we streamline the communication path between the dedicated collective communication engine on Trainium and the AWS Nitro EFA NICs.</p> </div> <div class="section" id="how-does-trainium-accelerates-collective-communication-operations"> <h3><a class="toc-backref" href="#id11">How does Trainium accelerate collective communication operations?</a><a class="headerlink" href="#how-does-trainium-accelerates-collective-communication-operations" title="Permalink to this headline">#</a></h3> <p>Trainium introduces a dedicated collective compute engine that runs in parallel to the compute cores (aka NeuronCores). This improves convergence time of intermediate steps, as the communication happens in parallel to the compute. This capability, in addition to the faster and optimized EFA, results in better scalability and faster time to train, as compared to other training instances in AWS.</p> </div> <div class="section" id="what-does-strong-weak-scaling-mean"> <h3><a class="toc-backref" href="#id12">What does Strong/Weak Scaling mean?</a><a class="headerlink" href="#what-does-strong-weak-scaling-mean" title="Permalink to this headline">#</a></h3> <p>To enable strong scaling, we optimized Trainium to be efficient at small batch sizes. Compared to GPUs, Trn1 maintains high efficiency even for small batch sizes. This allows you to scale out to thousands of devices without increasing the global mini-batch size at the same rate, which in turn leads to faster end-to-end training convergence.</p> <p>In a weak-scaling setup, we show the optimal throughput with a sufficiently large batch size per Trainium. The large batch size is set to leverage high core utilization so that the overall end-to-end training will be fast. 
This setup also enables a large global batch size, as it scales with the total number of nodes in the cluster.</p> </div> </div> <div class="section" id="usability"> <h2><a class="toc-backref" href="#id13">Usability</a><a class="headerlink" href="#usability" title="Permalink to this headline">#</a></h2> <div class="section" id="what-have-aws-done-to-improve-usability-of-trainium"> <h3><a class="toc-backref" href="#id14">What has AWS done to improve usability of Trainium?</a><a class="headerlink" href="#what-have-aws-done-to-improve-usability-of-trainium" title="Permalink to this headline">#</a></h3> <p>Stochastic rounding enables running at half-precision speeds (BF16) while maintaining accuracy at near full precision. This of course helps the loss function converge faster and reduces the memory footprint, but equally important, it simplifies model development: you can write your model in FP32, and Neuron/Trainium will auto-cast the model to BF16 and execute it with SR enabled. There is no need to lose accuracy with pure BF16 runs, and more importantly no need to experiment with mixed-precision strategies to find the optimal settings.</p> <p>Eager debug mode provides a convenient utility to step through the code and evaluate operator correctness as part of your model creation/debug. For more details, please refer to the Neuron documentation.</p> </div> <div class="section" id="what-other-aws-services-work-with-trn1"> <h3><a class="toc-backref" href="#id15">What other AWS services work with Trn1?</a><a class="headerlink" href="#what-other-aws-services-work-with-trn1" title="Permalink to this headline">#</a></h3> <p>Trn1, via its Neuron SDK, supports Amazon ECS, EKS, ParallelCluster, Batch, and Amazon SageMaker. 
Customers can also choose to run in a Neuron container within their self-managed container orchestration service (e.g., Kubernetes and Ray).</p> </div> <div class="section" id="what-tools-are-available-to-develop-models-with-trn1"> <h3><a class="toc-backref" href="#id16">What tools are available to develop models with Trn1?</a><a class="headerlink" href="#what-tools-are-available-to-develop-models-with-trn1" title="Permalink to this headline">#</a></h3> <p>When running training, evaluation, or inference workloads, you can use Neuron 2.x CLI tools such as neuron-ls and neuron-top to get insights into NeuronCore and NeuronDevice performance, memory utilization, and topology, as well as host vCPU performance and memory utilization. In addition, the Neuron Plugin for TensorBoard provides a standard GUI that enables profiling and debugging of models. TensorBoard views include:</p> <ul class="simple"> <li><p>Model overview: provides a summary of the model and the utilization on the Host and NeuronDevice</p></li> <li><p>Operators’ view: provides a breakdown of ML framework and HLO operators on both Host and NeuronDevice</p></li> <li><p>Code trace view: shows a timeline of the model execution at the framework and HLO operator level</p></li> <li><p>Hardware trace view: shows a timeline of the model execution at the hardware level (Host, NeuronDevice, Data Transfer)</p></li> <li><p>Topology view: shows the NeuronDevice topology within an instance</p></li> </ul> </div> <div class="section" id="how-will-compile-time-impact-my-work-flow"> <h3><a class="toc-backref" href="#id17">How will compile time impact my workflow?</a><a class="headerlink" href="#how-will-compile-time-impact-my-work-flow" title="Permalink to this headline">#</a></h3> <p>We understand compilation is a new step with Trainium, but as long as the overall time to train and cost to train are optimized, the impact of compilation on these two metrics is minimized. 
To further reduce the impact of compilation time on usability, Neuron supports a persistent cache, where artifacts that have not changed since the last run can be reused, skipping compilation altogether. For developing and experimenting with new models, you can use the eager debug mode, which compiles (and caches) op-by-op, enabling quick evaluation without compiling large models. We are also working on a Neuron model analyzer (see the Neuron roadmap) that will recommend optimized hyperparameters, skipping full compilation per experiment.</p> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> </div> </div> </div> </div> </main> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> </body></html>
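Stochastic rounding, credited in the answers above for preserving near-FP32 accuracy at BF16 speed, is easy to sketch numerically: instead of always rounding to the nearest representable value, a value is rounded up or down with probability proportional to its distance from each neighbor, so the rounding error is zero in expectation. The following NumPy sketch is illustrative only (a coarse artificial grid, not the Neuron hardware implementation):

```python
import numpy as np

def stochastic_round(x, step, rng):
    """Round each element of x to a multiple of `step`, rounding up with
    probability equal to the fractional distance past the lower grid point."""
    lower = np.floor(x / step) * step
    frac = (x - lower) / step            # in [0, 1)
    round_up = rng.random(x.shape) < frac
    return lower + round_up * step

rng = np.random.default_rng(0)
x = np.full(100_000, 0.3)
grid = 0.25                              # coarse grid so the bias is visible

nearest = np.round(x / grid) * grid      # round-to-nearest: every element -> 0.25
sr = stochastic_round(x, grid, rng)      # mix of 0.25 (80%) and 0.5 (20%)

print(nearest.mean())                    # 0.25 — systematic bias of -0.05
print(sr.mean())                         # ~0.30 — unbiased in expectation
```

Round-to-nearest loses 0.05 on every element, while the stochastic variant recovers the true mean; accumulated over millions of gradient updates, this is why SR lets BF16 training converge like FP32.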
2023-09-29T20:55:20.726Z
Troubleshooting for Inf1 - FAQ — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/faq/inference/trouble-shooting-faq.html#trouble-shooting-inf1-faq
# Troubleshooting for Inf1 - FAQ — AWS Neuron Documentation Table of contents - [Performance is not what I expect it to be, what’s the next step?](#performance-is-not-what-i-expect-it-to-be-what-s-the-next-step) - [Do I need to worry about the size of my model and the size of Inferentia memory? What problems can I expect to have?](#do-i-need-to-worry-about-size-of-model-and-size-of-inferentia-memory-what-problems-can-i-expect-to-have) - [How can I debug / profile my inference request?](#how-can-i-debug-profile-my-inference-request) - [How to report Bug/Feature Requests](#how-to-report-bug-feature-requests) ## [Performance is not what I expect it to be, what’s the next step?](#id1)[#](#performance-is-not-what-i-expect-it-to-be-what-s-the-next-step "Permalink to this headline") Please check our performance-optimization section for notes on performance tuning, including how to use pipelining and batching to improve performance. ## [How to report Bug/Feature Requests](#id4)[#](#how-to-report-bug-feature-requests "Permalink to this headline") We welcome you to use the Neuron GitHub issue tracker to report bugs or suggest features. When filing an issue, please check existing open and recently closed issues to make sure somebody else hasn’t already reported it. Please try to include as much information as you can. Details like these are incredibly useful: - A reproducible test case or series of steps - The version of our code being used - Any modifications you’ve made relevant to the bug - Anything unusual about your environment or deployment _This document is relevant for_: `Inf1`
</li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-training.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference 
internal" href="../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/torch/torch-neuronx/training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
href="../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> 
</label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i 
class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" 
id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../frameworks/tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li 
class="toctree-l2"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 
has-children"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../frameworks/mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> 
</li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" 
href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html"> API 
Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li 
class="toctree-l3"> <a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a 
class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li 
</a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-can-i-debug-profile-my-inference-request"> How can I debug / profile my inference request? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-to-report-bug-feature-requests"> How to report Bug/Feature Requests </a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="troubleshooting-for-inf1-faq"> <span id="trouble-shooting-inf1-faq"></span><h1>Troubleshooting for Inf1 - FAQ<a class="headerlink" href="#troubleshooting-for-inf1-faq" title="Permalink to this headline">#</a></h1> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#performance-is-not-what-i-expect-it-to-be-what-s-the-next-step" id="id1">Performance is not what I expect it to be, what’s the next step?</a></p></li> <li><p><a class="reference internal" href="#do-i-need-to-worry-about-size-of-model-and-size-of-inferentia-memory-what-problems-can-i-expect-to-have" id="id2">Do I need to worry about size of model and size of inferentia memory? 
what problems can I expect to have?</a></p></li> <li><p><a class="reference internal" href="#how-can-i-debug-profile-my-inference-request" id="id3">How can I debug / profile my inference request?</a></p></li> <li><p><a class="reference internal" href="#how-to-report-bug-feature-requests" id="id4">How to report Bug/Feature Requests</a></p></li> </ul> </div> <div class="section" id="performance-is-not-what-i-expect-it-to-be-what-s-the-next-step"> <h2><a class="toc-backref" href="#id1">Performance is not what I expect it to be, what’s the next step?</a><a class="headerlink" href="#performance-is-not-what-i-expect-it-to-be-what-s-the-next-step" title="Permalink to this headline">#</a></h2> <p>Please check our <span class="xref std std-ref">performance-optimization</span> section on performance tuning and other notes on how to use pipelining and batching to improve performance.</p> </div> <div class="section" id="do-i-need-to-worry-about-size-of-model-and-size-of-inferentia-memory-what-problems-can-i-expect-to-have"> <h2><a class="toc-backref" href="#id2">Do I need to worry about size of model and size of inferentia memory? 
what problems can I expect to have?</a><a class="headerlink" href="#do-i-need-to-worry-about-size-of-model-and-size-of-inferentia-memory-what-problems-can-i-expect-to-have" title="Permalink to this headline">#</a></h2> <p>Errors like this will be logged and can be found as shown <a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html#neuron-gatherinfo"><span class="std std-ref">Using Neuron GatherInfo Tool to collect debug and support information</span></a>.</p> </div> <div class="section" id="how-can-i-debug-profile-my-inference-request"> <h2><a class="toc-backref" href="#id3">How can I debug / profile my inference request?</a><a class="headerlink" href="#how-can-i-debug-profile-my-inference-request" title="Permalink to this headline">#</a></h2> <p>See <a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html#neuron-plugin-tensorboard"><span class="std std-ref">Neuron Plugin for TensorBoard (Inf1)</span></a></p> </div> <div class="section" id="how-to-report-bug-feature-requests"> <h2><a class="toc-backref" href="#id4">How to report Bug/Feature Requests</a><a class="headerlink" href="#how-to-report-bug-feature-requests" title="Permalink to this headline">#</a></h2> <p>We welcome you to use the Neuron GitHub issue tracker to report bugs or suggest features.</p> <p>When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn’t already reported the issue. Please try to include as much information as you can. 
Details like these are incredibly useful:</p> <ul class="simple"> <li><p>A reproducible test case or series of steps</p></li> <li><p>The version of our code being used</p></li> <li><p>Any modifications you’ve made relevant to the bug</p></li> <li><p>Anything unusual about your environment or deployment</p></li> </ul> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
TensorFlow 1.x FAQ — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tf1_faq.html#tf1-faq
# TensorFlow 1.x FAQ — AWS Neuron Documentation

Table of contents

- [How do I get started with TensorFlow?](#how-do-i-get-started-with-tensorflow)
- [What TensorFlow versions are supported by Neuron?](#what-tensorflow-versions-are-supported-by-neuron)
- [What operators are supported?](#what-operators-are-supported)
- [How do I compile my model?](#how-do-i-compile-my-model)
- [How do I deploy my model?](#how-do-i-deploy-my-model)
- [Where can I find tutorials and examples?](#where-can-i-find-tutorials-and-examples)
- [How to debug or profile my model?](#how-to-debug-or-profile-my-model)

## What operators are supported?

`neuron-cc list-operators --framework TENSORFLOW` prints the list of supported TensorFlow 1.x operators; these are the operators that run on the machine learning accelerator. Operators not in this list are still expected to work alongside the supported ones in native TensorFlow, although they are not accelerated by the hardware.

## How do I deploy my model?

The same way as deploying any TensorFlow [SavedModel](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/saved_model.md#user-content-save-and-restore-models). In Python TensorFlow, the easiest way is through the [tf.contrib.predictor module](https://docs.w3cub.com/tensorflow~python/tf/contrib/predictor/from_saved_model). If a Python-free deployment is preferred for performance or other reasons, [tensorflow-serving](https://www.tensorflow.org/tfx/guide/serving) is a great choice, and the AWS Neuron team provides pre-built model-server apt/yum packages named `tensorflow-model-server-neuron`.
## How to debug or profile my model?

At the TensorFlow level, the [v1 profiler](https://www.tensorflow.org/api_docs/python/tf/compat/v1/profiler/Profiler) is a great tool that provides an operator-level breakdown of inference execution time. Additionally, the [AWS Neuron TensorBoard integration](../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html#neuron-plugin-tensorboard) provides visibility into what is happening inside the Neuron runtime, and allows more fine-grained (but also more hardware-aware) reasoning about where to improve the performance of machine learning applications.

_This document is relevant for_: `Inf1`
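As a concrete illustration of the v1-profiler flow mentioned in the debugging answer, here is a minimal sketch that profiles a toy one-matmul graph; the graph and the tensor names `x`, `w`, `y` are stand-ins for a real inference graph. It uses only the `tf.compat.v1` API, so the same pattern also runs under TensorFlow 2.x.

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy graph standing in for a real inference graph.
x = tf.compat.v1.placeholder(tf.float32, [1, 4], name="x")
w = tf.compat.v1.get_variable("w", shape=[4, 2])
y = tf.matmul(x, w, name="y")

# FULL_TRACE asks the runtime to record per-op timing into run_meta.
run_opts = tf.compat.v1.RunOptions(trace_level=tf.compat.v1.RunOptions.FULL_TRACE)
run_meta = tf.compat.v1.RunMetadata()

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(y, feed_dict={x: np.ones([1, 4], np.float32)},
             options=run_opts, run_metadata=run_meta)

    # Print a per-operator time/memory breakdown to stdout.
    opts = tf.compat.v1.profiler.ProfileOptionBuilder.time_and_memory()
    tf.compat.v1.profiler.profile(sess.graph, run_meta=run_meta,
                                  cmd="op", options=opts)
```

`cmd="op"` aggregates the statistics per operator type; `cmd="scope"` would instead break them down by name scope, which is often more useful for locating a slow layer in a large model.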
class="reference internal" href="../index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> 
tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
href="tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li 
class="toctree-l4"> <a class="reference internal" href="api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../training.html"> Training </a> </li> </ul> </li> <li 
class="toctree-l1 has-children"> <a class="reference internal" href="../../mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li 
class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i 
class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b 
autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> 
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" 
name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../compiler/index.html"> Neuron Compiler </a> <input 
class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> 
</li> </ul> </li> </ul> </li> </ul> </div> </nav></div> </div> </div> <!-- Main content --> <div class="col py-0 content-container"> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="tensorflow-1-x-faq"> <span id="tf1-faq"></span><h1>TensorFlow 1.x FAQ<a class="headerlink" href="#tensorflow-1-x-faq" title="Permalink to this headline">#</a></h1> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#how-do-i-get-started-with-tensorflow" id="id1">How do I get started with TensorFlow?</a></p></li> <li><p><a class="reference internal" href="#what-tensorflow-versions-are-supported-by-neuron" id="id2">What TensorFlow versions are supported by Neuron?</a></p></li> <li><p><a class="reference internal" href="#what-operators-are-supported" id="id3">What operators are supported?</a></p></li> <li><p><a class="reference internal" href="#how-do-i-compile-my-model" id="id4">How do I compile my model?</a></p></li> <li><p><a class="reference internal" href="#how-do-i-deploy-my-model" id="id5">How do I deploy my model?</a></p></li> <li><p><a class="reference internal" href="#where-can-i-find-tutorials-and-examples" id="id6">Where can I find tutorials and examples?</a></p></li> <li><p><a class="reference internal" href="#how-to-debug-or-profile-my-model" id="id7">How to debug or profile my model?</a></p></li> </ul> </div> <div class="section" id="how-do-i-get-started-with-tensorflow"> <h2><a class="toc-backref" href="#id1">How do I get started with TensorFlow?</a><a class="headerlink" href="#how-do-i-get-started-with-tensorflow" title="Permalink to this headline">#</a></h2> <p>The easiest entry point is the tutorials offered by the AWS Neuron team.
For beginners, the <a class="reference internal" href="../../../src/examples/tensorflow/tensorflow_resnet50/resnet50.html"><span class="std std-ref">ResNet50 tutorial</span></a> is a good place to start.</p> </div> <div class="section" id="what-tensorflow-versions-are-supported-by-neuron"> <h2><a class="toc-backref" href="#id2">What TensorFlow versions are supported by Neuron?</a><a class="headerlink" href="#what-tensorflow-versions-are-supported-by-neuron" title="Permalink to this headline">#</a></h2> <p>TensorFlow version 1.15.5</p> </div> <div class="section" id="what-operators-are-supported"> <h2><a class="toc-backref" href="#id3">What operators are supported?</a><a class="headerlink" href="#what-operators-are-supported" title="Permalink to this headline">#</a></h2> <p><code class="docutils literal notranslate"><span class="pre">neuron-cc</span> <span class="pre">list-operators</span> <span class="pre">--framework</span> <span class="pre">TENSORFLOW</span></code> provides a list of supported TensorFlow 1.x operators, and they are the operators that run on the machine learning accelerator. Note that operators not in this list are still expected to work with the supported operators in native TensorFlow together, although not accelerated by the hardware.</p> </div> <div class="section" id="how-do-i-compile-my-model"> <h2><a class="toc-backref" href="#id4">How do I compile my model?</a><a class="headerlink" href="#how-do-i-compile-my-model" title="Permalink to this headline">#</a></h2> <p>tensorflow-neuron includes a public-facing compilation API called tfn.saved_model.compile. 
More can be found here <a class="reference internal" href="api-compilation-python-api.html#tensorflow-ref-neuron-compile-api"><span class="std std-ref">TensorFlow 1.x (tensorflow-neuron) Compilation API</span></a>.</p> </div> <div class="section" id="how-do-i-deploy-my-model"> <h2><a class="toc-backref" href="#id5">How do I deploy my model?</a><a class="headerlink" href="#how-do-i-deploy-my-model" title="Permalink to this headline">#</a></h2> <p>Same way as deploying any tensorflow <a class="reference external" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/saved_model.md#user-content-save-and-restore-models">SavedModel</a>. In Python TensorFlow, the easiest way is through the <a class="reference external" href="https://docs.w3cub.com/tensorflow~python/tf/contrib/predictor/from_saved_model">tf.contrib.predictor module</a>. If a Python-free deployment is preferred for performance or some other reasons, <a class="reference external" href="https://www.tensorflow.org/tfx/guide/serving">tensorflow-serving</a> is a great choice and the AWS Neuron team provides pre-built model server apt/yum packages named as <code class="docutils literal notranslate"><span class="pre">tensorflow-model-server-neuron</span></code>.</p> </div> <div class="section" id="where-can-i-find-tutorials-and-examples"> <h2><a class="toc-backref" href="#id6">Where can I find tutorials and examples ?</a><a class="headerlink" href="#where-can-i-find-tutorials-and-examples" title="Permalink to this headline">#</a></h2> <p><a class="reference internal" href="tutorials/index.html#tensorflow-tutorials"><span class="std std-ref">TensorFlow Tutorials</span></a> is a great place to start with.</p> </div> <div class="section" id="how-to-debug-or-profile-my-model"> <h2><a class="toc-backref" href="#id7">How to debug or profile my model?</a><a class="headerlink" href="#how-to-debug-or-profile-my-model" title="Permalink to this headline">#</a></h2> <p>At TensorFlow level, the <a 
class="reference external" href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/profiler/Profiler">v1 profiler</a> is a great tool that provides operator-level breakdown of the inference execution time. Additionally, the <a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html#neuron-plugin-tensorboard"><span class="std std-ref">AWS Neuron TensorBoard integration</span></a> provides visibility into what is happening inside of the Neuron runtime, and allows a more fine-grained (but also more hardware-awared) reasoning on where to improve the performance of machine learning applications.</p> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
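The partitioning behavior described in the operators answer above — supported operators run on the accelerator while the rest execute in native TensorFlow — can be sketched in plain Python. The `SUPPORTED` set and operator names below are illustrative only, not the actual output of `neuron-cc list-operators`:

```python
# Illustrative sketch of how Neuron-style compilation splits a TF1 graph:
# op types in the supported list are mapped to the accelerator, the rest
# fall back to native TensorFlow on CPU. SUPPORTED is a made-up subset.
SUPPORTED = {"Conv2D", "MatMul", "Relu", "AvgPool"}

def partition(op_types):
    """Split a list of graph op types into accelerated and CPU-fallback lists."""
    accelerated = [op for op in op_types if op in SUPPORTED]
    fallback = [op for op in op_types if op not in SUPPORTED]
    return accelerated, fallback

accelerated, fallback = partition(["Conv2D", "Relu", "NonMaxSuppression"])
print(accelerated)  # ['Conv2D', 'Relu']
print(fallback)     # ['NonMaxSuppression']
```

The real compiler partitions at the graph level (keeping accelerated subgraphs contiguous), but the membership test against the supported-operator list is the core idea.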
2023-09-29T20:55:21.022Z
TensorFlow 2.x FAQ — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tf2_faq.html#tf2-faq
# TensorFlow 2.x FAQ — AWS Neuron Documentation

_This document is relevant for_: `Inf1`

## Contents

- [How do I get started with TensorFlow?](#how-do-i-get-started-with-tensorflow)
- [What TensorFlow versions are supported by Neuron?](#what-tensorflow-versions-are-supported-by-neuron)
- [What operators are supported?](#what-operators-are-supported)
- [How do I compile my model?](#how-do-i-compile-my-model)
- [How do I deploy my model?](#how-do-i-deploy-my-model)
  - [Python tensorflow](#python-tensorflow)
  - [tensorflow-serving](#tensorflow-serving)
- [Where can I find tutorials and examples?](#where-can-i-find-tutorials-and-examples)
- [How to debug or profile my model?](#how-to-debug-or-profile-my-model)

## What TensorFlow versions are supported by Neuron?

The AWS Neuron team provides well-tested tensorflow-neuron packages that work with a range of official tensorflow releases, as long as the version of tensorflow-neuron matches that of tensorflow. For example, you may install `tensorflow-neuron==2.3.3.1.0.9999.0` on top of `tensorflow==2.3.3` and expect them to work together. Currently, tensorflow-neuron works with tensorflow versions 2.1.4, 2.2.3, 2.3.3, 2.4.2, and 2.5.0.
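The version-matching convention above — a tensorflow-neuron version string begins with the tensorflow release it was built against — can be sketched with a small helper. This helper is hypothetical, for illustration only; pip does not perform such a check for you:

```python
def neuron_matches_tf(neuron_version: str, tf_version: str) -> bool:
    """True if a tensorflow-neuron version string targets the given tensorflow release.

    tensorflow-neuron version strings start with the tensorflow release they
    were built against, e.g. '2.3.3.1.0.9999.0' targets tensorflow '2.3.3'.
    """
    return neuron_version.split(".")[:3] == tf_version.split(".")[:3]

print(neuron_matches_tf("2.3.3.1.0.9999.0", "2.3.3"))  # True
print(neuron_matches_tf("2.5.0.1.2.3.0", "2.3.3"))     # False
```

Checking this before installing avoids pip replacing an existing tensorflow installation with a different release.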
In a fresh Python environment, `pip install tensorflow-neuron` brings in the highest version (2.5.0 as of 07/13/2021), which then pulls `tensorflow==2.5.0` into the current environment. If you already have a particular version of tensorflow 2.x installed, pay attention to the precise version of tensorflow-neuron and install only the desired one. For example, in an existing Python environment with `tensorflow==2.3.3` installed, you may install tensorflow-neuron via `pip install tensorflow-neuron==2.3.3`, which will reuse the existing tensorflow installation.

## What operators are supported?

Due to fundamental backend design changes in the TensorFlow 2.x framework, the concept of "supported graph operators" is no longer well-defined. Please refer to [Accelerated Python APIs and graph operators](tensorflow2-accelerated-ops.html#tensorflow-ref-neuron-accelerated-ops) for a guide to the set of TensorFlow 2.x Python APIs and graph operators that can be accelerated by Neuron.

## How do I compile my model?

Compilation is done through a public API called `tfn.trace`, which resembles the compilation API of the AWS PyTorch Neuron integration. Programmatically, customers can execute the following code:

```
import tensorflow as tf
import tensorflow.neuron as tfn
...
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model_neuron = tfn.trace(model, example_inputs)
model_neuron.save('./model_neuron_dir')
...
model_loaded = tf.saved_model.load('./model_dir')
predict_func = model_loaded['serving_default']
model_loaded_neuron = tfn.trace(predict_func, example_inputs2)
model_loaded_neuron.save('./model_loaded_neuron_dir')
...
```

## How do I deploy my model?

### Python tensorflow

Pre-compiled models can be saved and reloaded back into a Python environment using regular tensorflow model-loading APIs, as long as tensorflow-neuron is installed.

```
import tensorflow as tf
model = tf.keras.models.load_model('./model_loaded_neuron_dir')
example_inputs = ...
output = model(example_inputs)
```

### tensorflow-serving

Pre-compiled models can be saved into SavedModel format via the tensorflow SavedModel APIs:

```
import tensorflow as tf
import tensorflow.neuron as tfn
...
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model_neuron = tfn.trace(model, example_inputs)
tf.saved_model.save(model_neuron, './model_neuron_dir/1')
```

The generated SavedModel './model_neuron_dir' can be loaded into tensorflow-model-server-neuron, which can be installed through apt or yum depending on the operating system. For example, on Ubuntu 18.04 LTS the following commands install and launch a tensorflow-model-server-neuron on a pre-compiled SavedModel:

```
sudo apt install tensorflow-model-server-neuron
# --model_base_path needs to be an absolute path
tensorflow_model_server_neuron --model_base_path=$(pwd)/model_neuron_dir
```

## How to debug or profile my model?

The [AWS Neuron TensorBoard integration](../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html#neuron-plugin-tensorboard) provides visibility into what is happening inside the Neuron runtime, and allows more fine-grained (but also more hardware-aware) reasoning about where to improve the performance of machine learning applications.

_This document is relevant for_: `Inf1`
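Once tensorflow-model-server-neuron is serving a model, predictions can be requested over the standard TensorFlow Serving REST API. A minimal sketch of building the request payload follows; the host, port, and model name in the comment are illustrative and depend on how the server was launched:

```python
import json

# TensorFlow Serving's REST predict API expects a JSON body with an
# "instances" list, one entry per input example in the batch.
batch = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
payload = json.dumps({"instances": batch})

# POST this payload to the server's predict endpoint, e.g.:
#   http://localhost:8501/v1/models/<model_name>:predict
# (the REST port is set with --rest_api_port when launching the server)
print(payload)
```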
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/training/index.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox"> <label for="toctree-checkbox-17"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html"> PyTorch Neuron neuron_parallel_compile CLI ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html"> PyTorch Neuron Environment Variables ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Profiling API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch/torch-neuronx/programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) 
</a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../torch/torch-neuronx/misc-training.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../torch/torch-neuronx/training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a 
class="reference internal" href="../index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> 
tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
href="tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li 
class="toctree-l4"> <a class="reference internal" href="api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../training.html"> Training </a> </li> </ul> </li> <li 
class="toctree-l1 has-children"> <a class="reference internal" href="../../mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li 
class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i 
class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b 
autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> 
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" 
name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../compiler/index.html"> Neuron Compiler </a> <input 
</a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>TensorFlow 2.x FAQ</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-do-i-get-started-with-tensorflow"> How do I get started with TensorFlow? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-tensorflow-versions-are-supported-by-neuron"> What TensorFlow versions are supported by Neuron? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-operators-are-supported"> What operators are supported? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-do-i-compile-my-model"> How do I compile my model? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-do-i-deploy-my-model"> How do I deploy my model? </a> <ul class="nav section-nav flex-column"> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#python-tensorflow"> Python tensorflow </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#tensorflow-serving"> tensorflow-serving </a> </li> </ul> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#where-can-i-find-tutorials-and-examples"> Where can I find tutorials and examples ? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-to-debug-or-profile-my-model"> How to debug or profile my model? 
</a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> <div class="section" id="tensorflow-2-x-faq"> <span id="tf2-faq"></span><h1>TensorFlow 2.x FAQ<a class="headerlink" href="#tensorflow-2-x-faq" title="Permalink to this headline">#</a></h1> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#how-do-i-get-started-with-tensorflow" id="id1">How do I get started with TensorFlow?</a></p></li> <li><p><a class="reference internal" href="#what-tensorflow-versions-are-supported-by-neuron" id="id2">What TensorFlow versions are supported by Neuron?</a></p></li> <li><p><a class="reference internal" href="#what-operators-are-supported" id="id3">What operators are supported?</a></p></li> <li><p><a class="reference internal" href="#how-do-i-compile-my-model" id="id4">How do I compile my model?</a></p></li> <li><p><a class="reference internal" href="#how-do-i-deploy-my-model" id="id5">How do I deploy my model?</a></p></li> <li><p><a class="reference internal" href="#where-can-i-find-tutorials-and-examples" id="id6">Where can I find tutorials and examples?</a></p></li> <li><p><a class="reference internal" href="#how-to-debug-or-profile-my-model" id="id7">How to debug or profile my model?</a></p></li> </ul> </div> <div class="section" id="how-do-i-get-started-with-tensorflow"> <h2><a class="toc-backref" href="#id1">How do I get started with TensorFlow?</a><a class="headerlink" href="#how-do-i-get-started-with-tensorflow" title="Permalink to this headline">#</a></h2> <p>The easiest entry points are the tutorials offered by the AWS Neuron team.
For beginners, the <a class="reference internal" href="../../../src/examples/tensorflow/huggingface_bert/huggingface_bert.html"><span class="std std-ref">HuggingFace DistilBERT Tutorial</span></a> is a good place to start.</p> </div> <div class="section" id="what-tensorflow-versions-are-supported-by-neuron"> <h2><a class="toc-backref" href="#id2">What TensorFlow versions are supported by Neuron?</a><a class="headerlink" href="#what-tensorflow-versions-are-supported-by-neuron" title="Permalink to this headline">#</a></h2> <p>The AWS Neuron team provides well-tested tensorflow-neuron packages that work with a range of official tensorflow releases, as long as the version of tensorflow-neuron matches that of tensorflow. For example, you may install <code class="docutils literal notranslate"><span class="pre">tensorflow-neuron==2.3.3.1.0.9999.0</span></code> on top of <code class="docutils literal notranslate"><span class="pre">tensorflow==2.3.3</span></code> and expect them to work together.</p> <p>Currently, tensorflow-neuron works with tensorflow versions 2.1.4, 2.2.3, 2.3.3, 2.4.2, and 2.5.0.</p> <p>In a fresh Python environment, <code class="docutils literal notranslate"><span class="pre">pip</span> <span class="pre">install</span> <span class="pre">tensorflow-neuron</span></code> would bring in the highest version (2.5.0 as of 07/13/2021), which then pulls <code class="docutils literal notranslate"><span class="pre">tensorflow==2.5.0</span></code> into the current environment.</p> <p>If you already have a particular version of tensorflow 2.x installed, it is recommended to pay attention to the precise version of tensorflow-neuron and install only the matching one.
For example, in an existing Python environment with <code class="docutils literal notranslate"><span class="pre">tensorflow==2.3.3</span></code> installed, you may install tensorflow-neuron by running pip install <code class="docutils literal notranslate"><span class="pre">tensorflow-neuron==2.3.3</span></code>, which will reuse the existing tensorflow installation.</p> </div> <div class="section" id="what-operators-are-supported"> <h2><a class="toc-backref" href="#id3">What operators are supported?</a><a class="headerlink" href="#what-operators-are-supported" title="Permalink to this headline">#</a></h2> <p>Due to fundamental backend design changes in the TensorFlow 2.x framework, the concept of “supported graph operators” is no longer well-defined. Please refer to <a class="reference internal" href="tensorflow2-accelerated-ops.html#tensorflow-ref-neuron-accelerated-ops"><span class="std std-ref">Accelerated Python APIs and graph operators</span></a> for a guide to the set of TensorFlow 2.x Python APIs and graph operators that can be accelerated by Neuron.</p> </div> <div class="section" id="how-do-i-compile-my-model"> <h2><a class="toc-backref" href="#id4">How do I compile my model?</a><a class="headerlink" href="#how-do-i-compile-my-model" title="Permalink to this headline">#</a></h2> <p>Compilation is achieved through a public API called tfn.trace, which resembles the compilation API of the AWS PyTorch Neuron integration.
Programmatically, customers can execute the following code.</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span> <span class="kn">import</span> <span class="nn">tensorflow.neuron</span> <span class="k">as</span> <span class="nn">tfn</span> <span class="o">...</span> <span class="n">model</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">Model</span><span class="p">(</span><span class="n">inputs</span><span class="o">=</span><span class="n">inputs</span><span class="p">,</span> <span class="n">outputs</span><span class="o">=</span><span class="n">outputs</span><span class="p">)</span> <span class="n">model_neuron</span> <span class="o">=</span> <span class="n">tfn</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">example_inputs</span><span class="p">)</span> <span class="n">model_neuron</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s1">'./model_neuron_dir'</span><span class="p">)</span> <span class="o">...</span> <span class="n">model_loaded</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">saved_model</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="s1">'./model_dir'</span><span class="p">)</span> <span class="n">predict_func</span> <span class="o">=</span> <span class="n">model_loaded</span><span class="o">.</span><span class="n">signatures</span><span class="p">[</span><span class="s1">'serving_default'</span><span class="p">]</span> <span class="n">model_loaded_neuron</span> <span class="o">=</span> <span class="n">tfn</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span
class="n">predict_func</span><span class="p">,</span> <span class="n">example_inputs2</span><span class="p">)</span> <span class="n">model_loaded_neuron</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s1">'./model_loaded_neuron_dir'</span><span class="p">)</span> <span class="o">...</span> </pre></div> </div> </div> <div class="section" id="how-do-i-deploy-my-model"> <h2><a class="toc-backref" href="#id5">How do I deploy my model?</a><a class="headerlink" href="#how-do-i-deploy-my-model" title="Permalink to this headline">#</a></h2> <div class="section" id="python-tensorflow"> <h3>Python tensorflow<a class="headerlink" href="#python-tensorflow" title="Permalink to this headline">#</a></h3> <p>Pre-compiled models can be saved and reloaded back into a Python environment using regular tensorflow model loading APIs, as long as tensorflow-neuron is installed.</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span> <span class="n">model</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">models</span><span class="o">.</span><span class="n">load_model</span><span class="p">(</span><span class="s1">'./model_loaded_neuron_dir'</span><span class="p">)</span> <span class="n">example_inputs</span> <span class="o">=</span> <span class="o">...</span> <span class="n">output</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">example_inputs</span><span class="p">)</span> </pre></div> </div> </div> <div class="section" id="tensorflow-serving"> <h3>tensorflow-serving<a class="headerlink" href="#tensorflow-serving" title="Permalink to this headline">#</a></h3> <p>Pre-compiled models can be saved into SavedModel format via tensorflow SavedModel 
APIs</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span> <span class="kn">import</span> <span class="nn">tensorflow.neuron</span> <span class="k">as</span> <span class="nn">tfn</span> <span class="o">...</span> <span class="n">model</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">Model</span><span class="p">(</span><span class="n">inputs</span><span class="o">=</span><span class="n">inputs</span><span class="p">,</span> <span class="n">outputs</span><span class="o">=</span><span class="n">outputs</span><span class="p">)</span> <span class="n">model_neuron</span> <span class="o">=</span> <span class="n">tfn</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">example_inputs</span><span class="p">)</span> <span class="n">tf</span><span class="o">.</span><span class="n">saved_model</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">model_neuron</span><span class="p">,</span> <span class="s1">'./model_neuron_dir/1'</span><span class="p">)</span> </pre></div> </div> <p>The generated SavedModel ‘./model_neuron_dir’ can be loaded into tensorflow-model-server-neuron, which can be installed through apt or yum based on the type of the operating system. 
For example, on Ubuntu 18.04 LTS the following commands install and launch tensorflow-model-server-neuron on a pre-compiled SavedModel.</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>sudo apt install tensorflow-model-server-neuron # --model_base_path needs to be an absolute path tensorflow_model_server_neuron --model_base_path=$(pwd)/model_neuron_dir </pre></div> </div> </div> </div> <div class="section" id="where-can-i-find-tutorials-and-examples"> <h2><a class="toc-backref" href="#id6">Where can I find tutorials and examples?</a><a class="headerlink" href="#where-can-i-find-tutorials-and-examples" title="Permalink to this headline">#</a></h2> <p>The <a class="reference internal" href="../../../src/examples/tensorflow/huggingface_bert/huggingface_bert.html"><span class="std std-ref">HuggingFace DistilBERT Tutorial</span></a> is a good place to start.</p> </div> <div class="section" id="how-to-debug-or-profile-my-model"> <h2><a class="toc-backref" href="#id7">How to debug or profile my model?</a><a class="headerlink" href="#how-to-debug-or-profile-my-model" title="Permalink to this headline">#</a></h2> <p><a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html#neuron-plugin-tensorboard"><span class="std std-ref">AWS Neuron TensorBoard integration</span></a> provides visibility into what is happening inside the Neuron runtime, and allows more fine-grained (but also more hardware-aware) reasoning about where to improve the performance of machine learning applications.</p> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright
2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
2023-09-29T20:55:21.239Z
Inference with Neuron - FAQ — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/faq/inference/neuron-faq.html#neuron-f1-faq
# Inference with Neuron - FAQ — AWS Neuron Documentation _This document is relevant for_: `Inf1` ## Inference with Neuron - FAQ[#](#inference-with-neuron-faq "Permalink to this headline") Table of contents - [What ML model types and operators are supported by AWS Neuron?](#what-ml-model-types-and-operators-are-supported-by-aws-neuron) - [Why is a compiler needed, and how do I use it?](#why-is-a-compiler-needed-and-how-do-i-use-it) - [I am using an ML framework today – what will change for me to use this?](#i-am-using-a-ml-framework-today-what-will-change-for-me-to-use-this) - [What is a NeuronCore Pipeline? How do I take advantage of it?](#what-is-a-neuroncore-pipeline-how-do-i-take-advantage-of-it) - [NeuronCores, NeuronCore Groups and NeuronCore Pipelines: What do they do?](#neuroncores-neuroncore-groups-and-neuroncore-pipelines-what-do-they-do) - [Can I use TensorFlow networks from tfhub.dev as-is? If not, what should I do?](#can-i-use-tensorflow-networks-from-tfhub-dev-as-is-if-not-what-should-i-do) ## [What ML model types and operators are supported by AWS Neuron?](#id1)[#](#what-ml-model-types-and-operators-are-supported-by-aws-neuron "Permalink to this headline") AWS Neuron includes a compiler that converts your trained machine learning models to a binary object for execution. The Neuron compiler supports many machine learning operators commonly used in computer vision, natural language processing, recommender engines and more. A list of supported ML operators and supported inputs is in [Neuron Supported operators](../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html#neuron-supported-operators). It’s important to mention that good performance does not require all of the model operators to run on the chip. In many cases, some of the operators, such as embeddings or image pre-processing, will continue to run on the instance CPUs while still providing compelling end-to-end performance.
We call this approach auto-partitioning, where the Neuron compiler optimizes the model execution based on which operators are most suitable to run on the CPU or on the chip. For the latest model architecture support, please refer to the model architecture fit and performance pages. ## [Why is a compiler needed, and how do I use it?](#id2)[#](#why-is-a-compiler-needed-and-how-do-i-use-it "Permalink to this headline") The Neuron compiler converts a model from a framework-level Neural Network graph, with operators like convolution and pooling, into a Neuron Device-specific instruction set, builds the schedule for execution of these instructions, and converts the model parameters into a format that the Neuron device can consume. The supported input formats include TensorFlow, PyTorch, and MXNet. The output from the compiler is a Neuron Executable File Format (NEFF) artifact. The NEFF contains a combination of binary code, the model parameters, and additional metadata needed by the Neuron runtime and profiler. ## [I am using an ML framework today – what will change for me to use this?](#id3)[#](#i-am-using-a-ml-framework-today-what-will-change-for-me-to-use-this "Permalink to this headline") To use Inferentia within the Inf1 instances, the developer needs to perform a one-time compilation of the pre-trained model to generate a NEFF, and then use it as the inference model across a fleet of Inf1 instances. - tensorflow-neuron - [PyTorch Neuron](../../../frameworks/torch/index.html#neuron-pytorch) - [MXNet Neuron](../../../frameworks/mxnet-neuron/index.html#neuron-mxnet) ## [What is a NeuronCore Pipeline? How do I take advantage of it?](#id4)[#](#what-is-a-neuroncore-pipeline-how-do-i-take-advantage-of-it "Permalink to this headline") A NeuronCore Pipeline is a unique technique to shard a specific Neural Network across multiple NeuronCores, to take advantage of the large on-chip cache instead of moving data in and out of external memory.
The result is increased throughput and reduced latency, which are typically important for real-time inference applications. All Inf1 instances support it; instances with multiple Inferentia accelerators, such as inf1.6xlarge or inf1.24xlarge, do so thanks to the fast chip-to-chip interconnect. Developers can choose to use NeuronCore Pipeline mode at the compile stage with an opt-in flag. neuron-cc provides further details. ## [NeuronCores, NeuronCore Groups and NeuronCore Pipelines: What do they do?](#id5)[#](#neuroncores-neuroncore-groups-and-neuroncore-pipelines-what-do-they-do "Permalink to this headline") Each Inferentia chip has four compute engines called NeuronCores. A NeuronCore Group is a way to aggregate NeuronCores to increase hardware utilization and assign models with the right compute sizing for a specific application. If you want to run multiple models in parallel, you can assign different models to separate NeuronCore Groups. A model compiled to use multiple NeuronCores in a NeuronCore Pipeline can be assigned to a NeuronCore Group with enough NeuronCores to load it. Finally, it is also possible for sets of Inferentia devices to be mapped to separate Neuron Runtimes. The [Neuron Features](../../arch/neuron-features/index.html#neuron-features-index) section has more information and examples. ## [Can I use TensorFlow networks from tfhub.dev as-is? If not, what should I do?](#id6)[#](#can-i-use-tensorflow-networks-from-tfhub-dev-as-is-if-not-what-should-i-do "Permalink to this headline") Yes. Models can be imported into TensorFlow, either through the standard model server, which appears as a simple command-line utility, or via the Python-based TensorFlow environment. The primary additional step needed is to compile the model into the Inferentia NEFF format. _This document is relevant for_: `Inf1`
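The compile-time opt-in mentioned above can be sketched as a direct neuron-cc invocation. This is a hedged sketch, not a definitive command for any particular model: `model.pb`, the `--io-config` contents, and `out.neff` are hypothetical placeholders, and the full set of required options is documented in the neuron-cc reference.

```shell
# Sketch: compile a frozen TensorFlow graph into a NEFF artifact,
# opting in to NeuronCore Pipeline mode across 4 NeuronCores.
# model.pb, the io-config shapes, and out.neff are hypothetical placeholders.
neuron-cc compile model.pb \
    --framework TENSORFLOW \
    --io-config '{"inputs": {"input0:0": [[1, 224, 224, 3], "float32"]}, "outputs": ["output0:0"]}' \
    --neuroncore-pipeline-cores 4 \
    --output out.neff
```

The framework integrations expose the same knob through their tracing/compilation APIs, so most users pass the flag via compiler arguments rather than invoking neuron-cc by hand.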
<!DOCTYPE html><html lang="en"><head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Inference with Neuron - FAQ — AWS Neuron Documentation</title> <!-- Loaded before other Sphinx assets --> <link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet"> <link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet"> <link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css"> <link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2"> <link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2"> <link rel="stylesheet" type="text/css" href="../../../_static/pygments.css"> <link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css"> <link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css"> <link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css"> <link rel="stylesheet" type="text/css" href="../../../_static/contentui.css"> <link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css"> <link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css"> <!-- Pre-loaded scripts that we'll load fully later --> <link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"> <script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&amp;l=dataLayer&amp;cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" 
src="../../../_static/documentation_options.js"></script> <script src="../../../_static/jquery.js"></script> <script src="../../../_static/underscore.js"></script> <script src="../../../_static/doctools.js"></script> <script src="../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script> <script src="../../../_static/contentui.js"></script> <script src="../../../_static/design-tabs.js"></script> <script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script> <script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script> <link rel="index" title="Index" href="../../../genindex.html"> <link rel="search" title="Search" href="../../../search.html"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="docsearch:language" content="en"> <!-- Google Analytics --> <style type="text/css"> ul.ablog-archive { list-style: none; overflow: auto; margin-left: 0px; } ul.ablog-archive li { float: left; margin-right: 5px; font-size: 80%; } ul.postlist a { font-style: italic; } ul.postlist-style-disc { list-style-type: disc; } ul.postlist-style-none { list-style-type: none; } ul.postlist-style-circle { list-style-type: circle; } </style> <!-- RTD Extra Head --> <link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css"> <script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "general/faq/inference/neuron-faq", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": 
"v2.14.1"}</script> <!-- Using this variable directly instead of using `JSON.parse` is deprecated. The READTHEDOCS_DATA global variable will be removed in the future. --> <script type="text/javascript"> READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML); </script> <script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script> <!-- end RTD <extrahead> --> <script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head> <body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled"> <!-- Checkboxes to toggle the left sidebar --> <input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar"> <label class="overlay overlay-navbar" for="__navigation"> <div class="visually-hidden">Toggle navigation sidebar</div> </label> <!-- Checkboxes to toggle the in-page toc --> <input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents"> <label class="overlay overlay-pagetoc" for="__page-toc"> <div class="visually-hidden">Toggle in-page Table of Contents</div> </label> <!-- Headers at the top --> <div class="announcement header-item noprint">Neuron 2.14.0 is released! 
class="toctree-l3"> <a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a 
class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas 
fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li 
class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox"> <label for="toctree-checkbox-58"> <i class="fas fa-chevron-down"> </i> 
</label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../compiler/neuron-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html"> Neuron Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../neuron-customops/index.html"> Neuron C++ Custom Operators </a> <input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox"> <label for="toctree-checkbox-59"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox"> <label for="toctree-checkbox-60"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html"> Custom Operators API Reference Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox"> <label for="toctree-checkbox-61"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html"> Neuron Custom C++ Operators Developer Guide [Experimental] </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a 
class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox"> <label for="toctree-checkbox-62"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../neuron-customops/misc-customops.html"> Misc (Neuron Custom C++ Operators) </a> <input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox"> <label for="toctree-checkbox-63"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html"> Neuron Custom C++ Tools Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html"> Neuron Custom C++ Library Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../tools/index.html"> Neuron Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox"> <label for="toctree-checkbox-64"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/neuron-sys-tools/index.html"> System Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox"> <label for="toctree-checkbox-65"> <i class="fas fa-chevron-down"> </i> </label> 
<ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html"> Neuron-Monitor User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html"> Neuron-Top User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html"> Neuron-LS User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html"> Neuron Profile User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html"> Neuron-Sysfs User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html"> NCCOM-TEST User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html"> What's New </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/tensorboard/index.html"> TensorBoard </a> <input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox"> <label for="toctree-checkbox-66"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html"> Track Training Progress in TensorBoard using PyTorch Neuron </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html"> TensorBoard Plugin for Neuron (Trn1) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html"> What's New </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html"> TensorBoard Plugin for Neuron (Inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/helper-tools/index.html"> Helper Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox"> <label for="toctree-checkbox-67"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html"> Check Model </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html"> GatherInfo </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../tools/neuronperf/index.html"> NeuronPerf (Beta) </a> <input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox"> <label for="toctree-checkbox-68"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html"> Overview </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html"> Terminology </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html"> Examples </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html"> Benchmark Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html"> Evaluate Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html"> Compile Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../../tools/neuronperf/neuronperf_model_index_guide.html"> Model Index Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html"> API </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html"> Framework Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html"> Troubleshooting </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../tools/neuronperf/rn.html"> What’s New </a> <input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox"> <label for="toctree-checkbox-69"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../release-notes/tools/neuronperf.html"> NeuronPerf 1.x Release Notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../calculator/neuron-calculator.html"> Neuron Calculator </a> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../setup/index.html"> Setup Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox"> <label for="toctree-checkbox-70"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../setup/torch-neuronx.html"> PyTorch Neuron (torch-neuronx) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../setup/torch-neuron.html"> PyTorch Neuron (torch-neuron) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../setup/tensorflow-neuronx.html"> Tensorflow Neuron (tensorflow-neuronx) </a> </li> <li 
class="toctree-l2"> <a class="reference internal" href="../../setup/tensorflow-neuron.html"> Tensorflow Neuron (tensorflow-neuron) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../setup/mxnet-neuron.html"> MxNet Neuron (mxnet-neuron) </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../containers/index.html"> Containers Deployment </a> <input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox"> <label for="toctree-checkbox-71"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html"> Locate Neuron DLC Image </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../containers/getting-started.html"> Getting Started </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../containers/kubernetes-getting-started.html"> Kubernetes Getting Started </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../containers/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox"> <label for="toctree-checkbox-72"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../containers/tutorials/inference/index.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox"> <label for="toctree-checkbox-73"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html"> Run inference in pytorch neuron container </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html"> Deploy a TensorFlow Resnet50 
model as a Kubernetes service </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../containers/tutorials/training/index.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox"> <label for="toctree-checkbox-74"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html"> Run training in Pytorch Neuron container </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html"> Deploy a simple mlp training script as a Kubernetes job </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../containers/developerflows.html"> Developer Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox"> <label for="toctree-checkbox-75"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html"> Deploy Neuron Container on EC2 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html"> Deploy Neuron Container on Elastic Container Service (ECS) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html"> Deploy Neuron Container on Elastic Kubernetes Service (EKS) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html"> Bring Your Own Neuron Container to Sagemaker Hosting (inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html"> FAQ, Troubleshooting and Release Note </a> <input 
class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox"> <label for="toctree-checkbox-76"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/troubleshooting.html"> Troubleshooting Neuron Containers </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/containers/neuron-containers.html"> Neuron Containers Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../release-notes/containers/neuron-k8.html"> Neuron K8 Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../devflows/index.html"> Developer Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox"> <label for="toctree-checkbox-77"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../containers/index.html"> Deploy Containers with Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox"> <label for="toctree-checkbox-78"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html"> Locate Neuron DLC Image </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/getting-started.html"> Getting Started </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../containers/kubernetes-getting-started.html"> Kubernetes Getting Started </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../containers/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" 
id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox"> <label for="toctree-checkbox-79"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/inference/index.html"> Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/tutorials/training/index.html"> Training </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../containers/developerflows.html"> Developer Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox"> <label for="toctree-checkbox-80"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html"> Deploy Neuron Container on EC2 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html"> Deploy Neuron Container on Elastic Container Service (ECS) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html"> Deploy Neuron Container on Elastic Kubernetes Service (EKS) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html"> Bring Your Own Neuron Container to Sagemaker Hosting (inf1) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html"> FAQ, Troubleshooting and Release Note </a> <input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox"> <label for="toctree-checkbox-81"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../containers/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" 
# Inference with Neuron - FAQ — AWS Neuron Documentation

_This document is relevant for_: `Inf1`

Table of contents

- [What ML model types and operators are supported by AWS Neuron?](#what-ml-model-types-and-operators-are-supported-by-aws-neuron)
- [Why is a compiler needed, and how do I use it?](#why-is-a-compiler-needed-and-how-do-i-use-it)
- [I am using a ML framework today – what will change for me to use this?](#i-am-using-a-ml-framework-today-what-will-change-for-me-to-use-this)
- [What is a NeuronCore Pipeline? How do I take advantage of it?](#what-is-a-neuroncore-pipeline-how-do-i-take-advantage-of-it)
- [NeuronCores, NeuronCore Groups and NeuronCore Pipelines: What do they do?](#neuroncores-neuroncore-groups-and-neuroncore-pipelines-what-do-they-do)
- [Can I use TensorFlow networks from tfhub.dev as-is? If not, what should I do?](#can-i-use-tensorflow-networks-from-tfhub-dev-as-is-if-not-what-should-i-do)

## What ML model types and operators are supported by AWS Neuron?

AWS Neuron includes a compiler that converts your trained machine learning model into a binary object for execution. The Neuron compiler supports many machine learning operators commonly used in computer vision, natural language processing, recommender engines and more. A list of supported ML operators and supported inputs is available in [Neuron Supported operators](../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html#neuron-supported-operators).

Good performance does not require every operator in the model to run on the chip. In many cases some operators, such as embeddings or image pre-processing, continue to run on the instance CPUs while still providing compelling end-to-end performance. We call this approach auto-partitioning: the Neuron compiler optimizes model execution by placing each operator where it is most suitable to run, on the CPU or on the chip.

For the latest model architecture support, please refer to the model architecture fit and performance pages.

## Why is a compiler needed, and how do I use it?

The Neuron compiler converts a model from a framework-level neural network graph, with operators like convolution and pooling, into a Neuron device-specific instruction set, builds the schedule for executing those instructions, and converts the model parameters into a format that the Neuron device can consume. The supported input formats include TensorFlow, PyTorch, and MXNet. The output from the compiler is a Neuron Executable File Format (NEFF) artifact. The NEFF contains a combination of binary code, the model parameters, and additional meta-data needed by the Neuron runtime and profiler.

## I am using a ML framework today – what will change for me to use this?

To use Inferentia within Inf1 instances, the developer performs a one-time compilation of the pre-trained model to generate a NEFF, and uses this as the inference model in a fleet of Inf1 instances.

- tensorflow-neuron
- [PyTorch Neuron](../../../frameworks/torch/index.html#neuron-pytorch)
- [MXNet Neuron](../../../frameworks/mxnet-neuron/index.html#neuron-mxnet)

## What is a NeuronCore Pipeline? How do I take advantage of it?

A NeuronCore Pipeline is a technique for sharding a specific neural network across multiple NeuronCores, to take advantage of the large on-chip cache instead of moving data in and out of external memory. The result is increased throughput and reduced latency, which is typically important for real-time inference applications. All Inf1 instances support it, and Inf1 instances with multiple Inferentia accelerators, such as inf1.6xlarge or inf1.24xlarge, can pipeline across chips thanks to the fast chip-to-chip interconnect.

Developers can choose to use NeuronCore Pipeline mode at the compile stage with an opt-in flag; neuron-cc provides further details.

## NeuronCores, NeuronCore Groups and NeuronCore Pipelines: What do they do?

Each Inferentia chip has four compute engines called NeuronCores. A NeuronCore Group is a way to aggregate NeuronCores to increase hardware utilization and assign models with the right compute sizing for a specific application. If you want to run multiple models in parallel, you can assign different models to separate NeuronCore Groups. A model compiled to use multiple NeuronCores in a NeuronCore Pipeline can be assigned to a NeuronCore Group with enough NeuronCores to load into. Finally, it is also possible for sets of Inferentia devices to be mapped to separate Neuron Runtimes. The [Neuron Features](../../arch/neuron-features/index.html#neuron-features-index) section has more information and examples.

## Can I use TensorFlow networks from tfhub.dev as-is? If not, what should I do?

Yes. Models in this format can be imported into TensorFlow, either as a standard model-server, in which case it appears as a simple command line utility, or via the Python-based TensorFlow environment. The primary additional step needed is to compile the model into the Inferentia NEFF format.

_This document is relevant for_: `Inf1`
2023-09-29T20:55:21.323Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/security.rst.txt
``` .. _security: Security Disclosures ===================== If you think you’ve found a potential security issue, please do not post it in the Issues. Instead, please follow the instructions here (https://aws.amazon.com/security/vulnerability-reporting/) or email AWS security directly (`mailto:[email protected] <mailto:[email protected]>`__). ```
2023-09-29T20:55:21.403Z
ONNX FAQ — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/faq/onnx-faq.html#onnx-faq
# ONNX FAQ — AWS Neuron Documentation

Table of contents

- [Can I use ONNX models with Neuron? If not, what should I do?](#can-i-use-onnx-models-with-neuron-if-not-what-should-i-do)

## Can I use ONNX models with Neuron? If not, what should I do?

AWS Neuron does not directly support compilation of models in the ONNX file format. The recommended way to compile a model that is in the ONNX file format is to first convert the model to PyTorch using a publicly available tool like [onnx2pytorch](https://github.com/ToriML/onnx2pytorch). Once the ONNX model is converted to PyTorch, it can then be compiled with the [`torch_neuron.trace()`](../../frameworks/torch/torch-neuron/api-compilation-python-api.html#torch_neuron.trace) function to produce a model that can run on Neuron.

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
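The ONNX-to-Neuron path above can be sketched as follows. This is a hedged sketch, assuming `onnx`, `onnx2pytorch`, and `torch-neuron` are installed; the file names and the input shape are placeholders you must adapt to your model, and `ConvertModel` is the conversion entry point documented by the onnx2pytorch project.

```python
import onnx
import torch
import torch_neuron  # registers the torch.neuron namespace (Inf1 only)
from onnx2pytorch import ConvertModel

# Step 1: load the ONNX model and convert it to an equivalent
# PyTorch module with onnx2pytorch.
onnx_model = onnx.load("model.onnx")  # path is illustrative
pytorch_model = ConvertModel(onnx_model)
pytorch_model.eval()

# Step 2: compile the converted model for Neuron. The example input's
# shape must match the ONNX model's expected input (placeholder here).
example = torch.rand(1, 3, 224, 224)
neuron_model = torch.neuron.trace(pytorch_model, example_inputs=[example])

# Step 3: save the compiled model for deployment on Inf1.
neuron_model.save("model_neuron.pt")
```

Conversion fidelity depends on onnx2pytorch's operator coverage, so validate the converted model's outputs against the original ONNX model before compiling.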
href="../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox"> <label for="toctree-checkbox-3"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html"> Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html"> BERT TorchServe Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html"> LibTorch C++ Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html"> Compiling and Deploying ResNet50 on Trn1 or Inf2 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html"> T5 model inference on Trn1 or Inf2 </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox"> <label for="toctree-checkbox-4"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/"> AWS Neuron Samples GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx"> 
Transformers Neuron GitHub samples </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox"> <label for="toctree-checkbox-5"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Tracing API for Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) NeuronCore Placement APIs <strong> [Experimental] </strong> </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Analyze API for Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) DataParallel API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" 
type="checkbox"> <label for="toctree-checkbox-6"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html"> NeuronCore Allocation and Model Placement for Inference ( <span class="xref std std-ref"> torch-neuronx </span> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html"> Comparison of Traced Inference versus XLA <span class="xref std std-ref"> Lazy Tensor </span> Inference ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html"> Data Parallel Inference on torch_neuronx </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox"> <label for="toctree-checkbox-7"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/torch/inference-torch-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox"> <label for="toctree-checkbox-8"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" 
href="../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox"> <label for="toctree-checkbox-9"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox"> <label for="toctree-checkbox-10"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox"> <label for="toctree-checkbox-11"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-compilation-python-api.html"> PyTorch Neuron trace Python API </a> </li> 
<li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html"> torch.neuron.DataParallel API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Core Placement API [Experimental] </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox"> <label for="toctree-checkbox-12"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/torch-neuron/bucketing-app-note.html"> Running Inference on Variable Input Shapes with Bucketing </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html"> Data Parallel Inference on PyTorch Neuron </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html"> Developer Guide - PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) <code class="xref py py-class docutils literal notranslate"> <span class="pre"> LSTM </span> </code> Support </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Core Placement </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html"> 
Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox"> <label for="toctree-checkbox-13"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Supported operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/troubleshooting-guide.html"> Troubleshooting Guide for PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/torch/torch-neuron/torch-neuron.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/torch/training-torch-neuronx.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox"> <label for="toctree-checkbox-14"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox"> <label for="toctree-checkbox-15"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/bert.html"> Hugging Face BERT Pretraining Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html"> Multi-Layer Perceptron Training Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html"> PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html"> Fine-tune T5 model on Trn1 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html"> ZeRO-1 Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html"> Analyze for Training Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/additional-examples-training.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox"> <label for="toctree-checkbox-16"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron"> AWS Neuron Reference for Nemo Megatron GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples"> AWS Neuron Samples for EKS </a> </li> <li class="toctree-l4"> <a 
class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples"> AWS Neuron Samples for AWS ParallelCluster </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox"> <label for="toctree-checkbox-17"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html"> PyTorch Neuron neuron_parallel_compile CLI ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html"> PyTorch Neuron Environment Variables ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Profiling API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" 
id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/misc-training.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../frameworks/torch/torch-neuronx/training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal 
notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub 
Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
Container to Sagemaker Hosting (inf1) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../devflows/inference/neo-then-hosting-devflow.html"> Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/training/sagemaker-flows.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox"> <label for="toctree-checkbox-91"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/training/sm-devflow/sm-training-devflow.html"> Train your model on SageMaker </a> </li> </ul> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples"> AWS Neuron Sagemaker Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../devflows/parallelcluster-flows.html"> Parallel Cluster </a> <input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox"> <label for="toctree-checkbox-92"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../devflows/inference/parallelcluster-flows.html"> Inference </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/training/parallelcluster-flows.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox"> <label for="toctree-checkbox-93"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/training/parallelcluster/parallelcluster-training.html"> Train your model on ParallelCluster </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../devflows/aws-batch-flows.html"> AWS Batch Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox"> <label for="toctree-checkbox-94"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../devflows/inference/aws-batch-flows.html"> Inference </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../devflows/training/aws-batch-flows.html"> Training </a> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> Learning Neuron </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../arch/index.html"> Architecture </a> <input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox"> <label for="toctree-checkbox-95"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inf1-arch.html"> AWS Inf1 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/trn1-arch.html"> AWS Trn1/Trn1n Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inf2-arch.html"> AWS Inf2 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inferentia.html"> Inferentia Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inferentia2.html"> Inferentia2 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/trainium.html"> Trainium Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/neuroncores-arch.html"> AWS NeuronCore Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" 
href="../arch/model-architecture-fit.html"> Neuron Model Architecture Fit Guidelines </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/glossary.html"> Neuron Glossary </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../arch/neuron-features/index.html"> Features </a> <input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox"> <label for="toctree-checkbox-96"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/data-types.html"> Data Types </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/rounding-modes.html"> Rounding Modes </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuroncore-batching.html"> Neuron Batching </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuroncore-pipeline.html"> NeuronCore Pipeline </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/collective-communication.html"> Collective Communication </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/control-flow.html"> Neuron Control Flow </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/custom-c%2B%2B-operators.html"> Neuron Custom C++ Operators </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/dynamic-shapes.html"> Neuron Dynamic Shapes </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../appnotes/index.html"> Application Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-97" 
name="toctree-checkbox-97" type="checkbox"> <label for="toctree-checkbox-97"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../announcements/neuron2.x/neuron2-intro.html"> Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/neuron1x/introducing-libnrt.html"> Introducing Neuron Runtime 2.x (libnrt.so) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/perf/neuron-cc/performance-tuning.html"> Performance Tuning </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/perf/neuron-cc/parallel-ncgs.html"> Parallel Execution using NEURON_RT_NUM_CORES </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/torch-neuron/rcnn-app-note.html"> Running R-CNNs on Inf1 </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html"> Generative LLM inference with Neuron </a> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../faq.html"> FAQ </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../troubleshooting.html"> Troubleshooting </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> About Neuron </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1"> <a class="reference internal" href="../../release-notes/release.html"> Release Details </a> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../roadmap-readme.html"> Roadmap </a> <input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox"> <label for="toctree-checkbox-98"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference external" 
href="https://github.com/orgs/aws-neuron/projects/1/views/1"> Neuron Public Roadmap </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../support.html"> Support </a> <input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox"> <label for="toctree-checkbox-99"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../sdk-policy.html"> SDK Maintenance Policy </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../security.html"> Security Disclosures </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../contact.html"> Contact Us </a> </li> </ul> </li> </ul> </div> </nav></div> <div class="bd-sidebar__bottom"> <!-- To handle the deprecated key --> <div class="navbar_extra_footer"> Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a> </div> </div> </div> <div id="rtd-footer-container"></div> </div> <!-- A tiny helper pixel to detect if we've scrolled --> <div class="sbt-scroll-pixel-helper"></div> <!-- Main content --> <div class="col py-0 content-container"> <div class="header-article row sticky-top noprint"> <div class="col py-1 d-flex header-article-main"> <div class="header-article__left"> <label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation"> <span class="headerbtn__icon-container"> <i class="fas fa-bars"></i> </span> </label> </div> <div class="header-article__right"> <button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode"> <span class="headerbtn__icon-container"> <i class="fas fa-expand"></i> </span> </button> <div class="menu-dropdown menu-dropdown-repository-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories"> <i class="fab fa-github"></i> 
</button> <div class="menu-dropdown__content"> <ul> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository"> <span class="headerbtn__icon-container"> <i class="fab fa-github"></i> </span> <span class="headerbtn__text-container">repository</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fgeneral/faq/onnx-faq.html&amp;body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue"> <span class="headerbtn__icon-container"> <i class="fas fa-lightbulb"></i> </span> <span class="headerbtn__text-container">open issue</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/general/faq/onnx-faq.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page"> <span class="headerbtn__icon-container"> <i class="fas fa-pencil-alt"></i> </span> <span class="headerbtn__text-container">suggest edit</span> </a> </li> </ul> </div> </div> <div class="menu-dropdown menu-dropdown-download-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Download this page"> <i class="fas fa-download"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="../../_sources/general/faq/onnx-faq.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file"> <span class="headerbtn__icon-container"> <i class="fas fa-file"></i> </span> <span class="headerbtn__text-container">.rst</span> </a> </li> <li> <button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF"> <span class="headerbtn__icon-container"> <i class="fas fa-file-pdf"></i> </span> <span class="headerbtn__text-container">.pdf</span> 
</button> </li> </ul> </div> </div> <label for="__page-toc" class="headerbtn headerbtn-page-toc"> <span class="headerbtn__icon-container"> <i class="fas fa-list"></i> </span> </label> </div> </div> <!-- Table of contents --> <div class="col-md-3 bd-toc show noprint"> <div class="tocsection onthispage pt-5 pb-3"> <i class="fas fa-list"></i> Contents </div> <nav id="bd-toc-nav" aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry active"> <a class="reference internal nav-link active" href="#can-i-use-onnx-models-with-neuron-if-not-what-should-i-do"> Can I use ONNX models with Neuron ? If not, what should I do? </a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>ONNX FAQ</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#can-i-use-onnx-models-with-neuron-if-not-what-should-i-do"> Can I use ONNX models with Neuron ? If not, what should I do? 
</a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> <div class="section" id="onnx-faq"> <span id="id1"></span><h1>ONNX FAQ<a class="headerlink" href="#onnx-faq" title="Permalink to this headline">#</a></h1> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#can-i-use-onnx-models-with-neuron-if-not-what-should-i-do" id="id2">Can I use ONNX models with Neuron ? If not, what should I do?</a></p></li> </ul> </div> <div class="section" id="can-i-use-onnx-models-with-neuron-if-not-what-should-i-do"> <h2><a class="toc-backref" href="#id2">Can I use ONNX models with Neuron ? If not, what should I do?</a><a class="headerlink" href="#can-i-use-onnx-models-with-neuron-if-not-what-should-i-do" title="Permalink to this headline">#</a></h2> <p>AWS Neuron does not directly support compilation of models in the ONNX file format. The recommended way to compile a model that is in the ONNX file format is to first convert the model to PyTorch using a publicly available tool like <a class="reference external" href="https://github.com/ToriML/onnx2pytorch">onnx2pytorch</a> . 
Once the ONNX model is converted to PyTorch, it can then be compiled with the <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-compilation-python-api.html#torch_neuron.trace" title="torch_neuron.trace"><code class="xref py py-func docutils literal notranslate"><span class="pre">torch_neuron.trace()</span></code></a> function to produce a model that can run on Neuron.</p> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
2023-09-29T20:55:21.458Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/roadmap-readme.rst.txt
``` .. _neuron_roadmap: Roadmap ======= The AWS Neuron feature roadmap provides visibility into the functional and performance improvements we are working on in the near future. We hope this will help you better plan how to use Neuron with your products. We would also love to get our customers’ feedback, to help us ensure we are working on the most important requests. .. toctree:: :maxdepth: 1 Neuron Public Roadmap <https://github.com/orgs/aws-neuron/projects/1/views/1> roadmap-faq ```
2023-09-29T20:55:21.609Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/sdk-policy.rst.txt
``` .. _sdk-maintenance-policy: .. _neuron-maintenance-policy: SDK Maintenance Policy ====================== .. contents:: Table of Contents :local: :depth: 2 Overview -------- This document outlines the maintenance policy for the AWS Neuron Software Development Kit (SDK) and its underlying dependencies. AWS regularly provides the Neuron SDK with updates that may contain support for new or updated APIs, new features, enhancements, bug fixes, security patches, or documentation updates. Updates may also address changes in dependencies, language runtimes, and operating systems. Neuron SDK releases are available as Conda (up to :ref:`Neuron 1.13.0 <eol-conda-packages>`) and pip packages that can be installed within Amazon Machine Images (AMIs). We recommend that users stay up to date with SDK releases to keep up with the latest features, security updates, and underlying dependencies. Continued use of an unsupported SDK version is not recommended and is done at the user’s discretion. Neuron SDK ---------- AWS Neuron is the SDK for `AWS Inferentia <https://aws.amazon.com/machine-learning/inferentia/>`__, the custom-designed machine learning chips enabling high-performance deep learning inference applications on `EC2 Inf1 instances <https://aws.amazon.com/ec2/instance-types/inf1/>`__. Neuron includes a deep learning compiler, runtime, and tools that are natively integrated into TensorFlow, PyTorch, and MXNet. With Neuron, you can develop, profile, and deploy high-performance inference applications on top of `EC2 Inf1 instances <https://aws.amazon.com/ec2/instance-types/inf1/>`__. Neuron SDK release versions take the form X.Y.Z, where X represents the major version and Y represents the minor version. Increasing the major version of the SDK indicates significant and substantial changes, some of which may not maintain the same programming model.
Increasing the minor version of the SDK indicates the addition of new features, support for new dependency software versions, end-of-support for certain dependency software, enhancements, and/or bug fixes. Applications may need to be updated in order to work with the newest SDK version. It is important to update major versions carefully and in accordance with the upgrade guidelines provided by AWS. Dependency Software ------------------- The Neuron SDK has underlying dependencies, such as language runtimes, operating systems, or third-party libraries and machine learning frameworks. These dependencies are typically tied to the language community or the vendor who owns that particular component. The following terms are used to classify underlying dependencies: * Operating system (OS): Examples include Amazon Linux AMI, Amazon Linux 2. * Language runtime: Examples include Python. * Third-party library / framework: Examples include PyTorch, TensorFlow, MXNet, and ONNX. Each community or vendor maintains their own versioning policy and publishes their own end-of-support schedule for their product. Neuron SDK version life-cycle ----------------------------- The life-cycle of a Neuron SDK version consists of 3 phases, which are outlined below. - **Supported (Phase 1)** During this phase, AWS will provide critical bug fixes and security patches. AWS will usually support each Neuron SDK version for at least 12 months, but reserves the right to stop supporting an SDK version before the end of the 12-month period. .. note:: AWS will address new features or Dependency Software updates by publishing a new version with an increment in the Neuron SDK minor version. - **End-of-Support Announcement (Phase 2)** AWS will announce the End-of-Support phase at least 3 months before a specific Neuron SDK version enters the End-of-Support phase. During this period, the SDK will continue to be supported.
- **End-of-Support (Phase 3)** When a Neuron SDK version reaches end-of-support, it will no longer receive critical bug fixes and security patches. Previously published Neuron SDK versions will continue to be available via Conda (up to :ref:`Neuron 1.13.0 <eol-conda-packages>`) or pip packages. Use of an SDK version which has reached end-of-support is done at the user’s discretion. We recommend that users upgrade to the latest Neuron SDK version. Dependency Software version life-cycle -------------------------------------- The life-cycle of a Dependency Software version consists of 4 phases, but there may not be a Phase 3 (Maintenance) period in some cases. The phases are outlined below. - **Supported (Phase 1)** During this phase, the Dependency Software version is supported. AWS will provide regular updates, bug fixes, and/or security patches for the Dependency Software version; AWS will address those updates and bug fixes by including them in a new Neuron SDK version with an increment in the Neuron SDK minor version. There is no minimum support period for a Dependency Software version. - **Maintenance and/or End-of-Support Announcement (Phase 2)** AWS will announce the Maintenance phase or the End-of-Support phase of a Dependency Software version. Since each community or vendor maintains their own versioning policy and publishes their own end-of-support schedule for their product, there is no minimum notice period before a Dependency Software version enters the Maintenance or End-of-Support phase, and in some cases the announcement can happen at the same time the Dependency Software version enters that phase. During this period, the Dependency Software version will continue to be supported. - **Maintenance (Phase 3)** During the maintenance phase, AWS limits updates to a Dependency Software version to critical bug fixes and security issues only. There is no minimum Maintenance period.
This phase is optional, and AWS reserves the right to skip it for specific Dependency Software products. - **End-of-Support (Phase 4)** When a Dependency Software version reaches end-of-support, it will no longer receive updates, critical bug fixes, or security patches. Previously published Dependency Software versions will continue to be available via Neuron SDK Conda (up to :ref:`Neuron 1.13.0 <eol-conda-packages>`) or pip packages. Use of a Dependency Software version which has reached end-of-support is done at the user’s discretion. We recommend that users upgrade to the latest Neuron SDK version, which includes the latest Dependency Software versions. .. note:: AWS reserves the right to stop support for an underlying dependency without a maintenance phase. Communication ------------- Maintenance and End-of-Support announcements are communicated as follows: * Neuron SDK documentation. To see the list of available Neuron SDK versions and supported Dependency Software versions, see :ref:`neuron-release-content` and :ref:`neuron-whatsnew` in the latest Neuron version. Licenses -------- The license files for the Neuron SDK packages are located in the installation directories. For DEB or RPM/YUM packages, first follow the Neuron SDK setup instructions to install the packages, then run: .. code:: bash # The following command assumes you have already installed DEB or RPM/YUM packages per Neuron SDK setup instructions if [ $USER == "ubuntu" ]; then sudo dpkg -L $(sudo dpkg-query -f '${binary:Package}\n' -W | grep neuron) | grep -i license; else rpm -ql $(rpm -qa | grep neuron) | grep -i license; fi Example output: ..
code:: bash /usr/share/doc/aws-neuronx-tools/LICENSE.txt /usr/share/doc/aws-neuronx-tools/THIRD-PARTY-LICENSES.txt /usr/share/doc/aws-neuronx-oci-hook/LICENSE.txt /usr/share/doc/aws-neuronx-oci-hook/THIRD-PARTY-LICENSES.txt /usr/share/doc/aws-neuronx-collectives/LICENSE.txt /usr/share/doc/aws-neuronx-runtime-lib/LICENSE.txt /usr/src/aws-neuronx-2.7.33.0/LICENSE For the Python packages, you can see the locations of the licenses in the site-packages directory of the Python environment using the following commands: .. code:: bash # The following installation instructions are only for checking licenses, not for development or deployment. # See the Neuron SDK setup instructions for proper development or deployment setups. python -m venv check_license_venv source check_license_venv/bin/activate pip install -U pip python -m pip config set global.extra-index-url "https://pip.repos.neuron.amazonaws.com" python -m pip install neuron-cc neuronx-cc torch-neuron torch-neuronx tensorflow-neuron tensorflow-neuronx tensorboard-plugin-neuron tensorboard-plugin-neuronx mx_neuron ls $VIRTUAL_ENV/lib/python*/site-packages/{libneuronxla,torch_xla,torch_neuron,tensorflow_neuron,tensorboard_plugin_neuron,mx_neuron,neuron}*/*LICENSE* Example output: ..
code:: bash /home/ec2-user/test_venv/lib/python3.7/site-packages/libneuronxla/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/libneuronxla/THIRD-PARTY-LICENSES.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/mx_neuron/THIRD-PARTY-LICENSES.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/neuron_cc-1.14.3.0+adaa2ac56.dist-info/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/neuronx_cc-2.5.0.28+1be23f232.dist-info/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/neuronx_hwm-2.5.0.0+dad732dd6.dist-info/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/tensorboard_plugin_neuron/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/tensorboard_plugin_neuron/THIRD-PARTY-LICENSES.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/tensorboard_plugin_neuronx/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/tensorboard_plugin_neuronx/THIRD-PARTY-LICENSES.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/tensorflow_neuron/LICENSE /home/ec2-user/test_venv/lib/python3.7/site-packages/tensorflow_neuron/THIRD-PARTY-LICENSES.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/tensorflow_neuronx/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/torch_neuron-1.13.1.2.6.5.0.dist-info/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/torch_neuronx/LICENSE.txt /home/ec2-user/test_venv/lib/python3.7/site-packages/torch_xla-1.13.0+torchneuron5.dist-info/LICENSE Neuron documentation, samples and tools packages on GitHub licenses are available in the respective GitHub repositories: https://github.com/aws-neuron/aws-neuron-sdk/blob/master/LICENSE-DOCUMENTATION https://github.com/aws-neuron/transformers-neuronx/blob/master/LICENSE https://github.com/aws-neuron/aws-neuron-samples/blob/master/LICENSE https://github.com/aws-neuron/aws-neuron-sdk/blob/master/src/neuronperf/LICENSE 
https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm/blob/master/LICENSE https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/LICENSE https://github.com/aws-neuron/aws-neuron-tensorflow/blob/master/LICENSE https://github.com/aws-neuron/aws-neuron-tensorflow/blob/master/THIRD-PARTY-LICENSES.txt https://github.com/aws-neuron/neuronx-nemo-megatron/blob/main/THIRD-PARTY-LICENSES ```
2023-09-29T20:55:21.620Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/release.rst.txt
```
.. _latest-release:

Release Details
===============

Latest Release
---------------

* :ref:`What's New <latest-neuron-release>`
* :ref:`Release Artifacts <latest-neuron-release-artifacts>`

Previous Releases
-----------------

* :ref:`prev-rn`
* :ref:`pre-release-content`
* :ref:`prev-n1-rn`
```
2023-09-29T20:55:21.640Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/contact.rst.txt
```
.. _contact-us:

Contact Us
==========

For support, please check out the `GitHub issues <https://github.com/aws/aws-neuron-sdk/issues>`__ or the `Neuron AWS forums <https://forums.aws.amazon.com/forum.jspa?forumID=355>`__ for an answer. If none of those resources have an answer to your question, please open a ticket.

If you have an urgent need for a feature, you can also contact us directly at [email protected].
```
2023-09-29T20:55:21.647Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.rst.txt
```
.. _neuron_llm_inference:

Generative LLM inference with Neuron
====================================

.. contents:: Table of contents
   :local:
   :depth: 2

Background
----------

Large Language Models (LLMs) generate human-like text through a process known as generative inference. Fundamentally, generative LLM inference produces text outputs from an input prompt by iteratively predicting the next token in a sequence. These models typically take a sequence of integers as input, which represent a sequence of tokens (words/subwords), and generate a prediction for the next token to be emitted. Below is a simple example that illustrates this in code:

.. code-block:: python

   # Vocabulary of tokens the model can parse. The position of each token in the
   # vocabulary is used as the token_id (an integer representing that token)
   vocab = ["having", "I", "fun", "am", "learning", ".", "Neuron"]

   # input token_ids: list of integers that represent the input tokens, in this
   # case: "I", "am", "having", "fun"
   input_token_ids = [1, 3, 0, 2]

   # The LLM gets a vector of input token_ids, and generates a probability-distribution
   # for what the output token_id should be (with a probability score for each token_id
   # in the vocabulary)
   output = LLM(input_token_ids)

   # By taking argmax on the output, we effectively perform 'greedy sampling',
   # i.e. we choose the token_id with the highest probability. Other sampling techniques
   # also exist, e.g. Top-K. By choosing a probabilistic sampling method we enable the model
   # to generate different outputs when called multiple times with the same input.
   next_token_id = np.argmax(output)

   # map the token_id back into an output token
   next_token = vocab[next_token_id]

To generate full sentences, the application iteratively invokes the LLM to generate the next token's prediction, and at each iteration we append the predicted token back into the input:

.. code-block:: python

   def generate(input_token_ids, n_tokens_to_generate):
       for _ in range(n_tokens_to_generate):      # decode loop
           output = LLM(input_token_ids)          # model forward pass
           next_token_id = np.argmax(output)      # greedy sampling
           if next_token_id == EOS_TOK_ID:
               break                              # stop on End Of Sentence (EOS)
           # append the prediction to the input, and continue to the next token
           input_token_ids.append(int(next_token_id))
       return input_token_ids[-n_tokens_to_generate:]  # only return generated token_ids

   input_token_ids = [1, 3]                             # "I" "am"
   output_token_ids = generate(input_token_ids, 4)      # output_token_ids = [0, 2, 4, 6]
   output_tokens = [vocab[i] for i in output_token_ids] # "having" "fun" "learning" "Neuron"

This process of predicting a future value (regression) and adding it back into the input (auto) is sometimes referred to as autoregression. For more details, Jay Mody's `GPT in 60 Lines of NumPy <https://jaykmody.com/blog/gpt-from-scratch/>`__ is an excellent writeup on GPTs (Generative Pre-trained Transformers).

Performance optimizations
-------------------------

The sheer size of state-of-the-art LLMs, as well as the sequential nature of text generation, poses multiple challenges for efficient generative LLM deployment. First, the model is typically sharded across multiple devices in order to fit it in device memory, which creates communication overhead and complexity among the devices. Second, certain deployments have strict application-level latency bounds, and hence require substantial latency optimizations, which is especially challenging due to the sequential nature of token-by-token generation. Finally, generating one token at a time often leads to poor device utilization due to low arithmetic intensity, which can be improved via batching (see :ref:`what_batch_size_to_use`).
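The greedy ``np.argmax`` sampling used in the Background example can be swapped for a probabilistic method such as Top-K, which lets repeated calls with the same input produce different outputs. The sketch below is illustrative only; the ``top_k_sample`` helper and its defaults are ours, not part of the Neuron SDK:

.. code-block:: python

   import numpy as np

   def top_k_sample(logits, k=3, temperature=1.0, rng=None):
       if rng is None:
           rng = np.random.default_rng()
       logits = np.asarray(logits, dtype=float)
       # keep only the k highest-scoring token_ids
       top_ids = np.argsort(logits)[-k:]
       # softmax over the surviving scores; temperature sharpens (<1) or flattens (>1) it
       scores = logits[top_ids] / temperature
       probs = np.exp(scores - scores.max())
       probs /= probs.sum()
       # draw one token_id from the renormalized distribution
       return int(rng.choice(top_ids, p=probs))

   # with k=1 this degenerates to greedy sampling (argmax)
   next_token_id = top_k_sample([0.1, 2.0, 0.5], k=1)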
The Neuron SDK provides several built-in optimizations to help you extract optimal performance when deploying LLM models, including:

KV-caching:
^^^^^^^^^^^

The `transformers-neuronx <https://github.com/aws-neuron/transformers-neuronx>`__ library implements the KV-cache optimization, which saves compute resources by reusing previously calculated SelfAttention key-value pairs, instead of recalculating them for each generated token.

To illustrate this concept, we show the inner workings of the MaskedSelfAttention operator in the figure below. At each token generation step, the Query vector of the single current token is multiplied by the Key vectors of all previous tokens in the sequence to create attention scores, and the scores are further multiplied by the Value vectors of all previous tokens.

.. image:: /images/masked-self-attention-operator.png

The core idea behind this optimization is that instead of re-computing the Key and Value vectors for all previous tokens at each token generation step, Neuron can perform only the incremental computation for the current token and re-use previously computed Key/Value vectors from the KV-cache. The Key/Value vector of the current token is also appended to the KV-cache for the next token generation step.

.. image:: /images/kv-cache-optimization.png

As a final observation, one should note that the first token in the output sequence is unique in two ways:

.. container::

   - There's no KV-cache available at that point
   - Neuron needs to compute the entire KV-cache for <input_len> tokens (the input prompt), rather than one incremental KV-cache entry

This means that first-token latency is typically going to be higher than that of the following tokens.

Model sharding:
^^^^^^^^^^^^^^^

Neuron enables you to shard the model across devices via Tensor Parallelism, Pipeline Parallelism (coming soon), or a combination of the two (coming soon).
Tensor Parallelism shards each layer across multiple devices, and allows you to achieve optimal latency. Pipeline Parallelism places different layers on different devices and creates a pipeline between them (as the name suggests), and is mostly useful when optimizing throughput and/or cost-per-inference. To find the optimal Tensor/Pipeline parallelism configuration for your model, see the :ref:`model_partitioning` section.

Computation/communication overlap:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Neuron compiler automatically fuses Collective Communication primitives (e.g. AllReduce) with the following computation (e.g. GEMM) in the compute graph. This helps to minimize any overhead caused by sharding the model across devices.

Compact data-types:
^^^^^^^^^^^^^^^^^^^

Neuron supports INT8 (coming soon) and FP8 (coming soon), which can significantly reduce the memory bandwidth and capacity requirements of the model. This is especially useful for generative LLM inference, which is typically memory bound; a compact data-type can therefore improve overall LLM inference performance, with lower latency and higher throughput.

Bucketing:
^^^^^^^^^^

The transformers-neuronx library automatically uses bucketing to process the input prompt and output tokens. Bucketing makes it possible to handle variable sequence lengths without requiring support for dynamic shapes. We use multiple progressively larger buckets to help minimize the portion of the KV-cache that we need to read for each token.

.. _model_partitioning:

Model partitioning
------------------

How many NeuronCores do I need?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Transformer models are typically defined via a hyper-parameter configuration, such as the one below:

.. code-block:: python

   {
     "n_vocab": 50257, # number of tokens in our vocabulary
     "n_ctx": 2048,    # maximum possible sequence length of the input
     "n_embd": 9216,   # embedding dimension (determines the "width" of the network)
     "n_head": 72,     # number of attention heads (n_embd must be divisible by n_head)
     "n_layer": 64     # number of layers (determines the "depth" of the network)
   }

To determine the number of NeuronCores needed to fit the model, we perform the following calculation:

.. code-block:: python

   weight_mem_footprint = 12 x <n_layer> x <n_embd>^2 x <dtype-size>
   KV_cache_mem_footprint = <batch-size> x <n_layer> x <n_ctx> x <n_embd> x 2 x <dtype-size>
   # <dtype-size> is 2 for BF16/FP16, or 1 for FP8/INT8
   mem_footprint = weight_mem_footprint + KV_cache_mem_footprint

And from here, determining the number of NeuronCores is straightforward:

.. code-block:: python

   num_neuron_cores = ceil_to_closest_supported_size(mem_footprint / <NC-HBM-capacity>, <instance-type>)
   # 16GiB per Inferentia2/Trainium1 NeuronCore

As an example, for running OPT-66B on Inf2 with a batch-size of 16, the number of required NeuronCores can be computed as below:

.. code-block:: python

   # OPT-66B example (BF16, Inf2)
   # n_layer=64, n_ctx=2048, n_embd=9216, batch=16
   weight_mem_footprint = 12 x 64 x 9216^2 x 2 = 121.5 GiB
   KV_cache_mem_footprint = 16 x 64 x 2048 x 9216 x 2 x 2 = 72 GiB
   mem_footprint = 121.5GiB + 72GiB = 193.5 GiB

   num_neuron_cores = ceil_to_closest_supported_size(193.5GiB / 16GiB, Inf2)
                    = ceil_to_closest_supported_size(12.1) = 24

   ## Currently, the Neuron runtime supports tensor-parallelism degrees 2, 8, and 32 on Trn1
   ## and supports tensor-parallelism degrees 2, 4, 8, 12 and 24 on Inf2.

You can use the :ref:`neuron_calculator` to compute the number of cores needed for a custom hyper-parameter configuration.

Which parallelism technique should I use?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tensor parallelism improves latency, at the expense of increased intra-layer communication. Thus, as a general rule, we advise using the smallest tensor-parallelism degree that meets your latency requirement, and then using pipeline/data parallelism from that point on.

If latency is not a main concern in your application (e.g. model evaluation), and the primary goal is to maximize throughput (i.e., minimize total cost per token), then it is most efficient to use pipeline parallelism and increase the batch-size as much as possible.

.. _what_batch_size_to_use:

What batch-size should I use?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Due to the serial token-generation nature of generative LLM inference, this workload tends to be extremely memory bound. This means that throughput (and thus cost per inference) will improve significantly with batching. As a general rule, we recommend increasing the batch-size to the maximum amount that fits within the latency budget (up to batch=256; a larger batch-size beyond that typically does not help with performance).

Note that the KV-cache grows linearly with the batch-size, and can grow to the point of running out of memory (typically referred to as OOM). If the latency budget allows, we recommend growing the batch-size to the maximum value that doesn't result in OOM. Users may also consider pipelining the model beyond what's necessary to fit model parameters / KV-cache on devices, in order to free up device-memory space and thus allow the batch-size to grow higher without causing OOM issues.
```
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuron_llm_inference: Generative LLM inference with Neuron ==================================== .. contents:: Table of contents :local: :depth: 2 Background ---------- Large Language Models (LLMs) generate human-like text through a process known as generative inference. Fundamentally, generative LLM inference generates text outputs given an input prompt, by iteratively predicting the next token in a sequence. These models typically take a sequence of integers as input, which represent a sequence of tokens (words/subwords), and generate a prediction for the next token to be emitted. Below is a simple example that illustrates this in code: .. code-block:: python # Vocabulary of tokens the model can parse. The position of each token in the # vocabulary is used as the token_id (an integer representing that token) vocab = ["having", "I", "fun", "am", "learning", ".", "Neuron"] # input token_ids: list of integers that&nbsp;represent the input tokens in this # case: "I", "am", "having", "fun" input_token_ids = [1, 3, 0, 2] # The LLM gets a vector of input token_ids, and generates a probability-distribution # for what the output token_id should be (with a probability score for each token_id # in the vocabulary) output = LLM(input_token_ids) # by taking argmax on the output, we effectively perform a 'greedy sampling' process, # i.e. we&nbsp;choose the token_id with the highest probability. Other sampling techniques # also exist, e.g. Top-K. By choosing a probabilistic sampling method we enable the model # to generate different outputs when called multiple times with the same input. 
next_token_id = np.argmax(output) # map the token_id back into an output token next_token = vocab[next_token_id] To generate full sentences, the application iteratively invokes the LLM to generate the next token's prediction, and at each iteration we append the predicted token back into the input: .. code-block:: python def generate(input_token_ids, n_tokens_to_generate): for _ in range(n_tokens_to_generate): # decode loop output = LLM(input_token_ids) # model forward pass next_token_id = np.argmax(output) # greedy sampling if (next_token_id == EOS_TOK_ID) break # break if generated End Of Sentence (EOS) # append the prediction to the input, and continue to the next out_token input_token_ids.append(int(next_token_id)) return input_token_ids[-n_tokens_to_generate :] # only return generated token_ids input_token_ids = [1, 3] # "I" "am" output_token_ids = generate(input_tokens_ids, 4) # output_token_ids = [0, 2, 4, 6] output_tokens = [vocab[i] for i in output_token_ids] # "having" "fun" "learning" “Neuron” This process, of predicting a future value (regression), and adding them back into the input (auto), is sometimes referred to as autoregression.&nbsp;For more details, Jay Mody’s \ `GPT in 60 Lines of NumPy &lt;https://jaykmody.com/blog/gpt-from-scratch/&gt;`__\ is an excellent writeup on GPTs (Generative Pre-trained Transformers). Performance optimizations ------------------------- The sheer size of state-of-the-art LLMs, as well as the sequential nature of text generation, poses multiple challenges for efficient generative LLM deployment. First, the model is typically sharded across multiple devices in order to fit the model in device memory, which creates communication overhead and complexity among devices. Second, certain deployments have strict application-level latency bounds and hence require substantial latency optimizations, which is especially challenging due to the sequential nature of token-by-token generation. 
Finally, generating one token at a time often leads to poor device utilization due to low arithmetic intensity, which can be improved via batching (see :ref:`what_batch_size_to_use`). The Neuron SDK provides several built-in optimizations, to allow you to extract the optimal performance when deploying LLM models, including: KV-caching: ^^^^^^^^^^^ The `transformers-neuronx &lt;https://github.com/aws-neuron/transformers-neuronx&gt;`__ library implements the KV-cache optimization, which saves compute resources by reusing previously calculated SelfAttention key-value pairs, instead of recalculating them for each generated token. To illustrate this concept, we show the inner workings of the MaskedSelfAttention operator in the figure below. At each token generation step, the Query vector of a single current token is multiplied by the Key vectors of all previous tokens in the sequence to create attention scores, and the scores are further multiplied by the Value vectors of all previous tokens. .. image:: /images/masked-self-attention-operator.png The core idea behind this optimization is that instead of re-computing the Key and Value vectors for all previous tokens at each token generation step, Neuron can perform only the incremental computation for the current token and re-use previously computed Key/Value vectors from the KV-cache. The Key/Value vector of the current token is also appended to the KV-cache for the next token generation step. .. image:: /images/kv-cache-optimization.png As a final observation, one should note that the first token in the output sequence is unique in two ways: .. container:: - There's no KV-cache available at that point - Neuron needs to compute the entire KV-cache for &lt;input_len&gt; tokens (the input prompt), rather than one incremental KV-cache entry This means that first-token latency is typically going to be higher than the following tokens. 
Model sharding: ^^^^^^^^^^^^^^^ Neuron enables you to shard the model across devices via Tensor Parallelism, Pipeline Parallelism (coming soon), or a combination of the two (coming soon). Tensor Parallelism shards each layer across multiple devices, and allows you to achieve the optimal latency. Pipeline Parallelism places different layers on different devices and creates a pipeline between them (as the name suggests), and is mostly useful when optimizing throughput and/or cost-per-inference. To find the optimal Tensor/Pipeline parallelism configuration for your model, see the :ref:`model_partitioning` section. Computation/communication overlap: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The Neuron compiler automatically fuses Collective Communication primitives (e.g. AllReduce) with the following computation (e.g. GEMM) in the compute graph. This helps to minimize any overhead caused by sharding the model across devices. Compact data-types: ^^^^^^^^^^^^^^^^^^^ Neuron supports INT8 (coming soon) and FP8 (coming soon), which can significantly reduce memory bandwidth and capacity requirements of the model. This is especially useful for Generative LLM inference which is typically memory bound. Therefore, using a compact data-type can improve the overall LLM inference performance with lower latency and higher throughput. Bucketing: ^^^^^^^^^^ The transformers-neuronx library automatically uses bucketing to process the input prompt and output tokens. Bucketing makes it possible to handle variable sequence lengths without requiring support for dynamic shapes. We use multiple progressively larger buckets to help minimize the portion of the KV-cache that we need to read for each token. .. _model_partitioning: Model partitioning ------------------ How many NeuronCores do I need? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Transformer models are typically defined via a hyper-parameter configuration, such as the below: .. 
code-block:: python { "n_vocab": 50257, # number of tokens in our vocabulary "n_ctx": 2048, # maximum possible sequence length of the input "n_embd": 9216, # embedding dimension (determines the "width" of the network) "n_head": 72, # number of attention heads (n_embd must be divisible by n_head) "n_layer": 64 # number of layers (determines the "depth" of the network) } To determine the number of NeuronCores needed to fit the model, we perform the following calculation: .. code-block:: python weight_mem_footprint = 12 x <n_layer> x <n_embd>^2 x <dtype-size> KV_cache_mem_footprint = <batch-size> x <n_layer> x <n_ctx> x <n_embd> x 2 x <dtype-size> # <dtype-size> is 2 for BF16/FP16, or 1 for FP8/INT8 mem_footprint = weight_mem_footprint + KV_cache_mem_footprint And from here, determining the number of NeuronCores is straightforward: .. code-block:: python num_neuron_cores = ceil_to_closest_supported_size (mem_footprint / <NC-HBM-capacity>, <instance-type>) # 16GiB per Inferentia2/Trainium1 NeuronCore As an example, for OPT-66B on Inf2 with a batch size of 16, the number of required NeuronCores can be computed as below. .. code-block:: python # OPT-66B example (BF16, Inf2) # n_layer=64, n_ctx=2048, n_embd=9216, batch=16 weight_mem_footprint = 12 x 64 x 9216^2 x 2 = 121.5 GiB KV_cache_mem_footprint = 16 x 64 x 2048 x 9216 x 2 x 2 = 72 GiB mem_footprint = 121.5GiB + 72GiB = 193.5 GiB num_neuron_cores = ceil_to_closest_supported_size (193.5GiB / 16GiB, Inf2) = ceil_to_closest_supported_size (12.1) = 24 ## Currently, the Neuron runtime supports tensor-parallelism degrees 2, 8, and 32 on Trn1 ## and supports tensor-parallelism degrees 2, 4, 8, 12, and 24 on Inf2. You can use :ref:`neuron_calculator` to compute the number of cores needed for a custom hyper-parameter configuration. Which parallelism technique should I use?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Tensor parallelism improves latency, at the expense of increased intra-layer communication. Thus, as a general rule, we advise using the smallest tensor parallelism degree that meets your latency requirement, and then using pipeline/data parallelism from that point on. If latency is not a main concern in your application (e.g. model evaluation), and the primary goal is to maximize throughput (i.e., minimize total cost per token), then it is most efficient to use pipeline parallelism and increase the batch-size as much as possible. .. _what_batch_size_to_use: What batch-size should I use? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Due to the serial token generation nature of generative LLM inference, this workload tends to be extremely memory bound. This means that throughput (and thus cost per inference) will improve significantly with batching. As a general rule, we recommend increasing the batch-size to the maximum amount that fits within the latency budget (up to batch=256; larger batch-sizes beyond that typically do not help with performance). Note that the KV-cache grows linearly with the batch-size, and can grow to the point of running out of memory (typically referred to as OOM). If the latency budget allows, we recommend growing the batch-size to the maximum value that doesn’t result in OOM. Users may also consider pipelining the model beyond what’s necessary to fit model parameters / KV-cache on devices, in order to free up device-memory space and thus allow the batch-size to grow higher without causing OOM issues.
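The footprint formulas from the "How many NeuronCores do I need?" section above translate directly into a small, runnable calculator. This is a sketch using the document's own constants; the function and variable names are illustrative, and `ceil_to_closest_supported_size` is approximated by rounding up to a supported Inf2 tensor-parallelism degree:

```python
import math

def mem_footprint_gib(n_layer, n_embd, n_ctx, batch_size, dtype_size=2):
    """Weight + KV-cache footprint in GiB (dtype_size=2 for BF16/FP16)."""
    weights = 12 * n_layer * n_embd**2 * dtype_size
    kv_cache = batch_size * n_layer * n_ctx * n_embd * 2 * dtype_size
    return (weights + kv_cache) / 2**30

# OPT-66B numbers from the text (BF16): 121.5 GiB weights + 72 GiB KV-cache.
fp = mem_footprint_gib(n_layer=64, n_embd=9216, n_ctx=2048, batch_size=16)
assert round(fp, 1) == 193.5

# 16 GiB of HBM per Inferentia2/Trainium1 NeuronCore; round up to a
# supported Inf2 tensor-parallelism degree (2, 4, 8, 12, 24).
tp_degree = min(d for d in (2, 4, 8, 12, 24) if d >= math.ceil(fp / 16))
assert tp_degree == 24

# The KV-cache term grows linearly with batch size, which is what eventually
# causes OOM: doubling the batch adds another 72 GiB in this example.
fp_batch32 = mem_footprint_gib(n_layer=64, n_embd=9216, n_ctx=2048, batch_size=32)
assert round(fp_batch32 - fp, 1) == 72.0
```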
2023-09-29T20:55:21.713Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/perf/neuron-cc/parallel-ncgs.rst.txt
``` .. _parallel-exec-ncgs: Parallel Execution using NEURON_RT_NUM_CORES =============================================== .. important :: ``NEURONCORE_GROUP_SIZES`` will no longer be supported starting with the Neuron 1.19.0 release. If your application uses ``NEURONCORE_GROUP_SIZES``, please see :ref:`neuron-migrating-apps-neuron-to-libnrt` and :ref:`eol-ncgs-env_2` for more details. Introduction ------------ Inf1 instances are available with different numbers of Inferentia chips. Each Inferentia chip contains 4 NeuronCores, and an Inf1 instance includes 4 to 64 NeuronCores depending on the instance size. This guide shows you how to load one or more compiled models into different consecutive groups of NeuronCores using your framework of choice. Data Parallel Execution ----------------------- In PyTorch and TensorFlow, the same compiled model can run in parallel on an Inf1 instance by loading it multiple times, up to the total number of NeuronCores specified in NEURON_RT_NUM_CORES or NEURON_RT_VISIBLE_CORES. For more information about NEURON_RT_NUM_CORES and NEURON_RT_VISIBLE_CORES, please refer to :ref:`Neuron Runtime Configuration <nrt-configuration>`. Running multiple models using a single process ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To run multiple models using a single process, set the environment variable ``NEURON_RT_NUM_CORES`` to the total number of NeuronCores required by all groups. You can set the ``NEURON_RT_NUM_CORES`` environment variable at runtime: .. code :: bash #!/bin/bash NEURON_RT_NUM_CORES=13 python your_neuron_application.py Or from within the Python process running your models (NOTE: you can only set it once in the same process, at the beginning of the script): .. code :: python #!/usr/bin/env python import os # Set Environment os.environ['NEURON_RT_NUM_CORES']='13' # Load models and run inferences ... The below examples allow you to load 4 models into 4 groups of NeuronCores within one process.
For example, if there are 4 models A, B, C, D compiled to 2, 4, 3, and 4 NeuronCores respectively, you directly load the models A, B, C, D in sequence within your TensorFlow or PyTorch Neuron process. This example requires an inf1.6xlarge instance with 16 NeuronCores, as the total number of NeuronCores within the NeuronCore Groups is 13. In MXNet, the mapping from models to NeuronCores is controlled by the context ``mx.neuron(neuron_core_index)``, where ``neuron_core_index`` is the NeuronCore index at the start of the group. In the example above, you map model A to the ``mx.neuron(0)`` context, model B to the ``mx.neuron(2)`` context, model C to the ``mx.neuron(6)`` context and model D to the ``mx.neuron(9)`` context. For further details, please refer to :ref:`Flexible Execution Group (FlexEG) in Neuron-MXNet<flexeg>`. For PyTorch For an automated data parallel solution in PyTorch, please see :ref:`Data Parallel Inference on Torch Neuron<torch-neuron-dataparallel-app-note>` for more details. For TensorFlow .. code :: python # Set Environment os.environ['NEURON_RT_NUM_CORES']='13' # Load models (TF2) model0 = tf.keras.models.load_model(model0_file) # loaded into the first group of NC0-NC1 model1 = tf.keras.models.load_model(model1_file) # loaded into the second group of NC2-NC5 model2 = tf.keras.models.load_model(model2_file) # loaded into the third group of NC6-NC8 model3 = tf.keras.models.load_model(model3_file) # loaded into the fourth group of NC9-NC12 # run inference by simply calling the loaded model results0 = model0(inputs0) results1 = model1(inputs1) results2 = model2(inputs2) results3 = model3(inputs3) For MXNet 2.x: ..
code :: python # Set Environment os.environ['NEURON_RT_NUM_CORES']='13' # Load models (MXNet) # loaded into the first group of NC0-NC1 sym, args, aux = mx.model.load_checkpoint(mx_model0_file, 0) model0 = sym.bind(ctx=mx.neuron(0), args=args, aux_states=aux, grad_req='null') # loaded into the second group of NC2-NC5 sym, args, aux = mx.model.load_checkpoint(mx_model1_file, 0) model1 = sym.bind(ctx=mx.neuron(2), args=args, aux_states=aux, grad_req='null') # loaded into the third group of NC6-NC8 sym, args, aux = mx.model.load_checkpoint(mx_model2_file, 0) model2 = sym.bind(ctx=mx.neuron(6), args=args, aux_states=aux, grad_req='null') # loaded into the fourth group of NC9-NC12 sym, args, aux = mx.model.load_checkpoint(mx_model3_file, 0) model3 = sym.bind(ctx=mx.neuron(9), args=args, aux_states=aux, grad_req='null') # run inference by simply calling the loaded model results0 = model0.forward(data=inputs0) results1 = model1.forward(data=inputs1) results2 = model2.forward(data=inputs2) results3 = model3.forward(data=inputs3) You can identify the NeuronCores used by each application using the ``neuron-top`` command line tool. For more information about the neuron-top user interface, please see :ref:`Neuron Top User Guide <neuron-top-ug>`. .. code :: bash $ neuron-top .. figure:: /images/multi_1core_models_multi_processes.png :scale: 80 % Running multiple models using multiple processes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can also run multiple models in parallel processes when you set ``NEURON_RT_NUM_CORES`` per process: .. code :: bash $ NEURON_RT_NUM_CORES=2 python your_1st_neuron_application.py $ NEURON_RT_NUM_CORES=2 python your_2nd_neuron_application.py The first process automatically selects a first set of 2 unused NeuronCores for its new group. The second process automatically selects a new set of 2 unused NeuronCores for its new group. ..
figure:: /images/multi_2cores_models_multi_processes.png :scale: 80 % Running multiple models on the same NeuronCore group ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ You can load more than one model in a NeuronCore group within one process. The Neuron runtime will handle switching from one model to the next model within the NeuronCore group when the next model is run within the application. In TensorFlow or PyTorch, simply load the additional models after the initial number of models have been loaded, to fill the NeuronCore groups associated with the process. For PyTorch: .. code :: python # Set Environment os.environ['NEURON_RT_NUM_CORES']='2' # Load models (PT) model0 = torch.jit.load(model0_file) # loaded into the first group of NC0-NC1 model1 = torch.jit.load(model1_file) # loaded into the first group of NC0-NC1 # run inference by simply calling the loaded model results0 = model0(inputs0) results1 = model1(inputs1) For TensorFlow 2.x: .. code :: python # Set Environment os.environ['NEURON_RT_NUM_CORES']='2' # Load models (TF2) model0 = tf.keras.models.load_model(model0_file) # loaded into the first group of NC0-NC1 model1 = tf.keras.models.load_model(model1_file) # loaded into the first group of NC0-NC1 # run inference by simply calling the loaded model results0 = model0(inputs0) results1 = model1(inputs1) In MXNet, use context ``mx.neuron(neuron_core_index)`` and use the same NeuronCore start index for the additional models. .. 
code :: python # Set Environment os.environ['NEURON_RT_NUM_CORES']='2' # Load models (MXNet) # loaded into the first group of NC0-NC1 sym, args, aux = mx.model.load_checkpoint(mx_model0_file, 0) model0 = sym.bind(ctx=mx.neuron(0), args=args, aux_states=aux, grad_req='null') # loaded into the first group of NC0-NC1 sym, args, aux = mx.model.load_checkpoint(mx_model1_file, 0) model1 = sym.bind(ctx=mx.neuron(0), args=args, aux_states=aux, grad_req='null') # run inference by simply calling the loaded model results0 = model0.forward(data=inputs0) results1 = model1.forward(data=inputs1) The total ``NEURON_RT_NUM_CORES`` across all processes cannot exceed the number of NeuronCores available on the instance. For example, on an inf1.xlarge with default configurations, where the total number of NeuronCores visible to TensorFlow-Neuron is 4, you can launch one process with ``NEURON_RT_NUM_CORES=2`` (pipelined) and another process with ``NEURON_RT_NUM_CORES=2`` (data-parallel). Examples using ``NEURON_RT_NUM_CORES`` include: * :ref:`PyTorch example </src/examples/pytorch/resnet50.ipynb>` * :ref:`MXNet example </src/examples/mxnet/resnet50_neuroncore_groups.ipynb>` Auto Model Replication in TensorFlow Neuron (``tensorflow-neuron``) (Experimental) ---------------------------------------------------------------------------------- Please see the API documentation below to learn how to perform automatic replication on multiple cores. Note that automatic replication only works on models compiled with pipeline size 1, via ``--neuroncore-pipeline-cores=1``. If auto replication is not enabled, the model will default to replicate on up to 4 cores.
Python API (TF 2.x only): :ref:`tensorflow-ref-auto-replication-python-api` CLI API (TF 1.x and TF 2.x): :ref:`tensorflow-ref-auto-replication-cli-api` Auto Model Replication (Being Deprecated) ----------------------------------------- The Auto Model Replication feature in TensorFlow-Neuron enables you to load the model once; data parallel replication then happens automatically. This reduces framework memory usage, as you are not loading the same model multiple times. This feature is experimental and available in TensorFlow-Neuron only. To enable Auto Model Replication, set NEURONCORE_GROUP_SIZES to Nx1, where N is the desired replication count (the number of NeuronCore groups, each of size 1). For example, NEURONCORE_GROUP_SIZES=8x1 would automatically replicate the single-NeuronCore model 8 times. .. code :: python os.environ['NEURONCORE_GROUP_SIZES'] = '4x1' or .. code :: bash NEURONCORE_GROUP_SIZES=4x1 python3 application.py When NEURONCORE_GROUP_SIZES is not set, the default is 4x1, where a single-NeuronCore model is replicated 4 times on any size Inf1 instance. This feature is only available for models compiled with neuroncore-pipeline-cores set to 1 (the default). You will still need to use threads in the scaffolding code to feed the loaded replicated model instance in order to achieve high throughput. Example of auto model replication: :ref:`/src/examples/tensorflow/openpose_demo/openpose.ipynb` FAQ --- Can I mix data parallel and NeuronCore Pipeline? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Yes. You can compile the model using the neuroncore-pipeline-cores option. This tells the compiler to compile for the specified number of cores for :ref:`neuroncore-pipeline`. The Neuron Compiler will return a NEFF which fits within this limit. See the :ref:`neuron-compiler-cli-reference` on how to use this option.
For example, on an inf1.2xlarge, you can load two model instances, each compiled with neuroncore-pipeline-cores set to 2, so that they can run in parallel. The model instances can be loaded from different saved models or from the same saved model. Can I have a mix of multiple models in one NeuronCore group and a single model in another NeuronCore group? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Currently, you can do this in MXNet by setting up two NeuronCore groups, then loading, for example, multiple models in one NCG using context mx.neuron(0), and loading a single model in the second NCG using context mx.neuron(2). You can also load a single model in the first NCG and multiple models in the second NCG. For example: .. code :: python # Set Environment os.environ['NEURON_RT_NUM_CORES']='6' # Load models (MXNet) # loaded into the first group of NC0-NC1 sym, args, aux = mx.model.load_checkpoint(mx_model0_file, 0) model0 = sym.bind(ctx=mx.neuron(0), args=args, aux_states=aux, grad_req='null') # loaded into the second group of NC2-NC5 sym, args, aux = mx.model.load_checkpoint(mx_model1_file, 0) model1 = sym.bind(ctx=mx.neuron(2), args=args, aux_states=aux, grad_req='null') # loaded into the second group of NC2-NC5 sym, args, aux = mx.model.load_checkpoint(mx_model2_file, 0) model2 = sym.bind(ctx=mx.neuron(2), args=args, aux_states=aux, grad_req='null') # loaded into the second group of NC2-NC5 sym, args, aux = mx.model.load_checkpoint(mx_model3_file, 0) model3 = sym.bind(ctx=mx.neuron(2), args=args, aux_states=aux, grad_req='null') # run inference by simply calling the loaded model results0 = model0.forward(data=inputs0) results1 = model1.forward(data=inputs1) results2 = model2.forward(data=inputs2) results3 = model3.forward(data=inputs3) Loading multiple models in one NCG and a single model in another NCG is currently not supported in TensorFlow and PyTorch. ```
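The consecutive-group layout used throughout the examples above (models compiled to 2, 4, 3, and 4 NeuronCores starting at cores 0, 2, 6, and 9) can be computed with a small helper. This is a hypothetical sketch for illustration only, not part of the Neuron SDK:

```python
def group_start_indices(group_sizes, total_cores):
    """Return the starting NeuronCore index of each consecutive group,
    e.g. the index you would pass to mx.neuron(...) in MXNet."""
    if sum(group_sizes) > total_cores:
        raise ValueError("groups need %d cores, instance has only %d"
                         % (sum(group_sizes), total_cores))
    starts, next_core = [], 0
    for size in group_sizes:
        starts.append(next_core)  # this group begins at the next free core
        next_core += size
    return starts

# inf1.6xlarge exposes 16 NeuronCores; the four groups above use 13 of them.
assert group_start_indices([2, 4, 3, 4], total_cores=16) == [0, 2, 6, 9]
```

The check mirrors the rule stated above that the total across groups cannot exceed the NeuronCores available on the instance.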
2023-09-29T20:55:21.730Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/benchmarks/inf2/inf2-performance.rst.txt
``` .. _inf2-performance: Inf2 Performance ================ .. contents:: Table of contents :local: :depth: 1 *Last update: September 15th, 2023* .. _inf2_inference_perf: Language Models Inference Performance ------------------------------------- .. tab-set:: .. tab-item:: Throughput optimized .. df-table:: :header-rows: 1 df = pd.read_csv('throughput_data_language.csv') df_prices = pd.read_csv('inf2_instance_prices.csv') df = pd.merge(df,df_prices,on='Inst. Type') df['Cost per 1M inferences'] = ((1.0e6 / df['Throughput (inference/second)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format) cols_to_show = ['Model','Scripts','Framework', 'Inst. Type', 'Task', 'Throughput (inference/second)', 'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M inferences', 'Application Type', 'Neuron Version', 'Run Mode', 'Batch Size', 'Sequence Length', 'Model Data Type','Compilation Autocast Data Type', 'OS Type'] df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences']) df['Throughput (inference/second)'] = df['Throughput (inference/second)'].round(2).astype('float',copy=True) int_cols = ['Latency P50 (ms)', 'Latency P99 (ms)'] df[int_cols] = df[int_cols].round(2).astype('float',copy=True) .. tab-item:: Latency optimized .. df-table:: :header-rows: 1 df = pd.read_csv('latency_data_language.csv') df_prices = pd.read_csv('inf2_instance_prices.csv') df = pd.merge(df,df_prices,on='Inst. Type') df['Cost per 1M inferences'] = ((1.0e6 / df['Throughput (inference/second)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format) cols_to_show = ['Model','Scripts','Framework', 'Inst.
Type', 'Task', 'Throughput (inference/second)', 'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M inferences', 'Application Type', 'Neuron Version', 'Run Mode', 'Batch Size', 'Sequence Length', 'Model Data Type','Compilation Autocast Data Type', 'OS Type'] df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences']) df['Throughput (inference/second)'] = df['Throughput (inference/second)'].round(2).astype('float',copy=True) int_cols = ['Latency P50 (ms)', 'Latency P99 (ms)'] df[int_cols] = df[int_cols].round(2).astype('float',copy=True) Large Language Models Inference Performance ------------------------------------------- .. tab-set:: .. tab-item:: Throughput optimized .. df-table:: :header-rows: 1 df = pd.read_csv('throughput_data_LLM.csv') df_prices = pd.read_csv('inf2_instance_prices.csv') df = pd.merge(df,df_prices,on='Inst. Type') df['Cost per 1M inferences'] = ((1.0e6 / df['Throughput (tokens/second)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format) cols_to_show = ['Model','Scripts','Framework', 'Inst. Type', 'Task', 'Throughput (tokens/second)', 'Latency per Token P50 (ms)', 'Latency per Token P99 (ms)', 'Cost per 1M inferences', 'Application Type', 'Neuron Version', 'Run Mode', 'TP Degree', 'DP Degree', 'Batch Size', 'Sequence Length', 'Input Length', 'Output Length', 'Model Data Type','Compilation Autocast Data Type'] df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences']) df['Throughput (tokens/second)'] = df['Throughput (tokens/second)'].round(2).astype('float',copy=True) int_cols = ['Latency per Token P50 (ms)', 'Latency per Token P99 (ms)'] df[int_cols] = df[int_cols].round(2).astype('float',copy=True) .. note:: **Throughput (tokens/second)** counts both input and output tokens **Latency per Token** counts both input and output tokens .. tab-item:: Latency optimized .. 
df-table:: :header-rows: 1 df = pd.read_csv('latency_data_LLM.csv') df_prices = pd.read_csv('inf2_instance_prices.csv') df = pd.merge(df,df_prices,on='Inst. Type') df['Cost per 1M inferences'] = ((1.0e6 / df['Throughput (tokens/second)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format) cols_to_show = ['Model','Scripts','Framework', 'Inst. Type', 'Task', 'Throughput (tokens/second)', 'Latency per Token P50 (ms)', 'Latency per Token P99 (ms)', 'Cost per 1M inferences', 'Application Type', 'Neuron Version', 'Run Mode', 'TP Degree', 'DP Degree', 'Batch Size', 'Sequence Length', 'Input Length', 'Output Length', 'Model Data Type','Compilation Autocast Data Type'] df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences']) df['Throughput (tokens/second)'] = df['Throughput (tokens/second)'].round(2).astype('float',copy=True) int_cols = ['Latency per Token P50 (ms)', 'Latency per Token P99 (ms)'] df[int_cols] = df[int_cols].round(2).astype('float',copy=True) .. note:: **Throughput (tokens/second)** counts both input and output tokens **Latency per Token** counts both input and output tokens Vision Models Inference Performance --------------------- .. tab-set:: .. tab-item:: Throughput optimized .. df-table:: :header-rows: 1 df = pd.read_csv('throughput_data_vision.csv') df_prices = pd.read_csv('inf2_instance_prices.csv') df = pd.merge(df,df_prices,on='Inst. Type') df['Cost per 1M images'] = ((1.0e6 / df['Throughput (inference/sec)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format) cols_to_show = ['Model','Image Size','Scripts','Framework', 'Inst. 
Type', 'Task', 'Throughput (inference/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M images', 'Application Type', 'Neuron Version', 'Run Mode', 'Batch Size', 'Model Data Type','Compilation Autocast Data Type'] df = df[cols_to_show].sort_values(['Model', 'Image Size', 'Cost per 1M images']) df['Throughput (inference/sec)'] = df['Throughput (inference/sec)'].round(2).astype('float',copy=True) int_cols = ['Latency P50 (ms)', 'Latency P99 (ms)'] df[int_cols] = df[int_cols].round(2).astype('float',copy=True) .. note:: **Cost per 1M images** is calculated using On-Demand hourly rate. **Real Time** application refers to batch size 1 inference for minimal latency. **Batch** application refers to maximum throughput with minimum cost-per-inference. .. tab-item:: Latency optimized .. df-table:: :header-rows: 1 df = pd.read_csv('latency_data_vision.csv') df_prices = pd.read_csv('inf2_instance_prices.csv') df = pd.merge(df,df_prices,on='Inst. Type') df['Cost per 1M images'] = ((1.0e6 / df['Throughput (inference/sec)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format) cols_to_show = ['Model','Image Size','Scripts','Framework','Inst. Type','Task', 'Throughput (inference/sec)','Latency P50 (ms)','Latency P99 (ms)','Cost per 1M images','Application Type','Neuron Version','Run Mode','Batch Size','Model Data Type', 'Compilation Autocast Data Type'] df = df[cols_to_show].sort_values(['Model', 'Image Size', 'Cost per 1M images']) df['Throughput (inference/sec)'] = df['Throughput (inference/sec)'].round(2).astype('float',copy=True) int_cols = ['Latency P50 (ms)', 'Latency P99 (ms)'] df[int_cols] = df[int_cols].round(2).astype('float',copy=True) .. note:: **Cost per 1M images** is calculated using On-Demand hourly rate. **Real Time** application refers to batch size 1 inference for minimal latency. **Batch** application refers to maximum throughput with minimum cost-per-inference. .. note:: See :ref:`neuron_hw_glossary` for abbreviations and terms ```
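Every ``df-table`` snippet above derives its cost column the same way: dollars per million inferences (or images) is the time to serve one million requests at the measured throughput, multiplied by the per-second instance price. A standalone sketch of that arithmetic; the throughput and hourly-rate figures below are illustrative stand-ins, not measured values or quoted prices:

```python
# Cost metric used in the tables above:
#   cost per 1M inferences = (1e6 / throughput) * (hourly rate / 3600)
def cost_per_million(throughput_per_sec, on_demand_hourly_rate):
    """Dollars to serve one million requests at a sustained throughput."""
    return (1.0e6 / throughput_per_sec) * (on_demand_hourly_rate / 3.6e3)

# Illustrative only: 1000 inferences/sec on an instance billed at $0.76/hour.
print('${:,.3f}'.format(cost_per_million(1000, 0.76)))  # $0.211
```

At 1000 inferences/sec, one million inferences take 1000 seconds, so the cost is 1000 seconds of instance time at the per-second rate.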
2023-09-29T20:55:21.744Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/announce-eos-pytorch-1-9.rst.txt
```
.. post:: August 28, 2023
   :language: en
   :tags: announce-eol, torch-neuron

.. _announce-eol-pytorch19:

Announcing end of support for ``torch-neuron`` version 1.9
----------------------------------------------------------

:ref:`Neuron release 2.13 <neuron-2.13.0-whatsnew>` will be the last release to include support for ``torch-neuron`` version 1.9; future Neuron releases will not support it.

Current users of ``torch-neuron`` version 1.9 are advised to migrate to the latest supported ``torch-neuron`` version.
```
2023-09-29T20:55:21.813Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/benchmarks/inf1/index.rst.txt
```
.. _appnote-performance-benchmark:

Inf1 Inference Performance
==========================

.. contents:: Table of contents
   :local:

The following tables contain the reference inference performance for models in the tutorials. Follow the links on each row to replicate similar results in your own environment. Refer to the :ref:`ec2-then-ec2-setenv` documentation to create a new environment based on the latest Neuron release.

*Last update: September 15th, 2023*

.. _NLP:

Natural Language Processing
---------------------------

.. tab-set::

   .. tab-item:: Throughput optimized

      .. df-table::
         :header-rows: 1

         df = pd.read_csv('neuronperf_nlp_throughput_optimized.csv')
         df_prices = pd.read_csv('instance_prices.csv')
         df = pd.merge(df,df_prices,on='Inst. Type')
         df['Cost per 1M inferences'] = ((1.0e6 / df['Avg Throughput (/sec)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format)
         cols_to_show = ['Model', 'Scripts', 'Framework', 'Inst. Type', 'Avg Throughput (/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M inferences', 'Application Type', 'Neuron Version', 'Run Mode', 'Batch Size', 'Model details' ]
         df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences'])
         int_cols = ['Avg Throughput (/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)']
         df[int_cols] = df[int_cols].round(0).astype('int',copy=True)

   .. tab-item:: Latency optimized

      .. df-table::
         :header-rows: 1

         df = pd.read_csv('neuronperf_nlp_latency_optimized.csv')
         df_prices = pd.read_csv('instance_prices.csv')
         df = pd.merge(df,df_prices,on='Inst. Type')
         df['Cost per 1M inferences'] = ((1.0e6 / df['Avg Throughput (/sec)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format)
         cols_to_show = ['Model', 'Scripts', 'Framework', 'Inst. Type', 'Avg Throughput (/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M inferences', 'Application Type', 'Neuron Version', 'Run Mode', 'Batch Size', 'Model details' ]
         df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences'])
         int_cols = ['Avg Throughput (/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)']
         df[int_cols] = df[int_cols].round(0).astype('int',copy=True)

\*\ *Throughput and latency numbers in this table were computed using* NeuronPerf_\ *. To reproduce these results, install NeuronPerf and run the provided scripts.*

.. _NeuronPerf: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuronperf/index.html

.. df-table::
   :header-rows: 1

   df = pd.read_csv('data.csv')
   df_prices = pd.read_csv('instance_prices.csv')
   df = pd.merge(df,df_prices,on='Inst. Type').query('`Application`=="NLP"')
   df['Cost per 1M inferences'] = ((1.0e6 / df['Avg Throughput (/sec)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format)
   cols_to_show = ['Model', 'Tutorial', 'Framework', 'Inst. Type', 'Avg Throughput (/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M inferences', 'Application Type', 'Neuron Version', 'Run Mode', 'Batch Size', 'Model details' ]
   df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences'])
   int_cols = ['Avg Throughput (/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)']
   df[int_cols] = df[int_cols].round(0).astype('int',copy=True)

\*\ *Throughput and latency numbers in this table were generated using Neuron Tutorials.*

Computer Vision
---------------

.. df-table::
   :header-rows: 1

   df = pd.read_csv('data.csv')
   df_prices = pd.read_csv('instance_prices.csv')
   df = pd.merge(df,df_prices,on='Inst. Type').query('`Application`=="CV"')
   df['Cost per 1M inferences'] = ((1.0e6 / df['Avg Throughput (/sec)']) * (df['On-Demand hourly rate'] / 3.6e3 )).map('${:,.3f}'.format)
   cols_to_show = ['Model', 'Tutorial', 'Framework', 'Inst. Type', 'Avg Throughput (/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M inferences', 'Application Type', 'Neuron Version', 'Run Mode', 'Batch Size', 'Model details' ]
   df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences']).groupby('Model').head(2)
   int_cols = ['Avg Throughput (/sec)', 'Latency P50 (ms)', 'Latency P99 (ms)']
   df[int_cols] = df[int_cols].round(0).astype('int',copy=True)

\*\ *Throughput and latency numbers in this table were generated using Neuron Tutorials.*

.. note::

   **Cost per 1M inferences** is calculated using US East (N. Virginia) On-Demand hourly rate.

   **Real Time** application refers to batch size 1 inference for minimal latency.

   **Batch** application refers to maximum throughput with minimum cost-per-inference.
```
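Each table above is built by the same small pandas pipeline: read the benchmark CSV, join the per-instance price list on ``Inst. Type``, derive the cost column, then select and sort the display columns. A minimal runnable sketch with made-up rows; the real CSVs ship alongside the docs, and the model name, instance type, throughput, and hourly rate below are illustrative only:

```python
import pandas as pd

# Illustrative stand-ins for the benchmark CSV and the price-list CSV.
df = pd.DataFrame({
    'Model': ['example-model'],
    'Inst. Type': ['inf1.xlarge'],
    'Avg Throughput (/sec)': [2000.0],
})
df_prices = pd.DataFrame({
    'Inst. Type': ['inf1.xlarge'],
    'On-Demand hourly rate': [0.36],  # illustrative rate, not a quoted price
})

# Join prices onto the benchmark rows, derive the cost column, then sort.
df = pd.merge(df, df_prices, on='Inst. Type')
df['Cost per 1M inferences'] = ((1.0e6 / df['Avg Throughput (/sec)'])
                                * (df['On-Demand hourly rate'] / 3.6e3)).map('${:,.3f}'.format)
df = df.sort_values(['Model', 'Cost per 1M inferences'])
print(df['Cost per 1M inferences'].iloc[0])  # $0.050
```

Note the cost column is formatted as a string (``'${:,.3f}'``), so the subsequent sort on it is lexicographic; that is harmless here because the ``$`` prefix and fixed three decimals keep same-magnitude values ordered.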
2023-09-29T20:55:21.972Z
Contributing Guidelines FAQs — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/faq/contributing-faq.html#contribute-faq
# Contributing Guidelines FAQs — AWS Neuron Documentation

## Contents

- [How to report Bugs/Feature Requests](#how-to-reporting-bugs-feature-requests)
- [Contributing via Pull Requests](#contributing-via-pull-requests)
- [How to find contributions to work on](#how-to-find-contributions-to-work-on)
- [What is the code of conduct](#what-is-the-code-of-conduct)
- [How to notify for a security issue](#how-to-notify-for-a-security-issue)
- [What is the licensing](#what-is-the-licensing)

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`

## Contributing Guidelines FAQs[#](#contributing-guidelines-faqs "Permalink to this headline")

Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community. Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to respond effectively to your bug report or contribution.

## [How to report Bugs/Feature Requests](#id2)[#](#how-to-reporting-bugs-feature-requests "Permalink to this headline")

We welcome you to use the GitHub issue tracker to report bugs or suggest features. When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already reported it. Please try to include as much information as you can. Details like these are incredibly useful:

- A reproducible test case or series of steps
- The version of our code being used
- Any modifications you've made relevant to the bug
- Anything unusual about your environment or deployment

## [Contributing via Pull Requests](#id3)[#](#contributing-via-pull-requests "Permalink to this headline")

Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

1. You are working against the latest source on the _master_ branch.
2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
3. You open an issue to discuss any significant work - we would hate for your time to be wasted.

To send us a pull request, please:

1. Fork the repository.
2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
3. Ensure local tests pass.
4. Commit to your fork using clear commit messages.
5. Send us a pull request, answering any default questions in the pull request interface.
6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).

## [How to find contributions to work on](#id4)[#](#how-to-find-contributions-to-work-on "Permalink to this headline")

Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.

## [How to notify for a security issue](#id6)[#](#how-to-notify-for-a-security-issue "Permalink to this headline")

If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.

## [What is the licensing](#id7)[#](#what-is-the-licensing "Permalink to this headline")

See the [LICENSE-DOCUMENTATION](https://github.com/aws/aws-neuron-sdk/blob/master/LICENSE-DOCUMENTATION) and [LICENSE-SUMMARY-DOCS-SAMPLES](https://github.com/aws/aws-neuron-sdk/blob/master/LICENSE-SUMMARY-DOCS-SAMPLES) files for our project's licensing. We will ask you to confirm the licensing of your contribution. We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
Transformers Neuron GitHub samples </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox"> <label for="toctree-checkbox-5"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Tracing API for Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) NeuronCore Placement APIs <strong> [Experimental] </strong> </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Analyze API for Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) DataParallel API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" 
type="checkbox"> <label for="toctree-checkbox-6"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html"> NeuronCore Allocation and Model Placement for Inference ( <span class="xref std std-ref"> torch-neuronx </span> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html"> Comparison of Traced Inference versus XLA <span class="xref std std-ref"> Lazy Tensor </span> Inference ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html"> Data Parallel Inference on torch_neuronx </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox"> <label for="toctree-checkbox-7"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/torch/inference-torch-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox"> <label for="toctree-checkbox-8"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" 
href="../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox"> <label for="toctree-checkbox-9"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox"> <label for="toctree-checkbox-10"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox"> <label for="toctree-checkbox-11"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-compilation-python-api.html"> PyTorch Neuron trace Python API </a> </li> 
<li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html"> torch.neuron.DataParallel API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/api-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Core Placement API [Experimental] </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox"> <label for="toctree-checkbox-12"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/torch-neuron/bucketing-app-note.html"> Running Inference on Variable Input Shapes with Bucketing </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html"> Data Parallel Inference on PyTorch Neuron </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html"> Developer Guide - PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) <code class="xref py py-class docutils literal notranslate"> <span class="pre"> LSTM </span> </code> Support </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Core Placement </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html"> 
Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox"> <label for="toctree-checkbox-13"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) Supported operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/troubleshooting-guide.html"> Troubleshooting Guide for PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/torch/torch-neuron/torch-neuron.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuron </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/torch/training-torch-neuronx.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox"> <label for="toctree-checkbox-14"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox"> <label for="toctree-checkbox-15"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/bert.html"> Hugging Face BERT Pretraining Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html"> Multi-Layer Perceptron Training Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html"> PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html"> Fine-tune T5 model on Trn1 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html"> ZeRO-1 Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html"> Analyze for Training Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/additional-examples-training.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox"> <label for="toctree-checkbox-16"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron"> AWS Neuron Reference for Nemo Megatron GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples"> AWS Neuron Samples for EKS </a> </li> <li class="toctree-l4"> <a 
class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples"> AWS Neuron Samples for AWS ParallelCluster </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox"> <label for="toctree-checkbox-17"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html"> PyTorch Neuron neuron_parallel_compile CLI ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html"> PyTorch Neuron Environment Variables ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Profiling API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" 
id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/misc-training.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../frameworks/torch/torch-neuronx/training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal 
notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub 
Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../../frameworks/mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label 
href="../devflows/aws-batch-flows.html"> AWS Batch Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox"> <label for="toctree-checkbox-94"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../devflows/inference/aws-batch-flows.html"> Inference </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../devflows/training/aws-batch-flows.html"> Training </a> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> Learning Neuron </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../arch/index.html"> Architecture </a> <input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox"> <label for="toctree-checkbox-95"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inf1-arch.html"> AWS Inf1 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/trn1-arch.html"> AWS Trn1/Trn1n Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inf2-arch.html"> AWS Inf2 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inferentia.html"> Inferentia Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inferentia2.html"> Inferentia2 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/trainium.html"> Trainium Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/neuroncores-arch.html"> AWS NeuronCore Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" 
href="../arch/model-architecture-fit.html"> Neuron Model Architecture Fit Guidelines </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/glossary.html"> Neuron Glossary </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../arch/neuron-features/index.html"> Features </a> <input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox"> <label for="toctree-checkbox-96"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/data-types.html"> Data Types </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/rounding-modes.html"> Rounding Modes </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuroncore-batching.html"> Neuron Batching </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuroncore-pipeline.html"> NeuronCore Pipeline </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/collective-communication.html"> Collective Communication </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/control-flow.html"> Neuron Control Flow </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/custom-c%2B%2B-operators.html"> Neuron Custom C++ Operators </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/dynamic-shapes.html"> Neuron Dynamic Shapes </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../appnotes/index.html"> Application Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-97" 
name="toctree-checkbox-97" type="checkbox"> <label for="toctree-checkbox-97"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../announcements/neuron2.x/neuron2-intro.html"> Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/neuron1x/introducing-libnrt.html"> Introducing Neuron Runtime 2.x (libnrt.so) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/perf/neuron-cc/performance-tuning.html"> Performance Tuning </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/perf/neuron-cc/parallel-ncgs.html"> Parallel Execution using NEURON_RT_NUM_CORES </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/torch-neuron/rcnn-app-note.html"> Running R-CNNs on Inf1 </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html"> Generative LLM inference with Neuron </a> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../faq.html"> FAQ </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../troubleshooting.html"> Troubleshooting </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> About Neuron </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1"> <a class="reference internal" href="../../release-notes/release.html"> Release Details </a> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../roadmap-readme.html"> Roadmap </a> <input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox"> <label for="toctree-checkbox-98"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference external" 
href="https://github.com/orgs/aws-neuron/projects/1/views/1"> Neuron Public Roadmap </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../support.html"> Support </a> <input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox"> <label for="toctree-checkbox-99"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../sdk-policy.html"> SDK Maintenance Policy </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../security.html"> Security Disclosures </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../contact.html"> Contact Us </a> </li> </ul> </li> </ul> </div> </nav></div> <div class="bd-sidebar__bottom"> <!-- To handle the deprecated key --> <div class="navbar_extra_footer"> Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a> </div> </div> </div> <div id="rtd-footer-container"></div> </div> <!-- A tiny helper pixel to detect if we've scrolled --> <div class="sbt-scroll-pixel-helper"></div> <!-- Main content --> <div class="col py-0 content-container"> <div class="header-article row sticky-top noprint"> <div class="col py-1 d-flex header-article-main"> <div class="header-article__left"> <label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation"> <span class="headerbtn__icon-container"> <i class="fas fa-bars"></i> </span> </label> </div> <div class="header-article__right"> <button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode"> <span class="headerbtn__icon-container"> <i class="fas fa-expand"></i> </span> </button> <div class="menu-dropdown menu-dropdown-repository-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories"> <i class="fab fa-github"></i> 
</button> <div class="menu-dropdown__content"> <ul> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository"> <span class="headerbtn__icon-container"> <i class="fab fa-github"></i> </span> <span class="headerbtn__text-container">repository</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fgeneral/faq/contributing-faq.html&amp;body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue"> <span class="headerbtn__icon-container"> <i class="fas fa-lightbulb"></i> </span> <span class="headerbtn__text-container">open issue</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/general/faq/contributing-faq.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page"> <span class="headerbtn__icon-container"> <i class="fas fa-pencil-alt"></i> </span> <span class="headerbtn__text-container">suggest edit</span> </a> </li> </ul> </div> </div> <div class="menu-dropdown menu-dropdown-download-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Download this page"> <i class="fas fa-download"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="../../_sources/general/faq/contributing-faq.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file"> <span class="headerbtn__icon-container"> <i class="fas fa-file"></i> </span> <span class="headerbtn__text-container">.rst</span> </a> </li> <li> <button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF"> <span class="headerbtn__icon-container"> <i class="fas fa-file-pdf"></i> </span> <span 
class="headerbtn__text-container">.pdf</span> </button> </li> </ul> </div> </div> <label for="__page-toc" class="headerbtn headerbtn-page-toc"> <span class="headerbtn__icon-container"> <i class="fas fa-list"></i> </span> </label> </div> </div> <!-- Table of contents --> <div class="col-md-3 bd-toc show noprint"> <div class="tocsection onthispage pt-5 pb-3"> <i class="fas fa-list"></i> Contents </div> <nav id="bd-toc-nav" aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-to-reporting-bugs-feature-requests"> How to reporting Bugs/Feature Requests </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#contributing-via-pull-requests"> Contributing via Pull Requests </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-to-find-contributions-to-work-on"> How to find contributions to work on </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-is-the-code-of-conduct"> What is the code of conduct </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-to-notify-for-a-security-issue"> How to notify for a security issue </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-is-the-licensing"> What is the licensing </a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>Contributing Guidelines FAQs</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" 
href="#how-to-reporting-bugs-feature-requests"> How to reporting Bugs/Feature Requests </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#contributing-via-pull-requests"> Contributing via Pull Requests </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-to-find-contributions-to-work-on"> How to find contributions to work on </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-is-the-code-of-conduct"> What is the code of conduct </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-to-notify-for-a-security-issue"> How to notify for a security issue </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-is-the-licensing"> What is the licensing </a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> <div class="section" id="contributing-guidelines-faqs"> <span id="contribute-faq"></span><h1>Contributing Guidelines FAQs<a class="headerlink" href="#contributing-guidelines-faqs" title="Permalink to this headline">#</a></h1> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#how-to-reporting-bugs-feature-requests" id="id2">How to reporting Bugs/Feature Requests</a></p></li> <li><p><a class="reference internal" href="#contributing-via-pull-requests" id="id3">Contributing via Pull Requests</a></p></li> <li><p><a class="reference 
internal" href="#how-to-find-contributions-to-work-on" id="id4">How to find contributions to work on</a></p></li> <li><p><a class="reference internal" href="#what-is-the-code-of-conduct" id="id5">What is the code of conduct</a></p></li> <li><p><a class="reference internal" href="#how-to-notify-for-a-security-issue" id="id6">How to notify for a security issue</a></p></li> <li><p><a class="reference internal" href="#what-is-the-licensing" id="id7">What is the licensing</a></p></li> </ul> </div> <p>Whether it’s a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.</p> <p>Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.</p> <div class="section" id="how-to-reporting-bugs-feature-requests"> <h2><a class="toc-backref" href="#id2">How to report Bugs/Feature Requests</a><a class="headerlink" href="#how-to-reporting-bugs-feature-requests" title="Permalink to this headline">#</a></h2> <p>We welcome you to use the GitHub issue tracker to report bugs or suggest features.</p> <p>When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn’t already reported the issue. Please try to include as much information as you can.
Details like these are incredibly useful:</p> <ul class="simple"> <li><p>A reproducible test case or series of steps</p></li> <li><p>The version of our code being used</p></li> <li><p>Any modifications you’ve made relevant to the bug</p></li> <li><p>Anything unusual about your environment or deployment</p></li> </ul> </div> <div class="section" id="contributing-via-pull-requests"> <h2><a class="toc-backref" href="#id3">Contributing via Pull Requests</a><a class="headerlink" href="#contributing-via-pull-requests" title="Permalink to this headline">#</a></h2> <p>Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:</p> <ol class="arabic simple"> <li><p>You are working against the latest source on the <em>master</em> branch.</p></li> <li><p>You check existing open, and recently merged, pull requests to make sure someone else hasn’t addressed the problem already.</p></li> <li><p>You open an issue to discuss any significant work - we would hate for your time to be wasted.</p></li> </ol> <p>To send us a pull request, please:</p> <ol class="arabic simple"> <li><p>Fork the repository.</p></li> <li><p>Modify the source; please focus on the specific change you are contributing. 
If you also reformat all the code, it will be hard for us to focus on your change.</p></li> <li><p>Ensure local tests pass.</p></li> <li><p>Commit to your fork using clear commit messages.</p></li> <li><p>Send us a pull request, answering any default questions in the pull request interface.</p></li> <li><p>Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.</p></li> </ol> <p>GitHub provides additional documentation on <a class="reference external" href="https://help.github.com/articles/fork-a-repo/">forking a repository</a> and <a class="reference external" href="https://help.github.com/articles/creating-a-pull-request/">creating a pull request</a>.</p> </div> <div class="section" id="how-to-find-contributions-to-work-on"> <h2><a class="toc-backref" href="#id4">How to find contributions to work on</a><a class="headerlink" href="#how-to-find-contributions-to-work-on" title="Permalink to this headline">#</a></h2> <p>Looking at the existing issues is a great way to find something to contribute to. Our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), so looking at any ‘help wanted’ issues is a great place to start.</p> </div> <div class="section" id="what-is-the-code-of-conduct"> <h2><a class="toc-backref" href="#id5">What is the code of conduct</a><a class="headerlink" href="#what-is-the-code-of-conduct" title="Permalink to this headline">#</a></h2> <p>This project has adopted the <a class="reference external" href="https://aws.github.io/code-of-conduct">Amazon Open Source Code of Conduct</a>.
For more information, see the <a class="reference external" href="https://aws.github.io/code-of-conduct-faq">Code of Conduct FAQ</a> or contact <a class="reference external" href="mailto:opensource-codeofconduct%40amazon.com">opensource-codeofconduct<span>@</span>amazon<span>.</span>com</a> with any additional questions or comments.</p> </div> <div class="section" id="how-to-notify-for-a-security-issue"> <h2><a class="toc-backref" href="#id6">How to notify for a security issue</a><a class="headerlink" href="#how-to-notify-for-a-security-issue" title="Permalink to this headline">#</a></h2> <p>If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our <a class="reference external" href="http://aws.amazon.com/security/vulnerability-reporting/">vulnerability reporting page</a>. Please do <strong>not</strong> create a public GitHub issue.</p> </div> <div class="section" id="what-is-the-licensing"> <h2><a class="toc-backref" href="#id7">What is the licensing</a><a class="headerlink" href="#what-is-the-licensing" title="Permalink to this headline">#</a></h2> <p>See the <a class="reference external" href="https://github.com/aws/aws-neuron-sdk/blob/master/LICENSE-DOCUMENTATION">LICENSE-DOCUMENTATION</a> and <a class="reference external" href="https://github.com/aws/aws-neuron-sdk/blob/master/LICENSE-SUMMARY-DOCS-SAMPLES">LICENSE-SUMMARY-DOCS-SAMPLES</a> files for our project’s licensing.
We will ask you to confirm the licensing of your contribution.</p> <p>We may ask you to sign a <a class="reference external" href="http://en.wikipedia.org/wiki/Contributor_License_Agreement">Contributor License Agreement (CLA)</a> for larger changes.</p> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> </div> </div> </div> </main> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </body></html>
2023-09-29T20:55:22.114Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/benchmarks/trn1/trn1-performance.rst.txt
```
.. _trn1-performance:

Trn1/Trn1n Performance
======================

.. contents:: Table of contents
   :local:

*Last update: September 15th, 2023*

.. _NLP:

Training Performance (Trn1 / Trn1n)
-----------------------------------

.. csv-table::
   :file: trn1_trn1n_nlp_data.csv
   :header-rows: 1

.. note::

   **TP (Tensor Parallel), PP (Pipeline Parallel) and DP (Data Parallel)**

   Topology configuration refers to the degrees of 3D parallelism (how the model and data are sharded across NeuronCores).
   TP and PP are specified in the run script, and DP is calculated by dividing the **world size** (number of nodes/instances * number of NeuronCores per instance) by the TP and PP degrees.
   For example, with TP = 4, PP = 4 and 32 trn1.32xlarge instances, the world size is 32 (instances) * 32 (NeuronCores per instance) = 1024, so the DP degree is 1024 / (4 * 4) = 64.

.. note::

   Read more about strong vs. weak scaling here: :ref:`neuron-training-faq`

Inference Performance
---------------------

.. tab-set::

   .. tab-item:: Throughput optimized

      .. df-table::
         :header-rows: 1

         df = pd.read_csv('throughput_data.csv')
         df_prices = pd.read_csv('trn1_instance_prices.csv')
         df = pd.merge(df, df_prices, on='Inst. Type')
         df['Cost per 1M inferences'] = ((1.0e6 / df['Throughput (inference/sec)']) * (df['On-Demand hourly rate'] / 3.6e3)).map('${:,.3f}'.format)
         cols_to_show = ['Model', 'Scripts', 'Framework', 'Inst. Type', 'Task', 'Throughput (inference/sec)',
                         'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M inferences', 'Application Type',
                         'Neuron Version', 'Run Mode', 'Batch Size', 'Sequence Length', 'Model Data Type',
                         'Compilation Autocast Data Type', 'OS Type']
         df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences'])
         df['Throughput (inference/sec)'] = df['Throughput (inference/sec)'].round(2).astype('float', copy=True)
         int_cols = ['Latency P50 (ms)', 'Latency P99 (ms)']
         df[int_cols] = df[int_cols].round(2).astype('float', copy=True)

      .. note::

         **Cost per 1M inferences** is calculated using the On-Demand hourly rate.

         **Real Time** application refers to batch size 1 inference for minimal latency. **Batch** application refers to maximum throughput with minimum cost-per-inference.

   .. tab-item:: Latency optimized

      .. df-table::
         :header-rows: 1

         df = pd.read_csv('latency_data.csv')
         df_prices = pd.read_csv('trn1_instance_prices.csv')
         df = pd.merge(df, df_prices, on='Inst. Type')
         df['Cost per 1M inferences'] = ((1.0e6 / df['Throughput (inference/sec)']) * (df['On-Demand hourly rate'] / 3.6e3)).map('${:,.3f}'.format)
         cols_to_show = ['Model', 'Scripts', 'Framework', 'Inst. Type', 'Task', 'Throughput (inference/sec)',
                         'Latency P50 (ms)', 'Latency P99 (ms)', 'Cost per 1M inferences', 'Application Type',
                         'Neuron Version', 'Run Mode', 'Batch Size', 'Sequence Length', 'Model Data Type',
                         'Compilation Autocast Data Type', 'OS Type']
         df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences'])
         df['Throughput (inference/sec)'] = df['Throughput (inference/sec)'].round(2).astype('float', copy=True)
         int_cols = ['Latency P50 (ms)', 'Latency P99 (ms)']
         df[int_cols] = df[int_cols].round(2).astype('float', copy=True)

      .. note::

         **Cost per 1M inferences** is calculated using the On-Demand hourly rate.

         **Real Time** application refers to batch size 1 inference for minimal latency. **Batch** application refers to maximum throughput with minimum cost-per-inference.

Large Language Models Inference Performance
-------------------------------------------

.. tab-set::

   .. tab-item:: Throughput optimized

      .. df-table::
         :header-rows: 1

         df = pd.read_csv('trn1_throughput_data_LLM.csv')
         df_prices = pd.read_csv('trn1_instance_prices.csv')
         df = pd.merge(df, df_prices, on='Inst. Type')
         df['Cost per 1M inferences'] = ((1.0e6 / df['Throughput (tokens/second)']) * (df['On-Demand hourly rate'] / 3.6e3)).map('${:,.3f}'.format)
         cols_to_show = ['Model', 'Scripts', 'Framework', 'Inst. Type', 'Task', 'Throughput (tokens/second)',
                         'Latency per Token P50 (ms)', 'Latency per Token P99 (ms)', 'Cost per 1M inferences',
                         'Application Type', 'Neuron Version', 'Run Mode', 'TP Degree', 'DP Degree', 'Batch Size',
                         'Sequence Length', 'Input Length', 'Output Length', 'Model Data Type',
                         'Compilation Autocast Data Type']
         df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences'])
         df['Throughput (tokens/second)'] = df['Throughput (tokens/second)'].round(2).astype('float', copy=True)
         int_cols = ['Latency per Token P50 (ms)', 'Latency per Token P99 (ms)']
         df[int_cols] = df[int_cols].round(2).astype('float', copy=True)

      .. note::

         **Throughput (tokens/second)** counts both input and output tokens.

         **Latency per Token** counts both input and output tokens.

         **Cost per 1M inferences** is calculated using the On-Demand hourly rate.

         **Real Time** application refers to batch size 1 inference for minimal latency. **Batch** application refers to maximum throughput with minimum cost-per-inference.

   .. tab-item:: Latency optimized

      .. df-table::
         :header-rows: 1

         df = pd.read_csv('trn1_latency_data_LLM.csv')
         df_prices = pd.read_csv('trn1_instance_prices.csv')
         df = pd.merge(df, df_prices, on='Inst. Type')
         df['Cost per 1M inferences'] = ((1.0e6 / df['Throughput (tokens/second)']) * (df['On-Demand hourly rate'] / 3.6e3)).map('${:,.3f}'.format)
         cols_to_show = ['Model', 'Scripts', 'Framework', 'Inst. Type', 'Task', 'Throughput (tokens/second)',
                         'Latency per Token P50 (ms)', 'Latency per Token P99 (ms)', 'Cost per 1M inferences',
                         'Application Type', 'Neuron Version', 'Run Mode', 'TP Degree', 'DP Degree', 'Batch Size',
                         'Sequence Length', 'Input Length', 'Output Length', 'Model Data Type',
                         'Compilation Autocast Data Type']
         df = df[cols_to_show].sort_values(['Model', 'Cost per 1M inferences'])
         df['Throughput (tokens/second)'] = df['Throughput (tokens/second)'].round(2).astype('float', copy=True)
         int_cols = ['Latency per Token P50 (ms)', 'Latency per Token P99 (ms)']
         df[int_cols] = df[int_cols].round(2).astype('float', copy=True)

      .. note::

         **Throughput (tokens/second)** counts both input and output tokens.

         **Latency per Token** counts both input and output tokens.

         **Cost per 1M inferences** is calculated using the On-Demand hourly rate.

         **Real Time** application refers to batch size 1 inference for minimal latency. **Batch** application refers to maximum throughput with minimum cost-per-inference.
```
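The DP-degree and cost-per-1M-inferences arithmetic in the notes above can be sketched in a few lines of Python. This is a minimal illustration: the function names and the example numbers are my own choices, not part of the Neuron benchmark scripts.

```python
# Sketch of the parallelism and cost formulas described in the notes above.
# Function names and example values are hypothetical, for illustration only.

def dp_degree(num_instances: int, cores_per_instance: int, tp: int, pp: int) -> int:
    """Data-parallel degree = world size / (TP degree * PP degree)."""
    world_size = num_instances * cores_per_instance  # total NeuronCores
    return world_size // (tp * pp)

def cost_per_1m_inferences(throughput_per_sec: float, hourly_rate_usd: float) -> float:
    """Seconds needed for 1M inferences, billed at the per-second On-Demand rate."""
    return (1.0e6 / throughput_per_sec) * (hourly_rate_usd / 3.6e3)

# Worked example from the training note: 32 x trn1.32xlarge (32 NeuronCores
# each) with TP = 4 and PP = 4 gives world size 1024 and DP degree 64.
print(dp_degree(32, 32, 4, 4))  # → 64
```

The same `cost_per_1m_inferences` expression appears, vectorized over a pandas column, inside each `df-table` directive above.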
2023-09-29T20:55:22.208Z
Neuron Setup Troubleshooting — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/setup/setup-troubleshooting.html#neuron-setup-troubleshooting
# Neuron Setup Troubleshooting — AWS Neuron Documentation

Table of contents

- [How to update Neuron repository GNU Privacy Guard (GPG) key for Ubuntu installation](#how-to-update-neuron-repository-gnu-privacy-guard-gpg-key-for-ubuntu-installation)
  - [Description](#description)
  - [Solution](#solution)
- [`pip install --upgrade` wouldn’t upgrade `neuron-cc`](#pip-install-upgrade-wouldn-t-upgrade-neuron-cc)
  - [Description](#id2)
  - [Solution](#id3)

## [How to update Neuron repository GNU Privacy Guard (GPG) key for Ubuntu installation](#id4)[#](#how-to-update-neuron-repository-gnu-privacy-guard-gpg-key-for-ubuntu-installation "Permalink to this headline")

### [Description](#id5)[#](#description "Permalink to this headline")

The GPG key for the Neuron repository ([https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB](https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB)) is hosted on the Ubuntu (Canonical) key server. The key was originally uploaded with a three (3) year expiry, which lapsed on 11/10/22. Any Ubuntu or Debian customer using the Neuron `apt` repository will see the following error:

```
While running an apt-get update command on an AWS deep learning image (us-east-1/ami-01fce297f68912e45) I get this output:

Err:6 https://apt.repos.neuron.amazonaws.com bionic InRelease
  The following signatures were invalid: EXPKEYSIG 5749CAD8646D9185 Amazon AWS Neuron <[email protected]>
Fetched 172 kB in 1s (161 kB/s)
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used.
GPG error: https://apt.repos.neuron.amazonaws.com bionic InRelease: The following signatures were invalid: EXPKEYSIG 5749CAD8646D9185 Amazon AWS Neuron <[email protected]>
```

### [Solution](#id6)[#](#solution "Permalink to this headline")

To solve this issue, run the following commands to fetch the new key before running `apt-get update`:

```
wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB | sudo apt-key add -

# Update OS packages
sudo apt-get update -y
```

## [`pip install --upgrade` wouldn’t upgrade `neuron-cc`](#id7)[#](#pip-install-upgrade-wouldn-t-upgrade-neuron-cc "Permalink to this headline")

### [Solution](#id9)[#](#id3 "Permalink to this headline")

To solve this issue, either upgrade to a newer `pip` version or use `--force` when upgrading, for example:

`pip install --force torch-neuron neuron-cc[tensorflow] torchvision`

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal 
notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub 
Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" 
href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../../frameworks/mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label 
for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> 
<label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/api_guide.html"> 
API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism 
</a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../neuron-runtime/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox"> <label for="toctree-checkbox-47"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../neuron-runtime/nrt-api-guide.html"> Runtime API </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../neuron-runtime/configuration-guide.html"> Configuration Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox"> <label 
for="toctree-checkbox-48"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../neuron-runtime/nrt-configurable-parameters.html"> Runtime Configuration </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../neuron-runtime/misc-runtime.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox"> <label for="toctree-checkbox-49"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../neuron-runtime/nrt-troubleshoot.html"> Troubleshooting on Inf1 and Trn1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../neuron-runtime/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/runtime/aws-neuronx-runtime-lib/index.html"> Neuron Runtime Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/runtime/aws-neuronx-dkms/index.html"> Neuron Driver Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/runtime/aws-neuronx-collectives/index.html"> Neuron Collectives Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../compiler/index.html"> Neuron Compiler </a> <input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox"> <label for="toctree-checkbox-50"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../compiler/neuronx-cc.html"> Neuron Compiler for Trn1 &amp; Inf2 </a> <input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox"> <label for="toctree-checkbox-51"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> 
<a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox"> <label for="toctree-checkbox-52"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html"> Neuron Compiler CLI Reference Guide </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuronx-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox"> <label for="toctree-checkbox-53"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html"> Mixed Precision and Performance-accuracy Tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuronx-cc/misc-neuronx-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox"> <label for="toctree-checkbox-54"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../compiler/neuronx-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuronx-cc/index.html"> What's New </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../compiler/neuron-cc.html"> Neuron Compiler for Inf1 </a> <input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox"> <label for="toctree-checkbox-55"> <i 
class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuron-cc/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox"> <label for="toctree-checkbox-56"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../compiler/neuron-cc/command-line-reference.html"> Neuron compiler CLI Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuron-cc/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox"> <label for="toctree-checkbox-57"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/neuron-cc/mixed-precision.html"> Mixed precision and performance-accuracy tuning ( <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../compiler/neuron-cc/misc-neuron-cc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox"> <label for="toctree-checkbox-58"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../compiler/neuron-cc/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html"> Neuron Supported operators </a> </li> </ul> </li> </ul> 
section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-to-update-neuron-repository-gnu-privacy-guard-gpg-key-for-ubuntu-installation"> How to update Neuron repository GNU Privacy Guard (GPG) key for Ubuntu installation </a> <ul class="nav section-nav flex-column"> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#description"> Description </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#solution"> Solution </a> </li> </ul> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#pip-install-upgrade-wouldn-t-upgrade-neuron-cc"> <code class="docutils literal notranslate"> <span class="pre"> pip </span> <span class="pre"> install </span> <span class="pre"> --upgrade </span> </code> wouldn’t upgrade <code class="docutils literal notranslate"> <span class="pre"> neuron-cc </span> </code> </a> <ul class="nav section-nav flex-column"> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#id2"> Description </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#id3"> Solution </a> </li> </ul> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> <div class="section" id="neuron-setup-troubleshooting"> <span id="id1"></span><h1>Neuron Setup Troubleshooting<a class="headerlink" href="#neuron-setup-troubleshooting" title="Permalink to this headline">#</a></h1> <div class="contents local topic" id="table-of-contents"> <p 
class="topic-title">Table of contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#how-to-update-neuron-repository-gnu-privacy-guard-gpg-key-for-ubuntu-installation" id="id4">How to update Neuron repository GNU Privacy Guard (GPG) key for Ubuntu installation</a></p> <ul> <li><p><a class="reference internal" href="#description" id="id5">Description</a></p></li> <li><p><a class="reference internal" href="#solution" id="id6">Solution</a></p></li> </ul> </li> <li><p><a class="reference internal" href="#pip-install-upgrade-wouldn-t-upgrade-neuron-cc" id="id7"><code class="docutils literal notranslate"><span class="pre">pip</span> <span class="pre">install</span> <span class="pre">--upgrade</span></code> wouldn’t upgrade <code class="docutils literal notranslate"><span class="pre">neuron-cc</span></code></a></p> <ul> <li><p><a class="reference internal" href="#id2" id="id8">Description</a></p></li> <li><p><a class="reference internal" href="#id3" id="id9">Solution</a></p></li> </ul> </li> </ul> </div> <div class="section" id="how-to-update-neuron-repository-gnu-privacy-guard-gpg-key-for-ubuntu-installation"> <span id="gpg-key-update"></span><h2><a class="toc-backref" href="#id4">How to update Neuron repository GNU Privacy Guard (GPG) key for Ubuntu installation</a><a class="headerlink" href="#how-to-update-neuron-repository-gnu-privacy-guard-gpg-key-for-ubuntu-installation" title="Permalink to this headline">#</a></h2> <div class="section" id="description"> <h3><a class="toc-backref" href="#id5">Description</a><a class="headerlink" href="#description" title="Permalink to this headline">#</a></h3> <p>The GPG key for the Neuron repository (<a class="reference external" href="https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB">https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB</a>) is installed on the Ubuntu (Canonical) server. The key was originally uploaded with an expiry of three (3) years, which has
expired on 11/10/22.</p> <p>Any customer of Ubuntu or Debian using Neuron <code class="docutils literal notranslate"><span class="pre">apt</span></code> repository will get the following error:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">While</span> <span class="n">running</span> <span class="n">an</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">update</span> <span class="n">command</span> <span class="n">on</span> <span class="n">an</span> <span class="n">AWS</span> <span class="n">deep</span> <span class="n">learning</span> <span class="n">image</span> <span class="p">(</span><span class="n">us</span><span class="o">-</span><span class="n">east</span><span class="o">-</span><span class="mi">1</span><span class="o">/</span><span class="n">ami</span><span class="o">-</span><span class="mi">01</span><span class="n">fce297f68912e45</span><span class="p">)</span> <span class="n">I</span> <span class="n">get</span> <span class="n">this</span> <span class="n">output</span><span class="p">:</span> <span class="n">Err</span><span class="p">:</span><span class="mi">6</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">apt</span><span class="o">.</span><span class="n">repos</span><span class="o">.</span><span class="n">neuron</span><span class="o">.</span><span class="n">amazonaws</span><span class="o">.</span><span class="n">com</span> <span class="p">(</span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">apt</span><span class="o">.</span><span class="n">repos</span><span class="o">.</span><span class="n">neuron</span><span class="o">.</span><span class="n">amazonaws</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="p">)</span> <span class="n">bionic</span> <span class="n">InRelease</span> <span class="n">The</span> <span 
class="n">following</span> <span class="n">signatures</span> <span class="n">were</span> <span class="n">invalid</span><span class="p">:</span> <span class="n">EXPKEYSIG</span> <span class="mi">5749</span><span class="n">CAD8646D9185</span> <span class="n">Amazon</span> <span class="n">AWS</span> <span class="n">Neuron</span> <span class="o">&lt;</span><span class="n">neuron</span><span class="o">-</span><span class="n">maintainers</span><span class="nd">@amazon</span><span class="o">.</span><span class="n">com</span><span class="o">&gt;</span> <span class="n">Fetched</span> <span class="mi">172</span> <span class="n">kB</span> <span class="ow">in</span> <span class="mi">1</span><span class="n">s</span> <span class="p">(</span><span class="mi">161</span> <span class="n">kB</span><span class="o">/</span><span class="n">s</span><span class="p">)</span> <span class="n">Reading</span> <span class="n">package</span> <span class="n">lists</span><span class="o">...</span> <span class="n">Done</span> <span class="n">W</span><span class="p">:</span> <span class="n">An</span> <span class="n">error</span> <span class="n">occurred</span> <span class="n">during</span> <span class="n">the</span> <span class="n">signature</span> <span class="n">verification</span><span class="o">.</span> <span class="n">The</span> <span class="n">repository</span> <span class="ow">is</span> <span class="ow">not</span> <span class="n">updated</span> <span class="ow">and</span> <span class="n">the</span> <span class="n">previous</span> <span class="n">index</span> <span class="n">files</span> <span class="n">will</span> <span class="n">be</span> <span class="n">used</span><span class="o">.</span> <span class="n">GPG</span> <span class="n">error</span><span class="p">:</span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">apt</span><span class="o">.</span><span class="n">repos</span><span class="o">.</span><span class="n">neuron</span><span 
class="o">.</span><span class="n">amazonaws</span><span class="o">.</span><span class="n">com</span> <span class="p">(</span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">apt</span><span class="o">.</span><span class="n">repos</span><span class="o">.</span><span class="n">neuron</span><span class="o">.</span><span class="n">amazonaws</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="p">)</span> <span class="n">bionic</span> <span class="n">InRelease</span><span class="p">:</span> <span class="n">The</span> <span class="n">following</span> <span class="n">signatures</span> <span class="n">were</span> <span class="n">invalid</span><span class="p">:</span> <span class="n">EXPKEYSIG</span> <span class="mi">5749</span><span class="n">CAD8646D9185</span> <span class="n">Amazon</span> <span class="n">AWS</span> <span class="n">Neuron</span> <span class="o">&lt;</span><span class="n">neuron</span><span class="o">-</span><span class="n">maintainers</span><span class="nd">@amazon</span><span class="o">.</span><span class="n">com</span><span class="o">&gt;</span> </pre></div> </div> </div> <div class="section" id="solution"> <h3><a class="toc-backref" href="#id6">Solution</a><a class="headerlink" href="#solution" title="Permalink to this headline">#</a></h3> <p>To solve this issue, you need to run the following commands to fetch the new key before running <code class="docutils literal notranslate"><span class="pre">apt-get</span> <span class="pre">update</span></code></p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">wget</span> <span class="o">-</span><span class="n">qO</span> <span class="o">-</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">apt</span><span class="o">.</span><span class="n">repos</span><span class="o">.</span><span class="n">neuron</span><span 
class="o">.</span><span class="n">amazonaws</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">GPG</span><span class="o">-</span><span class="n">PUB</span><span class="o">-</span><span class="n">KEY</span><span class="o">-</span><span class="n">AMAZON</span><span class="o">-</span><span class="n">AWS</span><span class="o">-</span><span class="n">NEURON</span><span class="o">.</span><span class="n">PUB</span> <span class="o">|</span> <span class="n">sudo</span> <span class="n">apt</span><span class="o">-</span><span class="n">key</span> <span class="n">add</span> <span class="o">-</span> <span class="c1"># Update OS packages</span> <span class="n">sudo</span> <span class="n">apt</span><span class="o">-</span><span class="n">get</span> <span class="n">update</span> <span class="o">-</span><span class="n">y</span> </pre></div> </div> </div> </div> <div class="section" id="pip-install-upgrade-wouldn-t-upgrade-neuron-cc"> <h2><a class="toc-backref" href="#id7"><code class="docutils literal notranslate"><span class="pre">pip</span> <span class="pre">install</span> <span class="pre">--upgrade</span></code> wouldn’t upgrade <code class="docutils literal notranslate"><span class="pre">neuron-cc</span></code></a><a class="headerlink" href="#pip-install-upgrade-wouldn-t-upgrade-neuron-cc" title="Permalink to this headline">#</a></h2> <div class="section" id="id2"> <h3><a class="toc-backref" href="#id8">Description</a><a class="headerlink" href="#id2" title="Permalink to this headline">#</a></h3> <p>When trying to upgrade to a newer Neuron release, for example by calling:</p> <p><code class="docutils literal notranslate"><span class="pre">pip</span> <span class="pre">install</span> <span class="pre">--upgrade</span> <span class="pre">torch-neuron</span> <span class="pre">neuron-cc[tensorflow]</span> <span class="pre">torchvision</span></code></p> <p><code class="docutils literal notranslate"><span class="pre">neuron-cc</span></code> 
is not upgraded.</p> <p>This can be a result of a bug in certain <code class="docutils literal notranslate"><span class="pre">pip</span></code> versions, for example <a class="reference external" href="https://github.com/pypa/pip/issues/10173">pip install upgrade will not upgrade package if extras_require specified</a>.</p> </div> <div class="section" id="id3"> <h3><a class="toc-backref" href="#id9">Solution</a><a class="headerlink" href="#id3" title="Permalink to this headline">#</a></h3> <p>To solve this issue, you can either upgrade to a newer <code class="docutils literal notranslate"><span class="pre">pip</span></code> version or use <code class="docutils literal notranslate"><span class="pre">--force</span></code> when trying to upgrade, for example:</p> <p><code class="docutils literal notranslate"><span class="pre">pip</span> <span class="pre">install</span> <span class="pre">--force</span> <span class="pre">torch-neuron</span> <span class="pre">neuron-cc[tensorflow]</span> <span class="pre">torchvision</span></code></p> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> </div> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
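The GPG-key fix above uses `apt-key add`, which is deprecated on newer Ubuntu releases. As a hedged alternative sketch (not from the original page): the same key can be dearmored into a dedicated keyring and referenced via `signed-by` in the source list. The keyring path and list filename below are illustrative choices, not official Neuron instructions.

```shell
# Fetch the Neuron public key and store it in a dedicated keyring
# instead of the deprecated apt-key store.
wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB \
  | gpg --dearmor \
  | sudo tee /usr/share/keyrings/aws-neuron-keyring.gpg > /dev/null

# Point the Neuron source list at that keyring; $(lsb_release -cs)
# expands to the Ubuntu codename (e.g. "bionic" or "focal").
echo "deb [signed-by=/usr/share/keyrings/aws-neuron-keyring.gpg] https://apt.repos.neuron.amazonaws.com $(lsb_release -cs) main" \
  | sudo tee /etc/apt/sources.list.d/neuron.list

# Update OS packages
sudo apt-get update -y
```

Unlike a key added with `apt-key add`, a `signed-by` keyring is trusted only for the repository that names it, so an expired or rotated key can be replaced by overwriting the single keyring file.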
2023-09-29T20:55:22.305Z
Roadmap FAQ — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/faq/roadmap-faq.html#neuron-roadmap-faq
# Roadmap FAQ — AWS Neuron Documentation _This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n` ## Roadmap FAQ[#](#roadmap-faq "Permalink to this headline") Table of contents - [Why did you build this?](#why-did-you-build-this) - [What do the roadmap categories mean?](#what-do-the-roadmap-categories-mean) - [Why are there no dates on your roadmap?](#why-are-there-no-dates-on-your-roadmap) - [Is everything on the roadmap?](#is-everything-on-the-roadmap) - [How can I provide feedback or ask for more information?](#how-can-i-provide-feedback-or-ask-for-more-information) - [How can I request a feature be added to the roadmap?](#how-can-i-request-a-feature-be-added-to-the-roadmap) - [Can I “+1” existing issues?](#can-i-1-existing-issues) ## [Why did you build this?](#id1)[#](#why-did-you-build-this "Permalink to this headline") A: We know that our customers are making decisions and plans based on what we are developing, and we want to provide them with the right visibility into what we are working on, as well as the opportunity to provide direct feedback. ## [What do the roadmap categories mean?](#id2)[#](#what-do-the-roadmap-categories-mean "Permalink to this headline") - **Roadmap Requests** - Requests we received and are considering adding to the roadmap. This is a great phase to give us feedback and let us know if you need this feature as well. - **Working on it** - In progress; we might still be working through the implementation details or scoping things out. This is a great phase to give us feedback as to how you want to see something implemented. We’ll benefit from your specific use cases here. - **Completed** - Feature complete and supported by Neuron. ## [Why are there no dates on your roadmap?](#id3)[#](#why-are-there-no-dates-on-your-roadmap "Permalink to this headline") A: We are not providing exact target dates for releases because we prioritize operational excellence, security, and quality over hitting a specific date. 
If you have an urgent need for a feature, please contact us directly at [[email protected]](mailto:aws-neuron-support%40amazon.com). ## [Is everything on the roadmap?](#id4)[#](#is-everything-on-the-roadmap "Permalink to this headline") A: We are focusing on upgrades for existing features, as well as building new features. We will keep adding features and capabilities to this roadmap as time progresses. ## [How can I request a feature be added to the roadmap?](#id6)[#](#how-can-i-request-a-feature-be-added-to-the-roadmap "Permalink to this headline") A: We encourage you to open an issue. All community-submitted issues will be reviewed by the roadmap maintainers. ## [Can I “+1” existing issues?](#id7)[#](#can-i-1-existing-issues "Permalink to this headline") A: We strongly encourage you to do so, as it helps us understand which issues will have the widest impact. You can navigate to the issue details page and add a reaction (thumbs up). There are seven types of reactions supported (thumbs down “-1”, confused, heart, watching, laugh, hooray, and thumbs up +1). _This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
<!DOCTYPE html><html lang="en"><head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Roadmap FAQ — AWS Neuron Documentation</title> <!-- Loaded before other Sphinx assets --> <link href="../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet"> <link href="../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet"> <link rel="stylesheet" href="../../_static/vendor/fontawesome/5.13.0/css/all.min.css"> <link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2"> <link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2"> <link rel="stylesheet" type="text/css" href="../../_static/pygments.css"> <link rel="stylesheet" href="../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css"> <link rel="stylesheet" type="text/css" href="../../_static/css/custom.css"> <link rel="stylesheet" type="text/css" href="../../_static/styles/sphinx-book-theme.css"> <link rel="stylesheet" type="text/css" href="../../_static/contentui.css"> <link rel="stylesheet" type="text/css" href="../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css"> <link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css"> <!-- Pre-loaded scripts that we'll load fully later --> <link rel="preload" as="script" href="../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"> <script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&amp;l=dataLayer&amp;cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../" id="documentation_options" src="../../_static/documentation_options.js"></script> <script 
src="../../_static/jquery.js"></script> <script src="../../_static/underscore.js"></script> <script src="../../_static/doctools.js"></script> <script src="../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script> <script src="../../_static/contentui.js"></script> <script src="../../_static/design-tabs.js"></script> <script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script> <script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script> <link rel="index" title="Index" href="../../genindex.html"> <link rel="search" title="Search" href="../../search.html"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="docsearch:language" content="en"> <!-- Google Analytics --> <style type="text/css"> ul.ablog-archive { list-style: none; overflow: auto; margin-left: 0px; } ul.ablog-archive li { float: left; margin-right: 5px; font-size: 80%; } ul.postlist a { font-style: italic; } ul.postlist-style-disc { list-style-type: disc; } ul.postlist-style-none { list-style-type: none; } ul.postlist-style-circle { list-style-type: circle; } </style> <!-- RTD Extra Head --> <link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css"> <script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "general/faq/roadmap-faq", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script> <!-- Using this variable directly instead of using `JSON.parse` is deprecated. 
The READTHEDOCS_DATA global variable will be removed in the future. --> <script type="text/javascript"> READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML); </script> <script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script> <!-- end RTD <extrahead> --> <script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head> <body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled"> <!-- Checkboxes to toggle the left sidebar --> <input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar"> <label class="overlay overlay-navbar" for="__navigation"> <div class="visually-hidden">Toggle navigation sidebar</div> </label> <!-- Checkboxes to toggle the in-page toc --> <input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents"> <label class="overlay overlay-pagetoc" for="__page-toc"> <div class="visually-hidden">Toggle in-page Table of Contents</div> </label> <!-- Headers at the top --> <div class="announcement header-item noprint">Neuron 2.14.0 is released! 
check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div> <div class="header header-item noprint"></div> <div class="container-fluid" id="banner"></div> <div class="container-xl"> <div class="row"> <!-- Sidebar --> <div class="bd-sidebar noprint" id="site-navigation"> <div class="bd-sidebar__content"> <div class="bd-sidebar__top"><div class="navbar-brand-box"> <a class="navbar-brand text-wrap" href="../../index.html"> <!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 --> <img src="../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo"> <h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1> </a> </div><form class="bd-search d-flex align-items-center" action="../../search.html" method="get"> <i class="icon fas fa-search"></i> <input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." 
autocomplete="off"> </form><nav class="bd-links" id="bd-docs-nav" aria-label="Main"> <div class="bd-toc-item active"> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> Overview </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1"> <a class="reference internal" href="../quick-start/docs-quicklinks.html"> Quick Links </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../quick-start/index.html"> Get Started with Neuron </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../quick-start/github-samples.html"> GitHub Samples </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../benchmarks/index.html"> Performance </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../../release-notes/index.html"> What’s New </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../announcements/index.html"> Announcements </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Frameworks </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../frameworks/torch/index.html"> PyTorch Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox"> <label for="toctree-checkbox-1"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../frameworks/torch/torch-setup.html"> Pytorch Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/torch/inference-torch-neuronx.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox"> <label for="toctree-checkbox-2"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" 
href="../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox"> <label for="toctree-checkbox-3"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html"> Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html"> BERT TorchServe Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html"> LibTorch C++ Tutorial </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html"> Compiling and Deploying ResNet50 on Trn1 or Inf2 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html"> T5 model inference on Trn1 or Inf2 </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox"> <label for="toctree-checkbox-4"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/"> AWS Neuron Samples GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx"> 
Transformers Neuron GitHub samples </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox"> <label for="toctree-checkbox-5"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Tracing API for Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) NeuronCore Placement APIs <strong> [Experimental] </strong> </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Analyze API for Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) DataParallel API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" 
type="checkbox"> <label for="toctree-checkbox-6"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html"> NeuronCore Allocation and Model Placement for Inference ( <span class="xref std std-ref"> torch-neuronx </span> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html"> Comparison of Traced Inference versus XLA <span class="xref std std-ref"> Lazy Tensor </span> Inference ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html"> Data Parallel Inference on torch_neuronx </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox"> <label for="toctree-checkbox-7"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../frameworks/torch/inference-torch-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox"> <label for="toctree-checkbox-8"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" 
class="toctree-l3"> <a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../neuron-customops/misc-customops.html"> Misc (Neuron Custom C++ Operators) </a> <input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox"> <label for="toctree-checkbox-63"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/customcxxps/gpsimd-tools.html"> Neuron Custom C++ Tools Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/customcxxps/gpsimd-customop-lib.html"> Neuron Custom C++ Library Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../tools/index.html"> Neuron Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox"> <label for="toctree-checkbox-64"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../tools/neuron-sys-tools/index.html"> System Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox"> <label for="toctree-checkbox-65"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuron-sys-tools/neuron-monitor-user-guide.html"> Neuron-Monitor User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuron-sys-tools/neuron-top-user-guide.html"> Neuron-Top User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuron-sys-tools/neuron-ls.html"> Neuron-LS User Guide </a> </li> <li class="toctree-l3"> <a 
class="reference internal" href="../../tools/neuron-sys-tools/neuron-profile-user-guide.html"> Neuron Profile User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html"> Neuron-Sysfs User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuron-sys-tools/nccom-test.html"> NCCOM-TEST User Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/tools/aws-neuronx-tools.html"> What's New </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../tools/tensorboard/index.html"> TensorBoard </a> <input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox"> <label for="toctree-checkbox-66"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html"> Track Training Progress in TensorBoard using PyTorch Neuron </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html"> TensorBoard Plugin for Neuron (Trn1) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/tools/tensorboard-neuron.html"> What's New </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html"> TensorBoard Plugin for Neuron (Inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../tools/helper-tools/index.html"> Helper Tools </a> <input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox"> <label for="toctree-checkbox-67"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" 
href="../../tools/helper-tools/tutorial-neuron-check-model.html"> Check Model </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/helper-tools/tutorial-neuron-gatherinfo.html"> GatherInfo </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../tools/neuronperf/index.html"> NeuronPerf (Beta) </a> <input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox"> <label for="toctree-checkbox-68"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_overview.html"> Overview </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_terminology.html"> Terminology </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_examples.html"> Examples </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_benchmark_guide.html"> Benchmark Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_evaluate_guide.html"> Evaluate Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_compile_guide.html"> Compile Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_model_index_guide.html"> Model Index Guide </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_api.html"> API </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_framework_notes.html"> Framework Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../tools/neuronperf/neuronperf_faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../tools/neuronperf/neuronperf_troubleshooting.html"> Troubleshooting </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../tools/neuronperf/rn.html"> What’s New </a> <input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox"> <label for="toctree-checkbox-69"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/tools/neuronperf.html"> NeuronPerf 1.x Release Notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../calculator/neuron-calculator.html"> Neuron Calculator </a> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../setup/index.html"> Setup Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox"> <label for="toctree-checkbox-70"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../setup/torch-neuronx.html"> PyTorch Neuron (torch-neuronx) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../setup/torch-neuron.html"> PyTorch Neuron (torch-neuron) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../setup/tensorflow-neuronx.html"> Tensorflow Neuron (tensorflow-neuronx) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../setup/tensorflow-neuron.html"> Tensorflow Neuron (tensorflow-neuron) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../setup/mxnet-neuron.html"> MxNet Neuron (mxnet-neuron) </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../containers/index.html"> Containers Deployment </a> <input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox"> <label for="toctree-checkbox-71"> <i class="fas fa-chevron-down"> </i> 
</label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../containers/locate-neuron-dlc-image.html"> Locate Neuron DLC Image </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../containers/getting-started.html"> Getting Started </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../../containers/kubernetes-getting-started.html"> Kubernetes Getting Started </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../containers/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox"> <label for="toctree-checkbox-72"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../containers/tutorials/inference/index.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox"> <label for="toctree-checkbox-73"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../containers/tutorials/inference/tutorial-infer.html"> Run inference in pytorch neuron container </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../containers/tutorials/inference/k8s_rn50_demo.html"> Deploy a TensorFlow Resnet50 model as a Kubernetes service </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../containers/tutorials/training/index.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox"> <label for="toctree-checkbox-74"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../containers/tutorials/training/tutorial-training.html"> Run training in Pytorch Neuron container </a> </li> <li class="toctree-l4"> <a class="reference internal" 
href="../../containers/tutorials/training/k8s_mlp_train_demo.html"> Deploy a simple mlp training script as a Kubernetes job </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../containers/developerflows.html"> Developer Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox"> <label for="toctree-checkbox-75"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../containers/dlc-then-ec2-devflow.html"> Deploy Neuron Container on EC2 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../containers/dlc-then-ecs-devflow.html"> Deploy Neuron Container on Elastic Container Service (ECS) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../containers/dlc-then-eks-devflow.html"> Deploy Neuron Container on Elastic Kubernetes Service (EKS) </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../containers/container-sm-hosting-devflow.html"> Bring Your Own Neuron Container to Sagemaker Hosting (inf1) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../containers/faq-troubleshooting-releasenote.html"> FAQ, Troubleshooting and Release Note </a> <input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox"> <label for="toctree-checkbox-76"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../containers/faq.html"> FAQ </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../containers/troubleshooting.html"> Troubleshooting Neuron Containers </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../release-notes/containers/neuron-containers.html"> Neuron Containers Release Notes </a> </li> <li class="toctree-l3"> <a class="reference internal" 
href="../../release-notes/containers/neuron-k8.html"> Neuron K8 Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../devflows/index.html"> Developer Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox"> <label for="toctree-checkbox-77"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../containers/index.html"> Deploy Containers with Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox"> <label for="toctree-checkbox-78"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../containers/locate-neuron-dlc-image.html"> Locate Neuron DLC Image </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../containers/getting-started.html"> Getting Started </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../containers/kubernetes-getting-started.html"> Kubernetes Getting Started </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../containers/tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox"> <label for="toctree-checkbox-79"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../containers/tutorials/inference/index.html"> Inference </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../containers/tutorials/training/index.html"> Training </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../containers/developerflows.html"> Developer Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox"> <label 
for="toctree-checkbox-80"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../containers/dlc-then-ec2-devflow.html"> Deploy Neuron Container on EC2 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../containers/dlc-then-ecs-devflow.html"> Deploy Neuron Container on Elastic Container Service (ECS) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../containers/dlc-then-eks-devflow.html"> Deploy Neuron Container on Elastic Kubernetes Service (EKS) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../containers/container-sm-hosting-devflow.html"> Bring Your Own Neuron Container to Sagemaker Hosting (inf1) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../containers/faq-troubleshooting-releasenote.html"> FAQ, Troubleshooting and Release Note </a> <input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox"> <label for="toctree-checkbox-81"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../containers/faq.html"> FAQ </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../containers/troubleshooting.html"> Troubleshooting Neuron Containers </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/containers/neuron-containers.html"> Neuron Containers Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../release-notes/containers/neuron-k8.html"> Neuron K8 Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../devflows/ec2-flows.html"> AWS EC2 </a> <input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox"> <label for="toctree-checkbox-82"> <i class="fas fa-chevron-down"> </i> </label> 
<ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/inference/ec2-flows.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox"> <label for="toctree-checkbox-83"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/inference/ec2-then-ec2-devflow.html"> Compile with Framework API and Deploy on EC2 Inf1 </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../devflows/inference/ec2-then-ec2-devflow-inf2.html"> Compile with Framework API and Deploy on EC2 Inf2 </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/training/ec2-flows.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox"> <label for="toctree-checkbox-84"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/training/ec2/ec2-training.html"> Train your model on EC2 </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../devflows/eks-flows.html"> Amazon EKS </a> <input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox"> <label for="toctree-checkbox-85"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/inference/eks-flows.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox"> <label for="toctree-checkbox-86"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/inference/dlc-then-eks-devflow.html"> Deploy Neuron Container on Elastic Kubernetes Service (EKS) </a> </li> </ul> </li> <li class="toctree-l3"> <a 
class="reference internal" href="../devflows/training/eks-flows.html"> Training </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../devflows/ecs-flows.html"> AWS ECS </a> <input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox"> <label for="toctree-checkbox-87"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/inference/ecs-flows.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox"> <label for="toctree-checkbox-88"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/inference/dlc-then-ecs-devflow.html"> Deploy Neuron Container on Elastic Container Service (ECS) </a> </li> </ul> </li> <li class="toctree-l3"> <a class="reference internal" href="../devflows/training/ecs-flows.html"> Training </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../devflows/sagemaker-flows.html"> Sagemaker </a> <input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox"> <label for="toctree-checkbox-89"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/inference/sagemaker-flows.html"> Inference </a> <input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox"> <label for="toctree-checkbox-90"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/inference/byoc-hosting-devflow-inf2.html"> Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../devflows/inference/byoc-hosting-devflow.html"> Bring Your Own Neuron 
Container to Sagemaker Hosting (inf1) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../devflows/inference/neo-then-hosting-devflow.html"> Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/training/sagemaker-flows.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox"> <label for="toctree-checkbox-91"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/training/sm-devflow/sm-training-devflow.html"> Train your model on SageMaker </a> </li> </ul> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples"> AWS Neuron Sagemaker Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../devflows/parallelcluster-flows.html"> Parallel Cluster </a> <input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox"> <label for="toctree-checkbox-92"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../devflows/inference/parallelcluster-flows.html"> Inference </a> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../devflows/training/parallelcluster-flows.html"> Training </a> <input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox"> <label for="toctree-checkbox-93"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../devflows/training/parallelcluster/parallelcluster-training.html"> Train your model on ParallelCluster </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" 
href="../devflows/aws-batch-flows.html"> AWS Batch Flows </a> <input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox"> <label for="toctree-checkbox-94"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../devflows/inference/aws-batch-flows.html"> Inference </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../devflows/training/aws-batch-flows.html"> Training </a> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> Learning Neuron </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../arch/index.html"> Architecture </a> <input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox"> <label for="toctree-checkbox-95"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inf1-arch.html"> AWS Inf1 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/trn1-arch.html"> AWS Trn1/Trn1n Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inf2-arch.html"> AWS Inf2 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inferentia.html"> Inferentia Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/inferentia2.html"> Inferentia2 Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/trainium.html"> Trainium Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-hardware/neuroncores-arch.html"> AWS NeuronCore Architecture </a> </li> <li class="toctree-l2"> <a class="reference internal" 
href="../arch/model-architecture-fit.html"> Neuron Model Architecture Fit Guidelines </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/glossary.html"> Neuron Glossary </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../arch/neuron-features/index.html"> Features </a> <input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox"> <label for="toctree-checkbox-96"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/data-types.html"> Data Types </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/rounding-modes.html"> Rounding Modes </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuroncore-batching.html"> Neuron Batching </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuroncore-pipeline.html"> NeuronCore Pipeline </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/collective-communication.html"> Collective Communication </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/control-flow.html"> Neuron Control Flow </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/custom-c%2B%2B-operators.html"> Neuron Custom C++ Operators </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../arch/neuron-features/dynamic-shapes.html"> Neuron Dynamic Shapes </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../appnotes/index.html"> Application Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-97" 
name="toctree-checkbox-97" type="checkbox"> <label for="toctree-checkbox-97"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../announcements/neuron2.x/neuron2-intro.html"> Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/neuron1x/introducing-libnrt.html"> Introducing Neuron Runtime 2.x (libnrt.so) </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/perf/neuron-cc/performance-tuning.html"> Performance Tuning </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/perf/neuron-cc/parallel-ncgs.html"> Parallel Execution using NEURON_RT_NUM_CORES </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/torch-neuron/rcnn-app-note.html"> Running R-CNNs on Inf1 </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html"> Generative LLM inference with Neuron </a> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../faq.html"> FAQ </a> </li> <li class="toctree-l1"> <a class="reference internal" href="../troubleshooting.html"> Troubleshooting </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> About Neuron </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1"> <a class="reference internal" href="../../release-notes/release.html"> Release Details </a> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../roadmap-readme.html"> Roadmap </a> <input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox"> <label for="toctree-checkbox-98"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference external" 
href="https://github.com/orgs/aws-neuron/projects/1/views/1"> Neuron Public Roadmap </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../support.html"> Support </a> <input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox"> <label for="toctree-checkbox-99"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../sdk-policy.html"> SDK Maintenance Policy </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../security.html"> Security Disclosures </a> </li> <li class="toctree-l2"> <a class="reference internal" href="../contact.html"> Contact Us </a> </li> </ul> </li> </ul> </div> </nav></div> <div class="bd-sidebar__bottom"> <!-- To handle the deprecated key --> <div class="navbar_extra_footer"> Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a> </div> </div> </div> <div id="rtd-footer-container"></div> </div> <!-- A tiny helper pixel to detect if we've scrolled --> <div class="sbt-scroll-pixel-helper"></div> <!-- Main content --> <div class="col py-0 content-container"> <div class="header-article row sticky-top noprint"> <div class="col py-1 d-flex header-article-main"> <div class="header-article__left"> <label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation"> <span class="headerbtn__icon-container"> <i class="fas fa-bars"></i> </span> </label> </div> <div class="header-article__right"> <button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode"> <span class="headerbtn__icon-container"> <i class="fas fa-expand"></i> </span> </button> <div class="menu-dropdown menu-dropdown-repository-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories"> <i class="fab fa-github"></i> 
</button> <div class="menu-dropdown__content"> <ul> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository"> <span class="headerbtn__icon-container"> <i class="fab fa-github"></i> </span> <span class="headerbtn__text-container">repository</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fgeneral/faq/roadmap-faq.html&amp;body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue"> <span class="headerbtn__icon-container"> <i class="fas fa-lightbulb"></i> </span> <span class="headerbtn__text-container">open issue</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/general/faq/roadmap-faq.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page"> <span class="headerbtn__icon-container"> <i class="fas fa-pencil-alt"></i> </span> <span class="headerbtn__text-container">suggest edit</span> </a> </li> </ul> </div> </div> <div class="menu-dropdown menu-dropdown-download-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Download this page"> <i class="fas fa-download"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="../../_sources/general/faq/roadmap-faq.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file"> <span class="headerbtn__icon-container"> <i class="fas fa-file"></i> </span> <span class="headerbtn__text-container">.rst</span> </a> </li> <li> <button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF"> <span class="headerbtn__icon-container"> <i class="fas fa-file-pdf"></i> </span> <span 
class="headerbtn__text-container">.pdf</span> </button> </li> </ul> </div> </div> <label for="__page-toc" class="headerbtn headerbtn-page-toc"> <span class="headerbtn__icon-container"> <i class="fas fa-list"></i> </span> </label> </div> </div> <!-- Table of contents --> <div class="col-md-3 bd-toc show noprint"> <div class="tocsection onthispage pt-5 pb-3"> <i class="fas fa-list"></i> Contents </div> <nav id="bd-toc-nav" aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#why-did-you-build-this"> Why did you build this? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-do-the-roadmap-categories-mean"> What do the roadmap categories mean? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#why-are-there-no-dates-on-your-roadmap"> Why are there no dates on your roadmap? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#is-everything-on-the-roadmap"> Is everything on the roadmap? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-can-i-provide-feedback-or-ask-for-more-information"> How can I provide feedback or ask for more information? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-can-i-request-a-feature-be-added-to-the-roadmap"> How can I request a feature be added to the roadmap? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#can-i-1-existing-issues"> Can I “+1” existing issues? 
</a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>Roadmap FAQ</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#why-did-you-build-this"> Why did you build this? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#what-do-the-roadmap-categories-mean"> What do the roadmap categories mean? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#why-are-there-no-dates-on-your-roadmap"> Why are there no dates on your roadmap? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#is-everything-on-the-roadmap"> Is everything on the roadmap? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-can-i-provide-feedback-or-ask-for-more-information"> How can I provide feedback or ask for more information? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#how-can-i-request-a-feature-be-added-to-the-roadmap"> How can I request a feature be added to the roadmap? </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#can-i-1-existing-issues"> Can I “+1” existing issues? 
*This document is relevant for*: ``Inf1``, ``Inf2``, ``Trn1``, ``Trn1n``

Roadmap FAQ
===========

Why did you build this?
-----------------------

A: We know that our customers are making decisions and plans based on what we are developing, and we want to provide them with the right visibility into what we are working on, as well as the opportunity to provide direct feedback.

What do the roadmap categories mean?
------------------------------------

* **Roadmap Requests** - Requests we received and are considering adding to the roadmap. This is a great phase to give us feedback and let us know if you need this feature as well.
* **Working on it** - In progress. We might still be working through the implementation details or scoping things out. This is a great phase to give us feedback on how you want to see something implemented. We'll benefit from your specific use cases here.
* **Completed** - Feature complete and supported by Neuron.

Why are there no dates on your roadmap?
---------------------------------------

A: We are not providing exact target dates for releases because we prioritize operational excellence, security, and quality over hitting a specific date. If you have an urgent need for a feature, please contact us directly at aws-neuron-support@amazon.com.

Is everything on the roadmap?
-----------------------------

A: We are focusing on upgrades for existing features, as well as building new features. We will keep adding features and capabilities to this roadmap as time progresses.

How can I provide feedback or ask for more information?
-------------------------------------------------------

A: When in doubt, please create an issue or post a question on the `AWS Neuron support forum <https://forums.aws.amazon.com/forum.jspa?forumID=355>`__.

How can I request a feature be added to the roadmap?
----------------------------------------------------

A: We encourage you to open an issue. All community-submitted issues will be reviewed by the roadmap maintainers.

Can I "+1" existing issues?
---------------------------

A: We strongly encourage you to do so, as it helps us understand which issues will have the widest impact. You can navigate to the issue details page and add a reaction (thumbs up). There are several reaction types supported (thumbs down "-1", confused, heart, watching, laugh, hooray, and thumbs up "+1").
2023-09-29T20:55:22.371Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/dlami-neuron-2.12.rst.txt
```
.. post:: July 26, 2023 11:00
   :language: en
   :tags: dlami, pytorch, trn1, inf2, inf1

.. _announce-dlami-neuron-2.12:

AWS Deep Learning AMIs now available with Neuron 2.12 version
-------------------------------------------------------------

We are happy to announce that the following Deep Learning AMIs are now available with the latest Neuron version, 2.12. You can see more about the AMIs at the following URLs:

* `AWS Deep Learning AMI Neuron PyTorch 1.13 (Ubuntu 20.04) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-13-ubuntu-20-04/>`__
* `AWS Deep Learning AMI Neuron PyTorch 1.13 (Amazon Linux 2) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-13-amazon-linux-2/>`__
* `AWS Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 20.04) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-tensorflow-2-10-ubuntu-20-04/>`__
* `AWS Deep Learning AMI Neuron TensorFlow 2.10 (Amazon Linux 2) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-tensorflow-2-10-amazon-linux-2/>`__
* `AWS Deep Learning AMI Base Neuron (Ubuntu 20.04) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-ubuntu-20-04/>`__
* `AWS Deep Learning AMI Base Neuron (Amazon Linux 2) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-amazon-linux-2/>`__
```
2023-09-29T20:55:22.398Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/announce-eol-tf-before-2-7.rst.txt
```
.. post:: May 01, 2023 01:00
   :language: en
   :tags: announce-eol tensorflow-neuron

.. _announce-eol-tf-before-2-7:

Announcing end of support for ``tensorflow-neuron`` version 2.7
----------------------------------------------------------------

:ref:`Neuron release 2.10 <neuron-2.10.0-whatsnew>` will be the last release that includes ``tensorflow-neuron`` version 2.7. Future Neuron releases will not include ``tensorflow-neuron`` version 2.7.

Current users of that version are advised to migrate to the latest ``tensorflow-neuron`` version.
```
2023-09-29T20:55:22.404Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/announce-eol-python-3-7.rst.txt
```
.. post:: Jul 26, 2023 10:00
   :language: en
   :tags: announce-eol, python37

.. _announce-eol-python37:

Announcing end of support for ``Python 3.7``
----------------------------------------------

:ref:`Neuron release 2.12 <neuron-2.12.0-whatsnew>` will be the last release that includes support for ``Python 3.7``. Future Neuron releases will not include support for ``Python 3.7``.

Current users of ``Python 3.7`` are advised to migrate to the latest supported Python version (``Python 3.10``).
```
2023-09-29T20:55:22.418Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/announce-eos-megatronlm-2-13.rst.txt
```
.. post:: Aug 28, 2023
   :language: en
   :tags: announce-eos, trn1, trn1n

.. _announce-eol-megatronlm:

AWS Neuron reference for Megatron-LM no longer supported
----------------------------------------------------------

:ref:`Neuron release 2.13 <neuron-2.13.0-whatsnew>` no longer includes support for `AWS Neuron reference for Megatron-LM <https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm>`_.

Current Neuron Megatron-LM users are required to migrate to `AWS Neuron reference for NeMo Megatron <https://github.com/aws-neuron/neuronx-nemo-megatron>`_ or `Neuron Distributed <https://github.com/aws-neuron/neuronx-distributed>`_.
```
2023-09-29T20:55:22.488Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/announce-eol-megatron-lm.rst.txt
```
.. post:: Aug 8, 2023
   :language: en
   :tags: announce-eol, trn1, trn1n

.. _announce-eol-megatronlm:

Announcing end of support for AWS Neuron reference for Megatron-LM
-------------------------------------------------------------------

:ref:`Neuron release 2.12 <neuron-2.12.0-whatsnew>` will be the last release that will include support for `AWS Neuron reference for Megatron-LM <https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm>`_. Future releases will not include Neuron support for Megatron-LM.

Current Neuron Megatron-LM users are advised to migrate to `AWS Neuron reference for NeMo Megatron <https://github.com/aws-neuron/neuronx-nemo-megatron>`_ or `Neuron Distributed <https://github.com/aws-neuron/neuronx-distributed>`_.
```
2023-09-29T20:55:22.572Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/announce-deprecation-transformer-flag.rst.txt
```
.. post:: September 15, 2023
   :language: en
   :tags: announce-deprecation, transformer-flag

.. _announce-deprecation-transformer-flag:

Announcing deprecation of the ``--model-type=transformer-inference`` compiler flag
------------------------------------------------------------------------------------

Starting with :ref:`Neuron release 2.14 <neuron-2.14.0-whatsnew>`, the ``--model-type=transformer-inference`` compiler flag is deprecated. Users of the ``--model-type=transformer-inference`` compiler flag are highly encouraged to migrate to the ``--model-type=transformer`` compiler flag.
```
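For users who pass extra compiler arguments through the ``NEURON_CC_FLAGS`` environment variable, the migration from the deprecated flag can be sketched as a one-line string swap. This is a minimal illustration, not an official migration tool; the default flag string below is an assumption for the example, and only the two flag values come from the announcement above.

```python
import os

# Minimal sketch: swap the deprecated compiler flag for its replacement
# inside NEURON_CC_FLAGS, an environment variable commonly used to pass
# extra arguments to the Neuron compiler. The default below is illustrative.
flags = os.environ.get("NEURON_CC_FLAGS", "--model-type=transformer-inference")
flags = flags.replace("--model-type=transformer-inference", "--model-type=transformer")
os.environ["NEURON_CC_FLAGS"] = flags
print(flags)  # --model-type=transformer
```

The swap is a no-op for flag strings that already use ``--model-type=transformer``, so it is safe to apply unconditionally in a launch script.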
2023-09-29T20:55:22.684Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/announce-eol-mx-before-1-5.rst.txt
```
.. post:: May 01, 2023 01:00
   :language: en
   :tags: announce-eol mxnet-neuron

.. _announce-eol-mxnet-before-1-5:

Announcing end of support for ``mxnet-neuron`` version 1.5
-----------------------------------------------------------

:ref:`Neuron release 2.10 <neuron-2.10.0-whatsnew>` will be the last release that includes ``mxnet-neuron`` version 1.5. Future Neuron releases will not include ``mxnet-neuron`` version 1.5.

Current users of that version are advised to migrate to the latest ``mxnet-neuron`` version.
```
2023-09-29T20:55:22.804Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/announce-eol-ubuntu-18.rst.txt
```
.. post:: Jul 13, 2023 11:00
   :language: en
   :tags: announce-eol, ubuntu18

.. _announce-eol-ubuntu18:

Announcing end of support for ``Ubuntu 18``
---------------------------------------------

:ref:`Neuron release 2.12 <neuron-2.12.0-whatsnew>` will be the last release that includes support for ``Ubuntu 18``. Future Neuron releases will not include support for ``Ubuntu 18``.

Current users of ``Ubuntu 18`` are advised to migrate to ``Ubuntu 20``.
```
2023-09-29T20:55:22.838Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/announce-eol-tf-before-2-5.rst.txt
```
.. post:: Nov 22, 2022 01:00
   :language: en
   :tags: announce-eol tensorflow-neuron

.. _announce-eol-tf-before-2-5:

Announcing end of support for ``tensorflow-neuron`` versions 2.5 and 2.6
------------------------------------------------------------------------

:ref:`Neuron release 2.5 <neuron-2.5.0-whatsnew>` will be the last release that includes ``tensorflow-neuron`` versions 2.5 and 2.6. Future Neuron releases will not include ``tensorflow-neuron`` versions 2.5 and 2.6.

Current users of those versions are advised to migrate to the latest ``tensorflow-neuron`` version.
```
2023-09-29T20:55:22.848Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/prev/rn.rst.txt
``` .. _prev-rn: Previous Release Notes (Neuron 2.x) ==================================== .. contents:: Table of contents :local: :depth: 1 .. _neuron-2.13.0-whatsnew: Neuron 2.13.2 (09/01/2023) --------------------------- This is a patch release that fixes Neuron Device Plugin crashes and other pod scheduling issues in Kubernetes (K8s) deployments. This release also adds support for zero-based Neuron Device indexing in K8s deployments; see the :ref:`Neuron K8 release notes <neuron-k8-rn>` for more details on the specific bug fixes. Updating to the latest Neuron Kubernetes components and Neuron Driver is highly encouraged for customers using Kubernetes. Please :ref:`follow these instructions in the setup guide <setup-guide-index>` to upgrade to the latest Neuron release. Neuron 2.13.1 (08/29/2023) -------------------------- This release adds support for ``Llama 2`` model training (`tutorial <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/examples/jobs/neuronx-nemo-megatron-llamav2-job.md>`_) using the `neuronx-nemo-megatron <https://github.com/aws-neuron/neuronx-nemo-megatron>`_ library, and adds support for ``Llama 2`` model inference using the ``transformers-neuronx`` library (`tutorial <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb>`_). Please :ref:`follow these instructions in the setup guide <setup-guide-index>` to upgrade to the latest Neuron release. .. note:: Please install ``transformers-neuronx`` from https://pip.repos.neuron.amazonaws.com to get the latest features and improvements. This release does not support the ``Llama 2`` model with Grouped-Query Attention. Neuron 2.13.0 (08/28/2023) -------------------------- .. contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release introduces support for ``GPT-NeoX`` 20B model training in ``neuronx-distributed``, including Zero-1 optimizer capability.
It also adds support for ``Stable Diffusion XL`` and ``CLIP`` models inference in ``torch-neuronx``. Neuron 2.13 also introduces `AWS Neuron Reference for Nemo Megatron <https://github.com/aws-neuron/neuronx-nemo-megatron>`_ library supporting distributed training of LLMs like ``GPT-3 175B``. This release also introduces other new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - AWS Neuron Reference for Nemo Megatron library - * Modified versions of the open-source packages `NeMo <https://github.com/NVIDIA/NeMo>`_ and `Apex <https://github.com/NVIDIA/apex>`_ that have been adapted for use with AWS Neuron and AWS EC2 Trn1 instances. * ``GPT-3`` model training support ( `tutorial <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/examples/jobs/neuronx-nemo-megatron-gpt-job.md>`_ ) * See more at `neuronx-nemo-megatron github repo <https://github.com/aws-neuron/neuronx-nemo-megatron>`_ - Trn1/Trn1n * - Transformers Neuron (transformers-neuronx) for Inference - * Latency optimizations for ``Llama`` and ``GPT-2`` models inference. * Neuron Persistent Cache support (:ref:`developer guide <transformers_neuronx_developer_guide>`) * See more at :ref:`transformers-neuronx-rn` - Inf2, Trn1/Trn1n * - Neuron Distributed (neuronx-distributed) for Training - * Now Stable, removed Experimental support * ZeRO-1 Optimizer support with tensor parallel. (:ref:`tutorial <gpt_neox_tp_zero1_tutorial>`) * Sequence Parallel support. (:ref:`api guide <api_guide>`) * GPT-NeoX model training support. 
(`sample script <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training>`_) (:ref:`tutorial <gpt_neox_tp_zero1_tutorial>`) * See more at :ref:`neuronx-distributed-rn` and :ref:`api_guide` - Trn1/Trn1n * - Neuron Distributed (neuronx-distributed) for Inference - * KV Cache Support for LLM Inference (:ref:`release notes <neuronx-distributed-rn>`) - Inf2,Trn1/Trn1n * - PyTorch Neuron (torch-neuronx) - * Seedable dropout enabled by default for training * KV Cache inference support ( :pytorch-neuron-src:`tutorial <torch-neuronx/t5-inference-tutorial.ipynb>` ) * ``camembert-base`` training script. (`sample script <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training/hf_text_classification/CamembertBase.ipynb>`_) * New models inference support that include `Stable Diffusion XL <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/inference/hf_pretrained_sdxl_1024_inference.ipynb>`_ , CLIP (`clip-vit-base-patch32 <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/inference/hf_pretrained_clip_base_inference_on_inf2.ipynb>`_ , `clip-vit-large-patch14 <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/inference/hf_pretrained_clip_large_inference_on_inf2.ipynb>`_ ) , `Vision Perceiver <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/inference/hf_pretrained_perceiver_vision_inference.ipynb>`_ , `Language Perceiver <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/inference/hf_pretrained_perceiver_language_inference.ipynb>`_ and :pytorch-neuron-src:`T5 <torch-neuronx/t5-inference-tutorial.ipynb>` - Trn1/Trn1n,Inf2 * - Neuron Tools - * New data types support for Neuron Collective Communication Test Utility (NCCOM-TEST) --check option: fp16, bf16, (u)int8, (u)int16, and (u)int32 * Neuron SysFS support for FLOP count(flop_count) and connected Neuron Device ids (connected_devices). 
See :ref:`neuron-sysfs-ug` * See more at :ref:`neuron-tools-rn` - Inf1/Inf2/Trn1/Trn1n * - Neuron Runtime - * Runtime version and Capture Time support to NTFF * Async DMA copies support to improve Neuron Device copy times for all instance types * Logging and error messages improvements for Collectives timeouts and when loading NEFFs. * See more at :ref:`neuron-runtime-rn` - Inf1, Inf2, Trn1/Trn1n * - End of Support Announcements and Documentation Updates - * Announcing end of support for ``AWS Neuron reference for Megatron-LM`` starting Neuron 2.13. See more at :ref:`announce-eol-megatronlm` * Announcing end of support for ``torch-neuron`` version 1.9 starting Neuron 2.14. See more at :ref:`announce-eol-pytorch19` * Added TensorFlow 2.x (``tensorflow-neuronx``) analyze_model API section. See more at :ref:`tensorflow-ref-neuron-analyze_model-api` * Upgraded ``numpy`` version to ``1.21.6`` in various training scripts for `Text Classification <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training>`_ * Updated ``bert-japanese`` training script to use ``multilingual-sentiments`` dataset. See `hf-bert-jp <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training/hf_bert_jp>`_ * See more at :ref:`neuron-documentation-rn` - Inf1, Inf2, Trn1/Trn1n * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1/Trn1n, Inf2, Inf1 * - Known Issues and Limitations - * See :ref:`neuron-2.13.0-known-issues` - Trn1/Trn1n, Inf2, Inf1 * - Release Artifacts - * See :ref:`latest-neuron-release-artifacts` - Trn1/Trn1n, Inf2, Inf1

For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`.

To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`.

.. _neuron-2.13.0-known-issues:

2.13.0 Known Issues and Limitations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

* A NaN can be generated when a model implementation uses ``float32.min`` or ``float32.max`` (for example, ``torch.finfo(torch.float32).min``) together with ``XLA_USE_BF16`` or ``XLA_DOWNCAST_BF16``. This happens because ``float32.min`` and ``float32.max`` are downcast to -/+Inf in bf16, and subsequent arithmetic on those Inf values produces a NaN. As a short-term fix, use a moderately small/large fp32 number instead of ``float32.min``/``float32.max``; for example, for mask creation use ``-1e4``/``+1e4`` instead of the min/max values. The issue will be addressed in a future Neuron release.

.. _neuron-2.12.0-whatsnew:

Neuron 2.12.2 (08/19/2023)
--------------------------

This is a patch release that fixes a jemalloc conflict for all Neuron customers using Ubuntu 22. Previous releases shipped with a dependency on jemalloc that could lead to compilation failures on Ubuntu 22 only. Please :ref:`follow these instructions in the setup guide <setup-guide-index>` to upgrade to the latest Neuron release.

Neuron 2.12.1 (08/09/2023)
--------------------------

This is a patch release that improves the reliability of the Neuron Runtime when running applications on memory-constrained instances. The Neuron Runtime now requires less contiguous memory to initialize the NeuronCores associated with an application, which allows bring-up when only small amounts of contiguous memory remain on an instance. Please :ref:`upgrade to the latest Neuron release <setup-guide-index>` to use the latest Neuron Runtime.

Neuron 2.12.0 (07/19/2023)
--------------------------

.. contents:: Table of contents
   :local:
   :depth: 3

What's New
^^^^^^^^^^

This release introduces the ZeRO-1 optimizer for model training in ``torch-neuronx``, and introduces experimental support for ``GPT-NeoX``, ``BLOOM``, ``Llama`` and ``Llama 2`` (coming soon) models in ``transformers-neuronx``.
This release also adds support for model inference serving on Triton Inference Server for Inf2 & Trn1 instances, ``lazy_load`` API and ``async_load`` API for model loading in ``torch-neuronx``, as well as other new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - ZeRO-1 optimizer for model training in ``torch-neuronx`` - * Support of ZeRO-Stage-1 optimizer ( ZeroRedundancyOptimizer() API) for training models using ``torch-neuronx`` * See tutorial at :ref:`zero1-gpt2-pretraining-tutorial` - Inf2, Trn1/Trn1n * - Support for new models and Enhancements in ``transformers-neuronx`` - * [Experimental] Support for inference of ``GPT-NeoX``, ``BLOOM`` and ``Llama`` models. * [Experimental] Support for ``Llama 2`` coming soon. Please monitor the `transformers-neuronx repository <https://github.com/aws-neuron/transformers-neuronx/tree/main/src/transformers_neuronx>`_ for updates. * Removed constraints on ``tp_degree`` in tensor-parallel configurations for ``GPT2``, ``OPT``, and ``BLOOM`` . See more at :ref:`transformers-neuronx-rn` * Added multi-query / multi-group attention support for ``GPT2``. * See more at :ref:`transformers-neuronx-rn` - Inf2, Trn1/Trn1n * - Support for Inf2 and Trn1 instances on Triton Inference Server - * Support for Model Inference serving on Triton for Inf2 and Trn1 instances. 
See more at `Triton Server Python Backend <https://github.com/triton-inference-server/python_backend/tree/main/inferentia#using-triton-with-inferentia-2-or-trn1>`_ * See tutorial at `Triton on SageMaker - Deploying on Inf2 <https://github.com/aws/amazon-sagemaker-examples/tree/main/sagemaker-triton/inferentia2>`_ - Inf2, Trn1 * - Support for new computer vision models - * Performance optimizations in the Stable Diffusion 2.1 model script and added [experimental] support for Stable Diffusion 1.5 models. * [Experimental] Script for training the CLIP model for Image Classification. * [Experimental] Script for inference of the Multimodal Perceiver model. * Please check the `aws-neuron-samples repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx>`_ - Inf2, Trn1/Trn1n * - New Features in ``neuronx-distributed`` for training - * Added a parallel cross entropy loss function. * See more at :ref:`tp_api_guide` - Trn1/Trn1n * - ``lazy_load`` and ``async_load`` API for model loading in inference and performance enhancements in ``torch-neuronx`` - * Added ``lazy_load`` and ``async_load`` APIs to accelerate model loading for Inference. See more at :ref:`torch_neuronx_lazy_async_load_api` * Optimized the DataParallel API to load onto multiple cores simultaneously when the device IDs specified are consecutive. * See more at :ref:`torch-neuronx-rn` - Inf2, Trn1/Trn1n * - [Experimental] Asynchronous Execution support and Enhancements in Neuron Runtime - * Added an experimental asynchronous execution feature which can reduce latency by roughly 12% for training workloads.
See more at :ref:`nrt-configuration` * AllReduce with All-to-all communication pattern enabled for 16 ranks on TRN1/TRN1N within the instance (intranode) * See more at :ref:`neuron-runtime-rn` - Inf1, Inf2, Trn1/Trn1n * - Support for ``distribution_strategy`` compiler option in ``neuronx-cc`` - * Support for optional ``--distribution_strategy`` compiler option to enable compiler specific optimizations based on distribution strategy used. * See more at :ref:`neuron-compiler-cli-reference-guide` - Inf2, Trn1/Trn1n * - New Micro Benchmarking Performance User Guide and Documentation Updates - * Added best practices user guide for benchmarking performance of Neuron devices. See more at `Benchmarking Guide and Helper scripts <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/microbenchmark>`_ * Announcing end of support for Ubuntu 18. See more at :ref:`announce-eol-ubuntu18` * Removed support for Distributed Data Parallel(DDP) Tutorial. * Improved sidebar navigation in Documentation. * See more at :ref:`neuron-documentation-rn` - Inf1, Inf2, Trn1/Trn1n * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1/Trn1n , Inf2, Inf1 * - Known Issues and Limitations - * See :ref:`neuron-2.12.0-known-issues` - Trn1/Trn1n , Inf2, Inf1 * - Release Artifacts - * see :ref:`latest-neuron-release-artifacts` - Trn1/Trn1n , Inf2, Inf1 For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`. .. _neuron-2.12.0-known-issues: 2.12.0 Known Issues and Limitations ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Known Issues in Ubuntu 22 Support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * Several Vision and NLP models on Ubuntu 22 are not supported due to Compilation issues. Issues will be addressed in upcoming releases. * CustomOp feature failing with seg fault on Ubuntu 22. 
Issue will be addressed in upcoming releases.

Known issues in certain resnet models on Ubuntu 20
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* Known issue with support for resnet-18, resnet-34, resnet-50, resnet-101 and resnet-152 models on Ubuntu 20. Issues will be addressed in upcoming releases.

.. _neuron-2.11.0-whatsnew:

Neuron 2.11.0 (06/14/2023)
--------------------------

.. contents:: Table of contents
   :local:
   :depth: 3

What's New
^^^^^^^^^^

This release introduces Neuron Distributed, a new Python library that simplifies training and inference of large models. It also improves usability with features such as S3 model caching, a standalone profiler tool and support for Ubuntu 22, as well as other new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following:

.. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - New Features and Performance Enhancements in ``transformers-neuronx`` - * Support for ``int8`` inference. See example at :ref:`int8_weight_storage_support` * Improved prompt context encoding performance. See more at :ref:`transformers_neuronx_developer_guide` * Improved collective communications performance for Tensor Parallel inference on Inf2 and Trn1. * See more at :ref:`transformers-neuronx-rn` - Inf2, Trn1/Trn1n * - Neuron Profiler Tool - * Profiling and visualization of model execution on Trainium and Inferentia devices now supported as a stand-alone tool.
* See more at :ref:`neuron-profile-ug` - Inf1, Inf2, Trn1/Trn1n * - Neuron Compilation Cache through S3 - * Support for sharing compiled models across Inf2 and Trn1 nodes through S3 * See more at :ref:`pytorch-neuronx-parallel-compile-cli` - Inf2, Trn1/Trn1n * - New script to scan a model for supported/unsupported operators - * Script to scan a model for supported/unsupported operators before training, scan output includes supported and unsupported operators at both XLA operators and PyTorch operators level. * See a sample tutorial at :ref:`torch-analyze-for-training-tutorial` - Inf2, Trn1/Trn1n * - Neuron Distributed Library [Experimental] - * New Python Library based on PyTorch enabling distributed training and inference of large models. * Initial support for tensor-parallelism. * See more at :ref:`neuronx-distributed-index` - Inf2, Trn1/Trn1n * - Neuron Calculator and Documentation Updates - * New :ref:`neuron_calculator` Documentation section to help determine number of Neuron Cores needed for LLM Inference. * Added App Note :ref:`neuron_llm_inference` * See more at :ref:`neuron-documentation-rn` - Inf1, Inf2, Trn1/Trn1n * - Enhancements to Neuron SysFS - * Support for detailed breakdown of memory usage across the NeuronCores * See more at :ref:`neuron-sysfs-ug` - Inf1, Inf2, Trn1/Trn1n * - Support for Ubuntu 22 - * See more at :ref:`setup-guide-index` for setup instructions on Ubuntu22 - Inf1, Inf2, Trn1/Trn1n * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1/Trn1n , Inf2, Inf1 * - Release Artifacts - * see :ref:`latest-neuron-release-artifacts` - Trn1/Trn1n , Inf2, Inf1 For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`. .. _neuron-2.10.0-whatsnew: Neuron 2.10.0 (05/01/2023) ------------------------- .. 
contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release introduces new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - Initial support for computer vision models inference - * Added Stable Diffusion 2.1 model script for Text to Image Generation * Added VGG model script for Image Classification Task * Added UNet model script for Image Segmentation Task * Please check `aws-neuron-samples repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx>`_ - Inf2, Trn1/Trn1n * - Profiling support in PyTorch Neuron(``torch-neuronx``) for Inference with TensorBoard - * See more at :ref:`torch-neuronx-profiling-with-tb` - Inf2, Trn1/Trn1n * - New Features and Performance Enhancements in transformers-neuronx - * Support for the HuggingFace generate function. * Model Serialization support for GPT2 models. (including model saving, loading, and weight swapping) * Improved prompt context encoding performance. * See :ref:`transformers_neuronx_readme` for examples and usage * See more at :ref:`transformers-neuronx-rn` - Inf2, Trn1/Trn1n * - Support models larger than 2GB in TensorFlow 2.x Neuron (``tensorflow-neuronx``) - * See :ref:`tensorflow-neuronx-special-flags` for details. (``tensorflow-neuronx``) - Trn1/Trn1n, Inf2 * - Support models larger than 2GB in TensorFlow 2.x Neuron (``tensorflow-neuron``) - * See :ref:`Special Flags <tensorflow-ref-neuron-tracing-api>` for details. 
(``tensorflow-neuron``) - Inf1 * - Performance Enhancements in PyTorch C++ Custom Operators (Experimental) - * Support for using multiple GPSIMD Cores in Custom C++ Operators * See :ref:`custom-ops-api-ref-guide` - Trn1/Trn1n * - Weight Deduplication Feature (Inf1) - * Support for Sharing weights when loading multiple instance versions of the same model on different NeuronCores. * See more at :ref:`nrt-configuration` - Inf1 * - ``nccom-test`` - Collective Communication Benchmarking Tool - * Supports enabling benchmarking sweeps on various Neuron Collective Communication operations. See :ref:`nccom-test` for more details. - Trn1/Trn1n , Inf2 * - Announcing end of support for tensorflow-neuron 2.7 & mxnet-neuron 1.5 versions - * See :ref:`announce-eol-tf-before-2-7` * See :ref:`announce-eol-mxnet-before-1-5` - Inf1 * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1/Trn1n , Inf2, Inf1 * - Release Artifacts - * see :ref:`latest-neuron-release-artifacts` - Trn1/Trn1n , Inf2, Inf1 For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`. .. _neuron-2.9.0-whatsnew: Neuron 2.9.1 (04/19/2023) ------------------------- Minor patch release to add support for deserialized torchscript model compilation and support for multi-node training in EKS. Fixes included in this release are critical to enable training and deploying models with Amazon Sagemaker or Amazon EKS. Neuron 2.9.0 (03/28/2023) ------------------------- .. contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release adds support for EC2 Trn1n instances, introduces new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following: .. 
list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - Support for EC2 Trn1n instances - * Updated Neuron Runtime for Trn1n instances * Overall documentation update to include Trn1n instances - Trn1n * - New Analyze API in PyTorch Neuron (``torch-neuronx``) - * A new API that returns a list of supported and unsupported PyTorch operators for a model. See :ref:`torch_neuronx_analyze_api` - Trn1, Inf2 * - Support for models that are larger than 2GB in PyTorch Neuron (``torch-neuron``) on Inf1 - * Use the ``separate_weights`` flag of :func:`torch_neuron.trace` to support models that are larger than 2GB - Inf1 * - Performance Improvements - * Up to 10% higher throughput when training the GPT3 6.7B model on multiple nodes - Trn1 * - Dynamic Batching support in TensorFlow 2.x Neuron (``tensorflow-neuronx``) - * See :ref:`tensorflow-neuronx-special-flags` for details. - Trn1, Inf2 * - NeuronPerf support for Trn1/Inf2 instances - * Added Trn1/Inf2 support for PyTorch Neuron (``torch-neuronx``) and TensorFlow 2.x Neuron (``tensorflow-neuronx``) - Trn1, Inf2 * - Hierarchical All-Reduce and Reduce-Scatter collective communication - * Added support for hierarchical All-Reduce and Reduce-Scatter in Neuron Runtime to enable better scalability of distributed workloads. - Trn1, Inf2 * - New Tutorials added - * :ref:`Added tutorial to fine-tune T5 model <torch-hf-t5-finetune>` * Added tutorial to demonstrate use of Libtorch with PyTorch Neuron (``torch-neuronx``) for inference :ref:`[html] <pytorch-tutorials-libtorch>` - Trn1, Inf2 * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1, Inf2, Inf1 * - Release included packages - * See :ref:`neuron-release-content` - Trn1, Inf2, Inf1 For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`.
To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`. .. _neuron-2.8.0-whatsnew: Neuron 2.8.0 (02/24/2023) ------------------------- .. contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release adds support for `EC2 Inf2 <https://aws.amazon.com/ec2/instance-types/inf2/>`_ instances, introduces initial inference support with TensorFlow 2.x Neuron (``tensorflow-neuronx``) on Trn1 and Inf2, and introduces minor enhancements and bug fixes. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details * - Support for `EC2 Inf2 <https://aws.amazon.com/ec2/instance-types/inf2/>`_ instances - * Inference support for Inf2 instances in PyTorch Neuron (``torch-neuronx``) * Inference support for Inf2 instances in TensorFlow 2.x Neuron (``tensorflow-neuronx``) * Overall documentation update to include Inf2 instances * - TensorFlow 2.x Neuron (``tensorflow-neuronx``) support - * This releases introduces initial inference support with TensorFlow 2.x Neuron (``tensorflow-neuronx``) on Trn1 and Inf2 * - New Neuron GitHub samples - * New sample scripts for deploying LLM models with ``transformer-neuronx`` under `aws-neuron-samples <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx/inference>`_ GitHub repository. * New sample scripts for deploying models with ``torch-neuronx`` under `aws-neuron-samples repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx>`_ GitHub repository. * - Minor enhancements and bug fixes. - * See :ref:`components-rn` * - Release included packages - * see :ref:`neuron-release-content` For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. .. _neuron-2.7.0-whatsnew: Neuron 2.7.0 (02/08/2023) ------------------------- .. 
contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release introduces new capabilities and libraries, as well as features and tools that improve usability. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details * - PyTorch 1.13 - Support of PyTorch 1.13 version for PyTorch Neuron (``torch-neuronx``). For resources see :ref:`pytorch-neuronx-main` * - PyTorch DistributedDataParallel (DDP) API - Support of the PyTorch DistributedDataParallel (DDP) API in PyTorch Neuron (``torch-neuronx``). For resources on how to use the PyTorch DDP API with Neuron, please check :ref:`neuronx-ddp-tutorial`. * - Inference support in ``torch-neuronx`` - For more details please visit the :ref:`pytorch-neuronx-main` page. You can also try Neuron Inference samples `<https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx>`_ in the ``aws-neuron-samples`` GitHub repo. * - Neuron Custom C++ Operators [Experimental] - Initial support for Neuron Custom C++ Operators [Experimental]. With Neuron Custom C++ Operators ("CustomOps") you can now write CustomOps that run on NeuronCore-v2 chips. For more resources please check the :ref:`neuron_c++customops` section. * - ``transformers-neuronx`` [Experimental] - ``transformers-neuronx`` is a new library enabling LLM model inference. It contains models that are checkpoint-compatible with HuggingFace Transformers, and currently supports Transformer Decoder models like GPT2, GPT-J and OPT. Please check the `transformers-neuronx repository <https://github.com/aws-neuron/transformers-neuronx>`_ * - Neuron sysfs filesystem - The Neuron sysfs filesystem exposes Neuron Devices under ``/sys/devices/virtual/neuron_device``, providing visibility into the Neuron Driver and Runtime at the system level.
By performing several simple CLIs such as reading or writing to a sysfs file, you can get information such as Neuron Runtime status, memory usage, Driver info etc. For resources about Neuron sysfs filesystem visit :ref:`neuron-sysfs-ug`. * - TFLOPS support in Neuron System Tools - Neuron System Tools now also report model actual TFLOPs rate in both ``neuron-monitor`` and ``neuron-top``. More details can be found in the :ref:`Neuron Tools documentation <neuron-tools>`. * - New sample scripts for training - This release adds multiple new sample scripts for training models with ``torch-neuronx``, Please check `aws-neuron-samples repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx>`_ * - New sample scripts for inference - This release adds multiple new sample scripts for deploying models with ``torch-neuronx``, Please check `aws-neuron-samples repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx>`_ * - Neuron GitHub samples repository for Amazon EKS - A new AWS Neuron GitHub samples repository for Amazon EKS, Please check `aws-neuron-samples repository <https://github.com/aws-neuron/aws-neuron-eks-samples>`_ For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. .. _neuron-2.6.0-whatsnew: Neuron 2.6.0 (12/12/2022) ------------------------- This release introduces the support of PyTorch 1.12 version, and introduces PyTorch Neuron (``torch-neuronx``) profiling through Neuron Plugin for TensorBoard. Pytorch Neuron (``torch-neuronx``) users can now profile their models through the following TensorBoard views: * Operator Framework View * Operator HLO View * Operator Trace View This release introduces the support of LAMB optimizer for FP32 mode, and adds support for :ref:`capturing snapshots <torch-neuronx-snapshotting>` of inputs, outputs and graph HLO for debugging. 
In addition, this release introduces the support of new operators and resolves issues that improve stability for Trn1 customers. For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. .. _neuron-2.5.0-whatsnew: Neuron 2.5.0 (11/23/2022) ------------------------- Neuron 2.5.0 is a major release which introduces new features and resolves issues that improve stability for Inf1 customers. .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - Component - New in this release * - PyTorch Neuron ``(torch-neuron)`` - * PyTorch 1.12 support * Python 3.8 support * :ref:`LSTM <torch_neuron_lstm_support>` support on Inf1 * :ref:`R-CNN <torch-neuron-r-cnn-app-note>` support on Inf1 * Support for new :ref:`API for core placement <torch_neuron_core_placement_api>` * Support for :ref:`improved logging <pytorch-neuron-rn>` * Improved :func:`torch_neuron.trace` performance when using large graphs * Reduced host memory usage of loaded models in ``libtorchneuron.so`` * :ref:`Additional operators <neuron-cc-ops-pytorch>` support * - TensorFlow Neuron ``(tensorflow-neuron)`` - * ``tf-neuron-auto-multicore`` tool to enable automatic data parallel on multiple NeuronCores. * Experimental support for tracing models larger than 2GB using ``extract-weights`` flag (TF2.x only), see :ref:`tensorflow-ref-neuron-tracing-api` * ``tfn.auto_multicore`` Python API to enable automatic data parallel (TF2.x only) This Neuron release is the last release that will include ``torch-neuron`` :ref:`versions 1.7 and 1.8 <announce-eol-pt-before-1-8>`, and that will include ``tensorflow-neuron`` :ref:`versions 2.5 and 2.6 <announce-eol-tf-before-2-5>`. In addition, this release introduces changes to the Neuron packaging and installation instructions for Inf1 customers, see :ref:`neuron250-packages-changes` for more information. For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. .. 
_neuron-2.4.0-whatsnew:

Neuron 2.4.0 (10/27/2022)
-------------------------

This release introduces new features and resolves issues that improve stability. The release introduces the "memory utilization breakdown" feature in both the :ref:`Neuron Monitor <neuron-monitor-ug>` and :ref:`Neuron Top <neuron-top-ug>` system tools, adds "NeuronCore Based Scheduling" capability to the Neuron Kubernetes Scheduler, and introduces new operator support in the :ref:`Neuron Compiler <neuronx-cc>` and :ref:`PyTorch Neuron <torch-neuronx-rn>`.

This release also adds eight (8) new samples of model fine-tuning using PyTorch Neuron. The new samples can be found in the `AWS Neuron Samples GitHub <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx>`_ repository.

.. _neuron-2.3.0-whatsnew:

Neuron 2.3.0 (10/10/2022)
-------------------------

.. contents:: Table of contents
   :local:
   :depth: 3

Overview
~~~~~~~~

This Neuron 2.3.0 release extends Neuron 1.x and adds support for the new AWS Trainium powered Amazon EC2 Trn1 instances. With this release, you can now run deep learning training workloads on Trn1 instances to save training costs by up to 50% over equivalent GPU-based EC2 instances, while getting the highest training performance in the AWS cloud for popular NLP models.

.. list-table::
   :widths: auto
   :align: left
   :class: table-smaller-font-size

   * - What's New
     - * :ref:`rn2.3.0_new`
       * :ref:`neuron-packages-changes`
       * :ref:`announce-aws-neuron-github-org`
       * :ref:`announce-neuron-rtd-eol`
   * - Tested workloads and known issues
     - * :ref:`rn2.3.0_tested`
       * :ref:`rn2.3.0-known-issues`

.. _rn2.3.0_new:

New features and capabilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: /release-notes/templates/n2.x-trn1ga-whats-new.txt

.. _rn2.3.0_tested:

Tested Workloads
~~~~~~~~~~~~~~~~

The following workloads were tested in this release:

* Distributed data-parallel pre-training of the Hugging Face BERT model on a single Trn1.32xl instance (32 NeuronCores).
* Distributed data-parallel pre-training of the Hugging Face BERT model on multiple Trn1.32xl instances.
* Hugging Face BERT MRPC task fine-tuning on a single NeuronCore or multiple NeuronCores (data-parallel).
* Megatron-LM GPT3 (6.7B parameters) pre-training on a single Trn1.32xl instance.
* Megatron-LM GPT3 (6.7B parameters) pre-training on multiple Trn1.32xl instances.
* Multi-Layer Perceptron (MLP) model training on a single NeuronCore or multiple NeuronCores (data-parallel).

.. _rn2.3.0-known-issues:

Known Issues
~~~~~~~~~~~~

* For maximum training performance, set the environment variable ``XLA_USE_BF16=1`` to enable full BF16 and Stochastic Rounding.
This release also adds support for model inference serving on Triton Inference Server for Inf2 &amp; Trn1 instances, ``lazy_load`` API and ``async_load`` API for model loading in ``torch-neuronx``, as well as other new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - ZeRO-1 optimizer for model training in ``torch-neuronx`` - * Support of ZeRO-Stage-1 optimizer ( ZeroRedundancyOptimizer() API) for training models using ``torch-neuronx`` * See tutorial at :ref:`zero1-gpt2-pretraining-tutorial` - Inf2, Trn1/Trn1n * - Support for new models and Enhancements in ``transformers-neuronx`` - * [Experimental] Support for inference of ``GPT-NeoX``, ``BLOOM`` and ``Llama`` models. * [Experimental] Support for ``Llama 2`` coming soon. Please monitor the `transformers-neuronx repository &lt;https://github.com/aws-neuron/transformers-neuronx/tree/main/src/transformers_neuronx&gt;`_ for updates. * Removed constraints on ``tp_degree`` in tensor-parallel configurations for ``GPT2``, ``OPT``, and ``BLOOM`` . See more at :ref:`transformers-neuronx-rn` * Added multi-query / multi-group attention support for ``GPT2``. * See more at :ref:`transformers-neuronx-rn` - Inf2, Trn1/Trn1n * - Support for Inf2 and Trn1 instances on Triton Inference Server - * Support for Model Inference serving on Triton for Inf2 and Trn1 instances. 
See more at `Triton Server Python Backend &lt;https://github.com/triton-inference-server/python_backend/tree/main/inferentia#using-triton-with-inferentia-2-or-trn1&gt;`_ * See tutorial at `Triton on SageMaker - Deploying on Inf2 &lt;https://github.com/aws/amazon-sagemaker-examples/tree/main/sagemaker-triton/inferentia2&gt;`_ - Inf2, Trn1 * - Support for new computer vision models - * Performance optimizations in Stable Diffusion 2.1 model script and added [experimental] support for Stable Diffusion 1.5 models. * [Experimental] Script for training CLIP model for Image Classification. * [Experimental] Script for inference of Multimodal perceiver model * Please check `aws-neuron-samples repository &lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx&gt;`_ - Inf2, Trn1/Trn1n * - New Features in ``neuronx-distributed`` for training - * Added parallel cross entropy loss function. * See more at :ref:`tp_api_guide` - Trn1/Trn1n * - ``lazy_load`` and ``async_load`` API for model loading in inference and performance enhancements in ``torch-neuronx`` - * Added ``lazy_load`` and ``async_load`` API to accelerate model loading for Inference. See more at :ref:`torch_neuronx_lazy_async_load_api` * Optimize DataParallel API to load onto multiple cores simultaneously when device IDs specified are consecutive. * See more at :ref:`torch-neuronx-rn` - Inf2, Trn1/Trn1n * - [Experimental]Asynchronous Execution support and Enhancements in Neuron Runtime - * Added experimental asynchronous execution feature which can reduce latency by roughly 12% for training workloads. 
See more at :ref:`nrt-configuration` * AllReduce with All-to-all communication pattern enabled for 16 ranks on TRN1/TRN1N within the instance (intranode) * See more at :ref:`neuron-runtime-rn` - Inf1, Inf2, Trn1/Trn1n * - Support for ``distribution_strategy`` compiler option in ``neuronx-cc`` - * Support for optional ``--distribution_strategy`` compiler option to enable compiler specific optimizations based on distribution strategy used. * See more at :ref:`neuron-compiler-cli-reference-guide` - Inf2, Trn1/Trn1n * - New Micro Benchmarking Performance User Guide and Documentation Updates - * Added best practices user guide for benchmarking performance of Neuron devices. See more at `Benchmarking Guide and Helper scripts &lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/microbenchmark&gt;`_ * Announcing end of support for Ubuntu 18. See more at :ref:`announce-eol-ubuntu18` * Removed support for Distributed Data Parallel(DDP) Tutorial. * Improved sidebar navigation in Documentation. * See more at :ref:`neuron-documentation-rn` - Inf1, Inf2, Trn1/Trn1n * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1/Trn1n , Inf2, Inf1 * - Known Issues and Limitations - * See :ref:`neuron-2.12.0-known-issues` - Trn1/Trn1n , Inf2, Inf1 * - Release Artifacts - * see :ref:`latest-neuron-release-artifacts` - Trn1/Trn1n , Inf2, Inf1 For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`. .. _neuron-2.12.0-known-issues: 2.12.0 Known Issues and Limitations ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Known Issues in Ubuntu 22 Support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * Several Vision and NLP models on Ubuntu 22 are not supported due to Compilation issues. Issues will be addressed in upcoming releases. * CustomOp feature failing with seg fault on Ubuntu 22. 
Issue will be addressed in upcoming releases. Known issues in certain resnet models on Ubuntu 20 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * Known issue with support for resnet-18, resnet-34, resnet-50, resnet-101 and resnet-152 models on Ubuntu 20. Issues will be addressed in upcoming releases. .. _neuron-2.11.0-whatsnew: Neuron 2.11.0 (06/14/2023) ------------------------- .. contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release introduces Neuron Distributed, a new python library to simplify training and inference of large models, improving usability with features like S3 model caching, standalone profiler tool, support for Ubuntu22, as well as other new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - New Features and Performance Enhancements in ``transformers-neuronx`` - * Support for ``int8`` inference. See example at :ref:`int8_weight_storage_support` * Improved prompt context encoding performance. See more at :ref:`transformers_neuronx_developer_guide` * Improved collective communications performance for Tensor Parallel inference on Inf2 and Trn1. * See more at :ref:`transformers-neuronx-rn` - Inf2, Trn1/Trn1n * - Neuron Profiler Tool - * Profiling and visualization of model execution on Trainium and Inferentia devices now supported as a stand-alone tool. 
* See more at :ref:`neuron-profile-ug` - Inf1, Inf2, Trn1/Trn1n * - Neuron Compilation Cache through S3 - * Support for sharing compiled models across Inf2 and Trn1 nodes through S3 * See more at :ref:`pytorch-neuronx-parallel-compile-cli` - Inf2, Trn1/Trn1n * - New script to scan a model for supported/unsupported operators - * Script to scan a model for supported/unsupported operators before training, scan output includes supported and unsupported operators at both XLA operators and PyTorch operators level. * See a sample tutorial at :ref:`torch-analyze-for-training-tutorial` - Inf2, Trn1/Trn1n * - Neuron Distributed Library [Experimental] - * New Python Library based on PyTorch enabling distributed training and inference of large models. * Initial support for tensor-parallelism. * See more at :ref:`neuronx-distributed-index` - Inf2, Trn1/Trn1n * - Neuron Calculator and Documentation Updates - * New :ref:`neuron_calculator` Documentation section to help determine number of Neuron Cores needed for LLM Inference. * Added App Note :ref:`neuron_llm_inference` * See more at :ref:`neuron-documentation-rn` - Inf1, Inf2, Trn1/Trn1n * - Enhancements to Neuron SysFS - * Support for detailed breakdown of memory usage across the NeuronCores * See more at :ref:`neuron-sysfs-ug` - Inf1, Inf2, Trn1/Trn1n * - Support for Ubuntu 22 - * See more at :ref:`setup-guide-index` for setup instructions on Ubuntu22 - Inf1, Inf2, Trn1/Trn1n * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1/Trn1n , Inf2, Inf1 * - Release Artifacts - * see :ref:`latest-neuron-release-artifacts` - Trn1/Trn1n , Inf2, Inf1 For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`. .. _neuron-2.10.0-whatsnew: Neuron 2.10.0 (05/01/2023) ------------------------- .. 
contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release introduces new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - Initial support for computer vision models inference - * Added Stable Diffusion 2.1 model script for Text to Image Generation * Added VGG model script for Image Classification Task * Added UNet model script for Image Segmentation Task * Please check `aws-neuron-samples repository &lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx&gt;`_ - Inf2, Trn1/Trn1n * - Profiling support in PyTorch Neuron(``torch-neuronx``) for Inference with TensorBoard - * See more at :ref:`torch-neuronx-profiling-with-tb` - Inf2, Trn1/Trn1n * - New Features and Performance Enhancements in transformers-neuronx - * Support for the HuggingFace generate function. * Model Serialization support for GPT2 models. (including model saving, loading, and weight swapping) * Improved prompt context encoding performance. * See :ref:`transformers_neuronx_readme` for examples and usage * See more at :ref:`transformers-neuronx-rn` - Inf2, Trn1/Trn1n * - Support models larger than 2GB in TensorFlow 2.x Neuron (``tensorflow-neuronx``) - * See :ref:`tensorflow-neuronx-special-flags` for details. (``tensorflow-neuronx``) - Trn1/Trn1n, Inf2 * - Support models larger than 2GB in TensorFlow 2.x Neuron (``tensorflow-neuron``) - * See :ref:`Special Flags &lt;tensorflow-ref-neuron-tracing-api&gt;` for details. 
(``tensorflow-neuron``) - Inf1 * - Performance Enhancements in PyTorch C++ Custom Operators (Experimental) - * Support for using multiple GPSIMD Cores in Custom C++ Operators * See :ref:`custom-ops-api-ref-guide` - Trn1/Trn1n * - Weight Deduplication Feature (Inf1) - * Support for Sharing weights when loading multiple instance versions of the same model on different NeuronCores. * See more at :ref:`nrt-configuration` - Inf1 * - ``nccom-test`` - Collective Communication Benchmarking Tool - * Supports enabling benchmarking sweeps on various Neuron Collective Communication operations. See :ref:`nccom-test` for more details. - Trn1/Trn1n , Inf2 * - Announcing end of support for tensorflow-neuron 2.7 &amp; mxnet-neuron 1.5 versions - * See :ref:`announce-eol-tf-before-2-7` * See :ref:`announce-eol-mxnet-before-1-5` - Inf1 * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1/Trn1n , Inf2, Inf1 * - Release Artifacts - * see :ref:`latest-neuron-release-artifacts` - Trn1/Trn1n , Inf2, Inf1 For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`. .. _neuron-2.9.0-whatsnew: Neuron 2.9.1 (04/19/2023) ------------------------- Minor patch release to add support for deserialized torchscript model compilation and support for multi-node training in EKS. Fixes included in this release are critical to enable training and deploying models with Amazon Sagemaker or Amazon EKS. Neuron 2.9.0 (03/28/2023) ------------------------- .. contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release adds support for EC2 Trn1n instances, introduces new features, performance optimizations, minor enhancements and bug fixes. This release introduces the following: .. 
list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details - Instances * - Support for EC2 Trn1n instances - * Updated Neuron Runtime for Trn1n instances * Overall documentation update to include Trn1n instances - Trn1n * - New Analyze API in PyTorch Neuron (``torch-neuronx``) - * A new API that return list of supported and unsupported PyTorch operators for a model. See :ref:`torch_neuronx_analyze_api` - Trn1, Inf2 * - Support models that are larger than 2GB in PyTorch Neuron (``torch-neuron``) on Inf1 - * See ``separate_weights`` flag to :func:`torch_neuron.trace` to support models that are larger than 2GB - Inf1 * - Performance Improvements - * Up to 10% higher throughput when training GPT3 6.7B model on multi-node - Trn1 * - Dynamic Batching support in TensorFlow 2.x Neuron (``tensorflow-neuronx``) - * See :ref:`tensorflow-neuronx-special-flags` for details. - Trn1, Inf2 * - NeuronPerf support for Trn1/Inf2 instances - * Added Trn1/Inf2 support for PyTorch Neuron (``torch-neuronx``) and TensorFlow 2.x Neuron (``tensorflow-neuronx``) - Trn1, Inf2 * - Hierarchical All-Reduce and Reduce-Scatter collective communication - * Added support for hierarchical All-Reduce and Reduce-Scatter in Neuron Runtime to enable better scalability of distributed workloads . - Trn1, Inf2 * - New Tutorials added - * :ref:`Added tutorial to fine-tune T5 model &lt;torch-hf-t5-finetune&gt;` * Added tutorial to demonstrate use of Libtorch with PyTorch Neuron (``torch-neuronx``) for inference :ref:`[html] &lt;pytorch-tutorials-libtorch&gt;` - Trn1, Inf2 * - Minor enhancements and bug fixes. - * See :ref:`components-rn` - Trn1, Inf2, Inf1 * - Release included packages - * see :ref:`neuron-release-content` - Trn1, Inf2, Inf1 For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. 
To learn about the model architectures currently supported on Inf1, Inf2, Trn1 and Trn1n instances, please see :ref:`model_architecture_fit`. .. _neuron-2.8.0-whatsnew: Neuron 2.8.0 (02/24/2023) ------------------------- .. contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release adds support for `EC2 Inf2 &lt;https://aws.amazon.com/ec2/instance-types/inf2/&gt;`_ instances, introduces initial inference support with TensorFlow 2.x Neuron (``tensorflow-neuronx``) on Trn1 and Inf2, and introduces minor enhancements and bug fixes. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details * - Support for `EC2 Inf2 &lt;https://aws.amazon.com/ec2/instance-types/inf2/&gt;`_ instances - * Inference support for Inf2 instances in PyTorch Neuron (``torch-neuronx``) * Inference support for Inf2 instances in TensorFlow 2.x Neuron (``tensorflow-neuronx``) * Overall documentation update to include Inf2 instances * - TensorFlow 2.x Neuron (``tensorflow-neuronx``) support - * This releases introduces initial inference support with TensorFlow 2.x Neuron (``tensorflow-neuronx``) on Trn1 and Inf2 * - New Neuron GitHub samples - * New sample scripts for deploying LLM models with ``transformer-neuronx`` under `aws-neuron-samples &lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx/inference&gt;`_ GitHub repository. * New sample scripts for deploying models with ``torch-neuronx`` under `aws-neuron-samples repository &lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx&gt;`_ GitHub repository. * - Minor enhancements and bug fixes. - * See :ref:`components-rn` * - Release included packages - * see :ref:`neuron-release-content` For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. .. 
_neuron-2.7.0-whatsnew: Neuron 2.7.0 (02/08/2023) ------------------------- .. contents:: Table of contents :local: :depth: 3 What's New ^^^^^^^^^^ This release introduces new capabilities and libraries, as well as features and tools that improves usability. This release introduces the following: .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - What's New - Details * - PyTorch 1.13 - Support of PyTorch 1.13 version for PyTorch Neuron (``torch-neuronx``). For resources see :ref:`pytorch-neuronx-main` * - PyTorch DistributedDataParallel (DDP) API - Support of PyTorch DistributedDataParallel (DDP) API in PyTorch Neuron (``torch-neuronx``). For resources how to use PyTorch DDP API with Neuron, please check :ref:`neuronx-ddp-tutorial`. * - Inference support in ``torch-neuronx`` - For more details please visit :ref:`pytorch-neuronx-main`` page. You can also try Neuron Inference samples `&lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx&gt;`_ in the ``aws-neuron-samples`` GitHub repo. * - Neuron Custom C++ Operators[Experimental] - Initial support for Neuron Custom C++ Operators [Experimental] , with Neuron Custom C++ Operators (“CustomOps”) you can now write CustomOps that run on NeuronCore-v2 chips. For more resources please check :ref:`neuron_c++customops` section. * - ``transformers-neuronx`` [Experimental] - ``transformers-neuronx`` is a new library enabling LLM model inference. It contains models that are checkpoint-compatible with HuggingFace Transformers, and currently supports Transformer Decoder models like GPT2, GPT-J and OPT. Please check `aws-neuron-samples repository &lt;https://github.com/aws-neuron/transformers-neuronx&gt;`_ * - Neuron sysfs filesystem - Neuron sysfs filesystem exposes Neuron Devices under ``/sys/devices/virtual/neuron_device`` providing visibility to Neuron Driver and Runtime at the system level. 
By performing several simple CLIs such as reading or writing to a sysfs file, you can get information such as Neuron Runtime status, memory usage, Driver info etc. For resources about Neuron sysfs filesystem visit :ref:`neuron-sysfs-ug`. * - TFLOPS support in Neuron System Tools - Neuron System Tools now also report model actual TFLOPs rate in both ``neuron-monitor`` and ``neuron-top``. More details can be found in the :ref:`Neuron Tools documentation &lt;neuron-tools&gt;`. * - New sample scripts for training - This release adds multiple new sample scripts for training models with ``torch-neuronx``, Please check `aws-neuron-samples repository &lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx&gt;`_ * - New sample scripts for inference - This release adds multiple new sample scripts for deploying models with ``torch-neuronx``, Please check `aws-neuron-samples repository &lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx&gt;`_ * - Neuron GitHub samples repository for Amazon EKS - A new AWS Neuron GitHub samples repository for Amazon EKS, Please check `aws-neuron-samples repository &lt;https://github.com/aws-neuron/aws-neuron-eks-samples&gt;`_ For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. .. _neuron-2.6.0-whatsnew: Neuron 2.6.0 (12/12/2022) ------------------------- This release introduces the support of PyTorch 1.12 version, and introduces PyTorch Neuron (``torch-neuronx``) profiling through Neuron Plugin for TensorBoard. Pytorch Neuron (``torch-neuronx``) users can now profile their models through the following TensorBoard views: * Operator Framework View * Operator HLO View * Operator Trace View This release introduces the support of LAMB optimizer for FP32 mode, and adds support for :ref:`capturing snapshots &lt;torch-neuronx-snapshotting&gt;` of inputs, outputs and graph HLO for debugging. 
In addition, this release introduces the support of new operators and resolves issues that improve stability for Trn1 customers. For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. .. _neuron-2.5.0-whatsnew: Neuron 2.5.0 (11/23/2022) ------------------------- Neuron 2.5.0 is a major release which introduces new features and resolves issues that improve stability for Inf1 customers. .. list-table:: :widths: auto :header-rows: 1 :align: left :class: table-smaller-font-size * - Component - New in this release * - PyTorch Neuron ``(torch-neuron)`` - * PyTorch 1.12 support * Python 3.8 support * :ref:`LSTM &lt;torch_neuron_lstm_support&gt;` support on Inf1 * :ref:`R-CNN &lt;torch-neuron-r-cnn-app-note&gt;` support on Inf1 * Support for new :ref:`API for core placement &lt;torch_neuron_core_placement_api&gt;` * Support for :ref:`improved logging &lt;pytorch-neuron-rn&gt;` * Improved :func:`torch_neuron.trace` performance when using large graphs * Reduced host memory usage of loaded models in ``libtorchneuron.so`` * :ref:`Additional operators &lt;neuron-cc-ops-pytorch&gt;` support * - TensorFlow Neuron ``(tensorflow-neuron)`` - * ``tf-neuron-auto-multicore`` tool to enable automatic data parallel on multiple NeuronCores. * Experimental support for tracing models larger than 2GB using ``extract-weights`` flag (TF2.x only), see :ref:`tensorflow-ref-neuron-tracing-api` * ``tfn.auto_multicore`` Python API to enable automatic data parallel (TF2.x only) This Neuron release is the last release that will include ``torch-neuron`` :ref:`versions 1.7 and 1.8 &lt;announce-eol-pt-before-1-8&gt;`, and that will include ``tensorflow-neuron`` :ref:`versions 2.5 and 2.6 &lt;announce-eol-tf-before-2-5&gt;`. In addition, this release introduces changes to the Neuron packaging and installation instructions for Inf1 customers, see :ref:`neuron250-packages-changes` for more information. 
For more detailed release notes of the new features and resolved issues, see :ref:`components-rn`. .. _neuron-2.4.0-whatsnew: Neuron 2.4.0 (10/27/2022) ------------------------- This release introduces new features and resolves issues that improve stability. The release introduces "memory utilization breakdown" feature in both :ref:`Neuron Monitor &lt;neuron-monitor-ug&gt;` and :ref:`Neuron Top &lt;neuron-top-ug&gt;` system tools. The release introduces support for "NeuronCore Based Sheduling" capability to the Neuron Kubernetes Scheduler and introduces new operators support in :ref:`Neuron Compiler &lt;neuronx-cc&gt;` and :ref:`PyTorch Neuron &lt;torch-neuronx-rn&gt;`. This release introduces also additional eight (8) samples of models' fine tuning using PyTorch Neuron. The new samples can be found in the `AWS Neuron Samples GitHub &lt;https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx&gt;`_ repository. .. _neuron-2.3.0-whatsnew: Neuron 2.3.0 (10/10/2022) ------------------------- .. contents:: Table of contents :local: :depth: 3 Overview ~~~~~~~~ This Neuron 2.3.0 release extends Neuron 1.x and adds support for the new AWS Trainium powered Amazon EC2 Trn1 instances. With this release, you can now run deep learning training workloads on Trn1 instances to save training costs by up to 50% over equivalent GPU-based EC2 instances, while getting the highest training performance in AWS cloud for popular NLP models. .. list-table:: :widths: auto :align: left :class: table-smaller-font-size * - What's New - * :ref:`rn2.3.0_new` * :ref:`neuron-packages-changes` * :ref:`announce-aws-neuron-github-org` * :ref:`announce-neuron-rtd-eol` * - Tested workloads and known issues - * :ref:`rn2.3.0_tested` * :ref:`rn2.3.0-known-issues` .. _rn2.3.0_new: New features and capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. include:: /release-notes/templates/n2.x-trn1ga-whats-new.txt .. 
_rn2.3.0_tested: Tested Workloads ~~~~~~~~~~~~~~~~ The following workloads were tested in this release: * Distributed data-parallel pre-training of Hugging Face BERT model on single Trn1.32xl instance (32 NeuronCores). * Distributed data-parallel pre-training of Hugging Face BERT model on multiple Trn1.32xl instances. * HuggingFace BERT MRPC task finetuning on single NeuronCore or multiple NeuronCores (data-parallel). * Megatron-LM GPT3 (6.7B parameters) pre-training on single Trn1.32xl instance. * Megatron-LM GPT3 (6.7B parameters) pre-training on multi Trn1.32xl instances. * Multi-Layer Perceptron (ML) model training on single NeuronCore or multiple NeuronCores (data-parallel). .. _rn2.3.0-known-issues: Known Issues ~~~~~~~~~~~~ * For maximum training performance, please set environment variables ``XLA_USE_BF16=1`` to enable full BF16 and Stochastic Rounding. </pre></body></html>
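The BF16 downcast pitfall called out in the 2.13.0 known issues above can be reproduced without any Neuron hardware. The sketch below is a plain-Python emulation (an assumption, not the Neuron runtime's actual conversion path) of an fp32-to-bf16 downcast with round-to-nearest-even: it shows why ``float32.min`` rounds to ``-inf`` under ``XLA_USE_BF16``/``XLA_DOWNCAST_BF16``, while a large-but-finite mask constant such as ``-1e4`` stays finite.

```python
import math
import struct

def to_bf16(x: float) -> float:
    """Emulate an fp32 -> bf16 downcast (round-to-nearest-even, keep top 16 bits)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # Round-to-nearest-even: add half of the dropped precision, plus the tie bit.
    rounded = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", rounded & 0xFFFF0000))[0]

fp32_min = -3.4028234663852886e38   # torch.finfo(torch.float32).min

# float32.min overflows bf16's shorter mantissa and rounds to -inf ...
print(to_bf16(fp32_min))            # -inf
# ... and arithmetic on that infinity then surfaces as NaN (e.g. inf - inf):
print(to_bf16(fp32_min) - to_bf16(fp32_min))  # nan
# A mask constant like -1e4 stays finite after the downcast (rounds to -9984.0):
print(to_bf16(-1e4))
```

This is why the recommended workaround replaces ``float32.min``/``float32.max`` mask values with -/+1e4: the substitute loses a little precision under bf16 rounding but never becomes infinite, so downstream softmax/masking arithmetic cannot produce NaN from it.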
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/announce-eol-pt-before-1-8.rst.txt
```
.. post:: Nov 22, 2022
   :language: en
   :tags: announce-eol torch-neuron

.. _announce-eol-pt-before-1-8:

Announcing end of support for ``torch-neuron`` versions 1.7 and 1.8
-------------------------------------------------------------------

:ref:`Neuron release 2.5 <neuron-2.5.0-whatsnew>` will be the last release to include ``torch-neuron`` versions 1.7 and 1.8; future Neuron releases will not include them. Current users of those versions are advised to migrate to the latest ``torch-neuron`` version.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/gpg-expiration.rst.txt
```
.. post:: Nov 10, 2022 00:01
   :language: en
   :tags: dlami, pytorch

.. _announce-dlami-neuron-pytorch:

Neuron GPG key for Ubuntu installation has expired
--------------------------------------------------

GPG (GNU Privacy Guard) is a public-key cryptography implementation. It allows the secure transmission of information between parties and can be used to verify that the origin of a message is genuine. The GPG key for the Neuron repository (https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB) installed on the Ubuntu (Canonical) server was originally uploaded with an expiry date of three (3) years, which expired on 11/10/22. Please see :ref:`gpg_key_update` for instructions on how to update the Neuron repository GPG keys.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/eol-pt-15.rst.txt
```
.. post:: Apr 29, 2022
   :language: en
   :tags: eol

.. _eol-pt-15:

End of support for torch-neuron version 1.5
-------------------------------------------

Starting with the *Neuron 1.19.0* release, *torch-neuron 1.5* will no longer be supported, and no further releases of *torch-neuron version 1.5* will be issued. Current users of torch-neuron version 1.5 are advised to migrate to the latest *torch-neuron* version.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/dlami-neuron-2.10.rst.txt
```
.. post:: May 02, 2023 11:00
   :language: en
   :tags: dlami, pytorch, trn1, inf2, inf1

.. _announce-dlc-sm-neuron-2.9.1:

AWS Deep Learning AMIs now available with Neuron 2.10 version
-------------------------------------------------------------

We are happy to announce that the following Deep Learning AMIs are now available with the latest Neuron version, 2.10. These DLAMIs now support all Neuron EC2 instances, including Inf1, Inf2, and Trn1/Trn1n.

You can access the AMIs at the following URLs:

* `AWS Deep Learning AMI Neuron PyTorch 1.13 (Ubuntu 20.04) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-13-ubuntu-20-04/>`__
* `AWS Deep Learning AMI Neuron PyTorch 1.13 (Amazon Linux 2) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-13-amazon-linux-2/>`__
* `AWS Deep Learning AMI Base Neuron (Ubuntu 20.04) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-ubuntu-20-04/>`__
* `AWS Deep Learning AMI Base Neuron (Amazon Linux 2) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-base-neuron-amazon-linux-2/>`__
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/neuron250-packages-changes.rst.txt
```
.. post:: Nov 22, 2022 03:00
   :language: en
   :tags: neuron2.x

.. _neuron250-packages-changes:

Introducing Neuron packaging and installation changes for Inf1 customers
------------------------------------------------------------------------

Starting with :ref:`Neuron release 2.5 <neuron-2.5.0-whatsnew>`, Neuron introduces changes in Neuron packages and installation instructions for Inf1. The following Neuron packages will change names:

.. list-table:: Neuron packages with changed names for Inf1
   :widths: auto
   :header-rows: 1
   :align: left
   :class: table-smaller-font-size

   * - New name
     - Old name (deprecated package)
     - Package Type
     - Description
     - Supported Instances
   * - ``aws-neuronx-tools``
     - ``aws-neuron-tools``
     - .deb (apt), .rpm (yum)
     - System Tools
     - Trn1, Inf1
   * - ``aws-neuronx-dkms``
     - ``aws-neuron-dkms``
     - .deb (apt), .rpm (yum)
     - Neuron Driver
     - Trn1, Inf1
   * - ``aws-neuronx-k8-plugin``
     - ``aws-neuron-k8-plugin``
     - .deb (apt), .rpm (yum)
     - Neuron Kubernetes plugin
     - Trn1, Inf1
   * - ``aws-neuronx-k8-scheduler``
     - ``aws-neuron-k8-scheduler``
     - .deb (apt), .rpm (yum)
     - Neuron Scheduler plugin
     - Trn1, Inf1
   * - ``tensorflow-model-server-neuronx``
     - ``tensorflow-model-server-neuron``
     - .deb (apt), .rpm (yum)
     - tensorflow-model-server
     - Trn1, Inf1

Please follow the :ref:`Neuron setup guide <setup-guide-index>` to update to the latest Neuron releases.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/sm-training-dlc-2.9.1.rst.txt
```
.. post:: Apr 26, 2023 11:00
   :language: en
   :tags: sagemaker, pytorch, trn1, inf2

.. _announce-dlc-sm-neuron-2.9.1:

PyTorch 1.13 Deep Learning Container for Inf2 & Trn1/Trn1n now available for SageMaker
--------------------------------------------------------------------------------------

We are happy to announce that an updated Deep Learning Container that supports PyTorch 1.13 and Neuron 2.9.1 is now available for SageMaker Training.

For more information, see `Neuron Containers <https://github.com/aws/deep-learning-containers/blob/master/available_images.md#neuron-containers>`_.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/neuron-rtd-eol.rst.txt
```
.. post:: Oct 10, 2022 01:00
   :language: en
   :tags: eol, neuron2.x

.. _announce-neuron-rtd-eol:

Announcing Neuron Runtime 1.x (``neuron-rtd``) end-of-support
-------------------------------------------------------------

Starting with :ref:`Neuron release 2.3 <neuron2x-trn1ga>`, Neuron components such as Neuron System Tools and the Neuron Driver no longer support Neuron Runtime 1.x. In addition, starting with :ref:`Neuron release 2.3 <neuron2x-trn1ga>`, the `AWS Neuron Runtime Proto GitHub <https://github.com/aws-neuron/aws-neuron-runtime-proto>`_ and `AWS Neuron Driver GitHub <https://github.com/aws-neuron/aws-neuron-driver>`_ repositories are no longer supported.

Why are we removing support for Neuron Runtime 1.x?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Neuron Runtime 1.x (``neuron-rtd``) entered :ref:`maintenance mode <maintenance_rtd>` when Neuron 1.16.0 was released. While Neuron components such as the Neuron Driver and Neuron System Tools continued to support Neuron Runtime 1.x in addition to Neuron Runtime 2.x, Neuron-supported frameworks (e.g. PyTorch Neuron, TensorFlow Neuron, and MXNet Neuron) stopped supporting Neuron Runtime 1.x starting with Neuron 1.16.0. For detailed information, see :ref:`introduce-libnrt`.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/announce-eol-pt-1-5.rst.txt
```
.. post:: Mar 25, 2022
   :language: en
   :tags: announce-eol torch-neuron

.. _announce-eol-pt-1-5:

Announcing end of support for torch-neuron version 1.5 starting with Neuron 1.19.0 release
------------------------------------------------------------------------------------------

Starting with the *Neuron 1.19.0* release, *torch-neuron* version 1.5 will no longer be supported. The last release of *torch-neuron* version 1.5 will be issued as part of the *Neuron 1.18.0* release. Current users of this version are advised to migrate to the latest *torch-neuron* version.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/sm-training-trn1-introduce.rst.txt
```
.. post:: Nov 03, 2022 00:01
   :language: en
   :tags: sagemaker, pytorch, trn1

.. _announce-dlami-neuron-pytorch:

Amazon SageMaker now supports Trn1 training jobs
------------------------------------------------

We are happy to announce that Amazon SageMaker now supports running training jobs on ml.trn1 instance types. For more information, see `Distributed Training with PyTorch Neuron on Trn1 instances <https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#distributed-training-with-pytorch-neuron-on-trn1-instances>`_.

The Neuron Developer Flows section will be updated soon.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/dlami-pytorch-introduce.rst.txt
```
.. post:: Nov 02, 2022 00:01
   :language: en
   :tags: dlami, pytorch

.. _announce-dlami-neuron-pytorch:

Introducing AWS Deep Learning AMI Neuron PyTorch
------------------------------------------------

We are happy to announce that a Deep Learning AMI (DLAMI) with pre-installed PyTorch Neuron (``torch-neuronx``) is now available. For more information, see:

* `AWS Deep Learning AMI Neuron PyTorch 1.11 (Amazon Linux 2) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-11-amazon-linux-2/>`_
* `AWS Deep Learning AMI Neuron PyTorch 1.11 (Ubuntu 20.04) <https://aws.amazon.com/releasenotes/aws-deep-learning-ami-neuron-pytorch-1-11-ubuntu-20-04/>`_

The Neuron Setup Guide will be updated soon to include the DLAMI PyTorch Neuron.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron2.x/neuron230-packages-changes.rst.txt
```
.. post:: Oct 10, 2022 03:00
   :language: en
   :tags: neuron2.x

.. _neuron-packages-changes:

Introducing packaging and installation changes
----------------------------------------------

Starting with :ref:`Neuron release 2.3 <neuron2x-trn1ga>`, Neuron introduces changes in Neuron packages and installation instructions.

.. contents:: Table of contents
   :local:
   :depth: 2

.. _neuron-new-packages:

New Neuron packages
^^^^^^^^^^^^^^^^^^^

Starting with :ref:`Neuron release 2.3 <neuron2x-trn1ga>`, Neuron introduces the following new packages:

.. list-table:: New Neuron packages
   :widths: auto
   :header-rows: 1
   :align: left
   :class: table-smaller-font-size

   * - New Package
     - Package Type
     - Description
     - Supported Instances (at the time of :ref:`Neuron release 2.3 <neuron2x-trn1ga>`)
   * - ``torch-neuronx``
     - .whl (pip)
     - PyTorch Neuron package using `PyTorch XLA <https://pytorch.org/xla>`_
     - Trn1
   * - ``neuronx-cc``
     - .whl (pip)
     - Neuron Compiler with XLA front-end
     - Trn1
   * - ``aws-neuronx-runtime-lib``
     - .deb (apt), .rpm (yum)
     - Neuron Runtime library
     - Trn1
   * - ``aws-neuronx-collective``
     - .deb (apt), .rpm (yum)
     - Collective Communication library
     - Trn1
   * - ``aws-neuronx-tools``
     - .deb (apt), .rpm (yum)
     - Neuron System Tools
     - Trn1

.. note::

   In upcoming releases, ``aws-neuronx-tools`` and ``aws-neuronx-runtime-lib`` will add support for Inf1.

Why are we introducing new Neuron packages?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To add Neuron support for training neural networks, Neuron 2.x introduces new capabilities and major architectural updates. For example, Neuron adds support for Collective Communication Operations in :ref:`new packages <neuron-new-packages>` such as ``aws-neuronx-collective``. In addition, some of these updates and new capabilities are not backward compatible; for example, the PyTorch Neuron package that adds support for training neural networks uses `PyTorch XLA <https://pytorch.org/xla>`_ as a backend.

To reduce the possibility of customers using features that are not backward compatible, the new capabilities are introduced in new Neuron packages. For example, PyTorch Neuron and the Neuron Compiler ship different packages for Inf1 and for Trn1: ``torch-neuron`` and ``neuron-cc`` support Inf1 instances, while ``torch-neuronx`` and ``neuronx-cc`` support Trn1 instances.

.. _neuron-packages-renaming:

Renamed Neuron packages
^^^^^^^^^^^^^^^^^^^^^^^

Starting with :ref:`Neuron release 2.3 <neuron2x-trn1ga>`, the following Neuron packages will change names:

.. list-table:: Neuron packages with changed names
   :widths: auto
   :header-rows: 1
   :align: left
   :class: table-smaller-font-size

   * - New name
     - Old name (deprecated package)
     - Package Type
     - Description
     - Supported Instances
   * - ``aws-neuronx-oci-hooks``
     - ``aws-neuron-runtime-base``
     - .deb (apt), .rpm (yum)
     - OCI Hooks support
     - Trn1, Inf1
   * - ``aws-neuronx-dkms``
     - ``aws-neuron-dkms``
     - .deb (apt), .rpm (yum)
     - Neuron Driver
     - Trn1, Inf1
   * - ``aws-neuronx-k8-plugin``
     - ``aws-neuron-k8-plugin``
     - .deb (apt), .rpm (yum)
     - Neuron Kubernetes plugin
     - Trn1, Inf1
   * - ``aws-neuronx-k8-scheduler``
     - ``aws-neuron-k8-scheduler``
     - .deb (apt), .rpm (yum)
     - Neuron Scheduler plugin
     - Trn1, Inf1

Why are we changing package names?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To avoid situations where customers may accidentally install Neuron packages with features that are not backward compatible, we have introduced additional packages with different names for the same Neuron component.

.. _neuron-installation-instruction-change:

Updated installation and update instructions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Starting with :ref:`Neuron release 2.3 <neuron2x-trn1ga>`, Neuron installation and update instructions pin the major version of each Neuron package. For example, to install the latest Neuron tools package, call ``sudo apt-get install aws-neuronx-tools=2.*``, and to install the latest PyTorch Neuron package for Trn1, call ``pip install torch-neuronx==1.11.0.1.*``.

Why are we changing installation and update instructions?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Neuron installation and update instructions now guide customers to pin the major version of the different Neuron packages, as mentioned in :ref:`neuron-installation-instruction-change`. This is done to future-proof the instructions against new, backwards-incompatible major version releases.

.. note::

   The updated installation and update instructions do not include instructions to install or update ``torch-neuron`` and ``neuron-cc``.

What do I need to do?
~~~~~~~~~~~~~~~~~~~~~

Please follow the :ref:`Neuron setup guide <setup-guide-index>` to update to the latest Neuron releases.
```
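To illustrate what the wildcard pins above accomplish, here is a toy sketch of trailing-wildcard matching. The ``matches_pin`` helper is hypothetical, written for illustration only; it is not part of pip, apt, or the Neuron SDK, and real pip pinning follows the full PEP 440 rules.

```python
# Toy illustration of major-version pinning; matches_pin is a hypothetical helper,
# not part of pip, apt, or any Neuron tooling.
def matches_pin(version: str, pin: str) -> bool:
    """Return True if `version` satisfies a trailing-wildcard pin like '1.11.0.1.*'."""
    if pin.endswith(".*"):
        prefix = pin[:-1]              # '1.11.0.1.*' -> '1.11.0.1.'
        return version.startswith(prefix)
    return version == pin

# A patch release within the pinned line is accepted...
print(matches_pin("1.11.0.1.2", "1.11.0.1.*"))   # True
# ...while a release from a different, possibly incompatible line is rejected.
print(matches_pin("1.12.0.2.0", "1.11.0.1.*"))   # False
```

This is why the updated instructions pin ``torch-neuronx==1.11.0.1.*`` and ``aws-neuronx-tools=2.*``: future updates within the pinned line install automatically, while a backwards-incompatible major release requires an explicit opt-in.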
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/eol-tf-21-24.rst.txt
```
.. post:: Mar 25, 2022
   :language: en
   :tags: eol

.. _eol-tf-21-24:

End of support for tensorflow-neuron versions 2.1, 2.2, 2.3 and 2.4
--------------------------------------------------------------------

Starting with the *Neuron 1.18.0* release, *tensorflow-neuron* versions 2.1, 2.2, 2.3 and 2.4 will no longer be supported, and no further releases of these versions will be issued. Current users of those versions are advised to migrate to the latest *tensorflow-neuron* version.
```
Megatron-LM GPT Pretraining Tutorial [End of Support] — AWS Neuron Documentation
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/torch/torch-neuronx/tutorials/training/megatron_lm_gpt.html#megatron-lm-pretraining-tutorial
## Megatron-LM GPT Pretraining Tutorial \[End of Support\][#](#megatron-lm-gpt-pretraining-tutorial-end-of-support "Permalink to this headline")

GPT is a large language model that excels at many natural language processing (NLP) tasks. It is derived from the decoder part of the Transformer. [Neuron Reference For Megatron-LM \[EOS\]](https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm) is a library that enables large-scale distributed training of language models such as GPT, and is adapted from [Megatron-LM](https://github.com/NVIDIA/Megatron-LM). This tutorial explains how to run the Neuron reference for Megatron-LM GPT pretraining on Trainium.

The AWS Neuron SDK provides access to Trainium devices through an extension of PyTorch/XLA - a library that includes the familiar PyTorch interface along with XLA-specific additions. For Trainium customers, this means that existing PyTorch training scripts can be executed on Trn1 instances with minimal code modifications. For additional details relating to PyTorch/XLA, please refer to the [official PyTorch/XLA documentation](https://pytorch.org/xla).

To run on Trainium, the Neuron Reference For Megatron-LM library includes the following changes:

- GPU devices are replaced with PyTorch/XLA devices.
- The PyTorch/XLA distributed backend is used to bridge the PyTorch distributed APIs to XLA communication semantics.
- PyTorch/XLA's MpDeviceLoader is used for the data ingestion pipelines. MpDeviceLoader helps improve performance by overlapping the three execution steps: tracing, compilation, and data batch loading to the device.
- CUDA APIs are mapped to generic PyTorch APIs.
- CUDA fused optimizers are replaced with generic PyTorch alternatives.

The GPT example in this tutorial is an adaptation of the original Megatron-LM GPT example, trained using the Wikipedia dataset.
Table of Contents

- [Install PyTorch Neuron](#install-pytorch-neuron)
- [Download Preprocessed Wikipedia Dataset](#download-preprocessed-wikipedia-dataset)
- [Setting up the training environment on trn1.32xlarge](#setting-up-the-training-environment-on-trn1-32xlarge)
- [GPT Pretraining Python Script](#gpt-pretraining-python-script)
- [GPT Training Shell Script](#gpt-training-shell-script)
- [Initiating a Training Job](#initiating-a-training-job)
- [Monitoring Training Job Progress](#monitoring-training-job-progress)
- [Monitoring Training Job Progress using neuron-top](#monitoring-training-job-progress-using-neuron-top)
- [Monitoring Training Job Progress using TensorBoard](#monitoring-training-job-progress-using-tensorboard)
- [Finishing the tutorial](#finishing-the-tutorial)
- [Running a multi-node GPT](#running-a-multi-node-gpt)
- [Checkpointing GPT Model](#checkpointing-gpt-model)
- [Preparing Wikipedia Dataset from Scratch](#preparing-wikipedia-dataset-from-scratch)
- [Known issues and limitations](#known-issues-and-limitations)
- [No broadcast support](#no-broadcast-support)
- [No pipeline parallel support](#no-pipeline-parallel-support)
- [Dropout is disabled](#dropout-is-disabled)
- [“Failed accept4: Too many open files”](#failed-accept4-too-many-open-files)
- [Error: cannot import name ‘helpers’ from ‘megatron.data’](#error-cannot-import-name-helpers-from-megatron-data)
- [Error: Out of space while checkpointing](#error-out-of-space-while-checkpointing)
- [Troubleshooting](#troubleshooting)

Note

Logs used in tutorials do not present the latest performance numbers. For the latest performance numbers, visit [Neuron Performance](../../../../../general/benchmarks/index.html#benchmark).

## [Install PyTorch Neuron](#id1)[#](#install-pytorch-neuron "Permalink to this headline")

Before running the tutorial, please follow the installation instructions at: [Install PyTorch Neuron on Trn1](../../../../../general/setup/torch-neuronx.html#setup-torch-neuronx)

Please set the storage of the instance to _512GB_ or more if you intend to run multiple experiments and save many checkpoints.

## [Download Preprocessed Wikipedia Dataset](#id2)[#](#download-preprocessed-wikipedia-dataset "Permalink to this headline")

Download the vocabulary file, the merge table file, and the preprocessed Wikipedia dataset using the following commands:

```
export DATA_DIR=~/examples_datasets/gpt2
mkdir -p ${DATA_DIR} && cd ${DATA_DIR}
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/my-gpt2_text_document.bin . --no-sign-request
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/my-gpt2_text_document.idx . --no-sign-request
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/license.txt . --no-sign-request
```

See the section `Preparing Wikipedia dataset from scratch` if you would like to recreate the preprocessed dataset from scratch.

## [Setting up the training environment on trn1.32xlarge](#id3)[#](#setting-up-the-training-environment-on-trn1-32xlarge "Permalink to this headline")

Please follow the instructions to set up a Python virtual environment with Neuron packages.

Install the Python3 development package needed to build the data helpers tools. If you are on Amazon Linux, do:

```
sudo yum install -y python3-devel
```

If you are on Ubuntu, do:

```
sudo apt install -y python3-dev
```

Clone the AWS Neuron Reference for Megatron-LM package, install dependencies, and build the data helpers tool:

```
cd ~/
git clone https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm.git
pip install pybind11 regex
pushd .
cd aws-neuron-reference-for-megatron-lm/megatron/data/ make popd ``` ## [GPT Pretraining Python Script](#id4)[#](#gpt-pretraining-python-script "Permalink to this headline") The GPT pretraining python script is a wrapper that imports the Megatron-LM library modules and sets up the pieces needed by the Megatron-LM trainer: GPT model, loss function, forward pass, data provider. It is adapted from [pretrain\_gpt.py](https://github.com/NVIDIA/Megatron-LM/blob/main/pretrain_gpt.py). The Neuron changes are: - Use XLA device - Not using mpu.broadcast\_data as it is currently unsupported. Instead each worker reads the data in parallel. - Use int instead of long datatype for token data The script is available at `~/aws-neuron-reference-for-megatron-lm/pretrain_gpt.py` ## [GPT Training Shell Script](#id5)[#](#gpt-training-shell-script "Permalink to this headline") The GPT training shell script runs the above python script with following model configurations (for 6.7 billion parameters model): - Number of layers: 32 - Hidden size: 4096 - Number attention heads: 32 - Sequence length: 2048 - Max positional embeddings size: 2048 The following training parameters are used: - The number of gradient accumulation microsteps is 64, with worker batch size of 1. - The tensor parallelism degree is 8. - The data parallelism degree is 4. - The number of workers is 32. Additionally, the script uses: - CPU intitialization - AdamW optimizer (default). - Gradient clipping. - No CUDA fusions (bias-gelu, masked-softmax, bias-dropout) - Disabled contiguous buffer in local DDP - Option `--distributed-backend xla` picks the XLA distributed backend to bridge the Pytorch distributed APIs to XLA communication semantics. See [this link](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/arguments.py) for a full list of options and their descriptions. Note Not all options are supported. Currently only tensor-parallel and data-parallel modes in Neuron Reference For Megatron-LM are supported. 
We support a tensor-parallel degree of 8 and a data-parallel degree of up to 64.

The script for running on a single node is available at `~/aws-neuron-reference-for-megatron-lm/examples/pretrain_gpt3_6.7B_32layers_bf16.sh`

This shell script expects the dataset files to be located in `~/examples_datasets/gpt2/` following the steps above. If you place the dataset files in another location, please update the DATA_PATH variable in the shell script.

## [Initiating a Training Job](#id6)[#](#initiating-a-training-job "Permalink to this headline")

To run the GPT example, first activate the Python virtual environment, change to the Megatron-LM package location, and allow execute permission on the scripts:

```
source ~/aws_neuron_venv_pytorch/bin/activate
cd ~/aws-neuron-reference-for-megatron-lm/
chmod +x *.sh
```

Next, run parallel compilation of the graphs in order to reduce compilation time during the actual run:

```
neuron_parallel_compile ./examples/pretrain_gpt3_6.7B_32layers_bf16.sh
```

This command performs a short trial run of the training script to extract the graphs and then compiles those graphs in parallel, populating the persistent cache with the compiled graphs. This helps reduce the compilation time during the actual run of the training script.

Note

Please ignore the results of the trial run, as they are not the actual execution results.

If some or all of the graphs were already compiled and cached in the persistent cache, then fewer or none of the graphs would need compilation. To force recompilation, you can remove the cache directory at `/var/tmp/neuron-compile-cache/`.

Precompilation is recommended again if there are changes in the script (such as batch size, number of layers, number of workers, etc.). Compilation will only happen if the model graph or its parameters/compilation flags change.
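Before launching the actual run, it can help to sanity-check the parallelism configuration described above. The sketch below (illustrative only, not part of the tutorial scripts) shows how the worker count relates to the tensor-parallel and data-parallel degrees in a Megatron-style two-dimensional mesh:

```python
# Sketch: relationship between worker count and parallelism degrees.
# Values come from the tutorial text; the function name is illustrative.

def data_parallel_degree(num_workers: int, tensor_parallel: int) -> int:
    """Megatron-style 2D mesh: num_workers = tensor_parallel * data_parallel."""
    assert num_workers % tensor_parallel == 0, "workers must divide evenly"
    return num_workers // tensor_parallel

# Single trn1.32xlarge: 32 workers with tensor-parallel degree 8.
print(data_parallel_degree(32, 8))        # -> 4

# Multi-node run below: 16 nodes x 32 workers, same tensor-parallel degree,
# which is where the 8x64 mesh comes from.
print(data_parallel_degree(16 * 32, 8))   # -> 64
```

This is why changing the node count changes only the data-parallel degree: the tensor-parallel degree of 8 is fixed by the script.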
Finally, run the script for the actual run:

```
./examples/pretrain_gpt3_6.7B_32layers_bf16.sh
```

During the run, you will see output like the following, with lines showing throughput and loss statistics at every global step:

```
iteration 4873/ 10000 | consumed samples: 311872 | elapsed time per iteration (ms): 8718.9 | learning rate: 1.500E-04 | global batch size: 64 | lm loss: 3.296875E+00 | grad norm: 0.430 | throughput: 7.340
```

## [Monitoring Training Job Progress](#id7)[#](#monitoring-training-job-progress "Permalink to this headline")

Using a single Trn1 instance with 32 NeuronCores, the current GPT pretraining will run for ~81 hours. During this time, you will see the average loss metric begin at 11 and ultimately converge to ~3.2. Throughput for the training job will be ~7.3 seq/sec.

## [Monitoring Training Job Progress using neuron-top](#id8)[#](#monitoring-training-job-progress-using-neuron-top "Permalink to this headline")

With the training job still running, launch a second SSH connection into the trn1 instance and use the `neuron-top` command to examine the aggregate NeuronCore utilization.

## [Monitoring Training Job Progress using TensorBoard](#id9)[#](#monitoring-training-job-progress-using-tensorboard "Permalink to this headline")

The demo includes TensorBoard-compatible logging, which allows the learning rate and training metrics to be monitored in real time. By default, the training script logs metrics to the following TensorBoard log directory: `~/aws-neuron-reference-for-megatron-lm/tb_*`. In order to view your training metrics in TensorBoard, first run the following commands in your SSH session:

```
source ~/aws_neuron_venv_pytorch/bin/activate
cd ~/aws-neuron-reference-for-megatron-lm/
tensorboard --logdir ./
```

Once running, open a new SSH connection to the instance and port-forward TCP port 6006 (ex: -L 6006:127.0.0.1:6006).
Once the tunnel is established, TensorBoard can then be accessed via web browser at the following URL: [http://localhost:6006](http://localhost:6006/). Please note that you will not be able to access TensorBoard if you disconnect your port-forwarding SSH session to the Trainium instance.

## [Finishing the tutorial](#id10)[#](#finishing-the-tutorial "Permalink to this headline")

Once you are ready, and the training throughput is as expected, there are a couple of options for finishing the GPT pretraining demo:

**Allow the training script to run to completion**. If you would like to observe the training script run to completion, it is recommended to launch the training script from a terminal multiplexer such as `tmux` or `screen`, and then detach the session so that the training script can run in the background. With this approach, you can safely let the training script run unattended, without risk of an SSH disconnection causing the training job to stop running.

**Stop the training job early**. To stop the training job early, press CTRL-C in the terminal window in which you launched the training script. In some cases, if you manually cancel a job using CTRL-C and then later want to run the job again, you might first need to terminate all the Python processes with the command `killall -9 python3`.

## [Running a multi-node GPT](#id11)[#](#running-a-multi-node-gpt "Permalink to this headline")

We use SLURM to launch multi-node GPT training jobs. As with single-node runs, we have a precompilation step followed by the actual run. To precompile:

```
sbatch examples/pretrain_gpt3_6.7B_compile.slurm
```

This will precompile the script `examples/pretrain_gpt3_6.7B_32layers_bf16_bs1024_slurm.sh` on all the nodes and populate the caches.
To run the compiled model:

```
sbatch examples/pretrain_gpt3_6.7B.slurm
```

The number of nodes is currently set to 16, and since the tensor-parallel degree used is 8, the data-parallel degree is automatically computed to be 64, resulting in an 8x64 two-dimensional mesh parallelism. The TensorBoard logs are written by the last rank and will be in the TensorBoard log directory `~/aws-neuron-reference-for-megatron-lm/tb_*`.

Compared to the single-node script, we use an increased batch size of 1024, which gives us a throughput of ~98 seq/sec. The number of iterations is also increased, with changes in the hyperparameters pertaining to learning rate and weight decay.

## [Checkpointing GPT Model](#id12)[#](#checkpointing-gpt-model "Permalink to this headline")

A new mode of checkpointing using serialized tensors and staggered save/load is supported to alleviate memory pressure. To save the model, add the lines:

```
--save-xser $CHECKPOINT_PATH
--save-interval 1500
```

This will save a checkpoint at the path provided every 1500 iterations.

Note

Please note that the model saves all the model weights, optimizer, and RNG states (~76GB for a 32-layer model). If checkpointed frequently, this can quickly lead to low disk storage. Make sure there is enough disk space.

To load the checkpoint, we first need to remove `--use-cpu-initialization` from the script and then add:

```
--load-xser $CHECKPOINT_PATH
```

Note

Please note that not removing the `--use-cpu-initialization` flag may lead to out-of-memory execution and result in unstable resumption of training.

## [Preparing Wikipedia Dataset from Scratch](#id13)[#](#preparing-wikipedia-dataset-from-scratch "Permalink to this headline")

The process of preparing the Wikipedia dataset follows the original [Megatron-LM documentation](https://github.com/NVIDIA/Megatron-LM#user-content-datasets). You will need a large c5 machine such as c5n.18xlarge running the latest Deep Learning AMI. First, download the Wikipedia dataset.
Depending on the network bandwidth, this is expected to take about 65 minutes:

```
export WIKI_DIR=~/examples_datasets/wiki
mkdir -p $WIKI_DIR && cd $WIKI_DIR
wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
```

Download the vocabulary and merge table files for the desired model. This example uses the GPT-2 model:

```
export DATA_DIR=~/examples_datasets/gpt2
export GPT2_DATA=${DATA_DIR}/gpt2
mkdir -p ${GPT2_DATA} && cd ${GPT2_DATA}
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
mkdir -p ${GPT2_DATA}/checkpoint
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O ${GPT2_DATA}/checkpoint/megatron_lm_345m_v0.0.zip
```

Extract the downloaded data using WikiExtractor (this step takes about 2 hours):

```
git clone https://github.com/attardi/wikiextractor.git /tmp/wikiextractor
cd /tmp/wikiextractor
python -m wikiextractor.WikiExtractor --json ~/examples_datasets/wiki/enwiki-latest-pages-articles.xml.bz2 --output ~/examples_datasets/wiki/text/ -q --processes 70 2>&1 | tee wikiextract.out &
```

WikiExtractor first preprocesses the template of all pages sequentially, followed by a Map/Reduce process for extracting the pages and converting them to the loose JSON format required by Megatron-LM.

Once the extraction completes, merge the text files (~2 minutes). Note that the `wiki*` pattern is quoted so that it is expanded by `find` rather than by the shell:

```
conda activate pytorch_latest_p37
cd ~/examples_datasets/wiki
find ~/examples_datasets/wiki/text/ -name 'wiki*' | parallel -m -j 70 "cat {} >> mergedfile.json"
```

The `mergedfile.json` size on disk is 16GB. With it, create the binary data format for Megatron GPT2.

NOTE: Refer to [this solution](https://github.com/NVIDIA/Megatron-LM/issues/62) if an `IndexError: list index out of range` occurs.
To create the binary data, type the following command:

```
python ~/aws-neuron-reference-for-megatron-lm/tools/preprocess_data.py \
    --input ~/examples_datasets/wiki/mergedfile.json \
    --output-prefix my-gpt2 \
    --vocab ~/examples_datasets/gpt2/gpt2-vocab.json \
    --dataset-impl mmap \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file ~/examples_datasets/gpt2/gpt2-merges.txt \
    --append-eod \
    --workers 70
```

Files `my-gpt2_text_document.*` are generated after about 12 minutes.

## [Known issues and limitations](#id14)[#](#known-issues-and-limitations "Permalink to this headline")

### [No broadcast support](#id15)[#](#no-broadcast-support "Permalink to this headline")

Currently, `mpu.broadcast_data` is unsupported on Trainium.

### [No pipeline parallel support](#id16)[#](#no-pipeline-parallel-support "Permalink to this headline")

Currently, only tensor parallel and data parallel modes are supported; there is no pipeline parallel support in Neuron Reference For Megatron-LM.

### [Dropout is disabled](#id17)[#](#dropout-is-disabled "Permalink to this headline")

Currently, dropout is disabled in the example.

### [“Failed accept4: Too many open files”](#id18)[#](#failed-accept4-too-many-open-files "Permalink to this headline")

When running the Megatron-LM GPT3 6.7B example above on Ubuntu Server 20.04 LTS (HVM) and Ubuntu Server 22.04 LTS (HVM) AMIs, you may encounter the following “Failed accept4: Too many open files” error:

```
E0301 08:06:14.272283286   72588 tcp_server_posix.cc:214] Failed accept4: Too many open files
2023-03-01 08:06:15.515834: F tensorflow/libtpu/neuron/neuron_compiler.cc:200] Check failed: fd != -1 Opening lock file failed with errno 24
```

The reason is that on these AMIs, “ulimit -n” is set to 1024, which is too low compared to, for example, the Amazon Linux 2 AMI (HVM) - Kernel 5.10, where it is set to 65535 by default.
To work around this issue, please increase “ulimit -n” to a higher value, such as 65535, which matches the Amazon Linux 2 AMI (HVM) - Kernel 5.10 and is sufficient for the Megatron-LM GPT3 6.7B example. Additionally, this can be set within the shell script (which is run using the SLURM srun command) so that it is set for each worker process.

### [Error: cannot import name ‘helpers’ from ‘megatron.data’](#id19)[#](#error-cannot-import-name-helpers-from-megatron-data "Permalink to this headline")

You may encounter the error “cannot import name ‘helpers’ from ‘megatron.data’” as shown below:

```
Exception in device=NEURONT:0: cannot import name 'helpers' from 'megatron.data' (/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/__init__.py)
Traceback (most recent call last):
  File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 373, in _mp_start_fn
    _start_fn(index, pf_cfg, fn, args)
  File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 367, in _start_fn
    fn(gindex, *args)
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/pretrain_gpt_mp.py", line 138, in pretrain_mp
    forward_step, args_defaults={'tokenizer_type': 'GPT2BPETokenizer'})
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/training.py", line 162, in pretrain
    train_valid_test_dataset_provider)
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/training.py", line 1021, in build_train_valid_test_data_iterators
    train_val_test_num_samples)
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/pretrain_gpt_mp.py", line 128, in train_valid_test_datasets_provider
    skip_warmup=(not args.mmap_warmup))
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 43, in build_train_valid_test_datasets
    seq_length, seed, skip_warmup)
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 118, in _build_train_valid_test_datasets
    train_dataset = build_dataset(0, 'train')
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 115, in build_dataset
    seq_length, seed)
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 156, in __init__
    num_samples, seq_length, seed)
  File "/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py", line 274, in _build_index_mappings
    from megatron.data import helpers
ImportError: cannot import name 'helpers' from 'megatron.data' (/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/__init__.py)
```

To fix this, please go into `aws-neuron-reference-for-megatron-lm/megatron/data/` and run `make`:

```
pip install pybind11
pushd .
cd aws-neuron-reference-for-megatron-lm/megatron/data/
make
popd
```

### [Error: Out of space while checkpointing](#id20)[#](#error-out-of-space-while-checkpointing "Permalink to this headline")

You may see an error like the following. The model checkpoints are large, as they dump all the model weights, optimizer, and RNG states, and if these are checkpointed frequently, the storage can run out fast. Please make sure you have enough disk space:

```
Traceback (most recent call last):
  File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch/serialization.py", line 380, in save
    _save(obj, opened_zipfile, pickle_module, pickle_protocol)
  File "/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch/serialization.py", line 604, in _save
    zip_file.write_record(name, storage.data_ptr(), num_bytes)
OSError: [Errno 28] No space left on device
```
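One way to avoid this error is to estimate disk needs before the run. The sketch below (illustrative only; it assumes the ~76GB checkpoint size and the `--save-interval 1500` value mentioned in the checkpointing section, and the function name is hypothetical) estimates how much storage a run will consume if every checkpoint is kept:

```python
# Sketch: estimate disk space consumed by checkpoints over a run.
# ~76 GB per checkpoint and the save interval of 1500 iterations come
# from the tutorial text; total_iterations is illustrative.

def checkpoint_disk_gb(total_iterations: int, save_interval: int,
                       checkpoint_size_gb: float = 76.0) -> float:
    """Total GB written if every checkpoint is kept on disk."""
    num_checkpoints = total_iterations // save_interval
    return num_checkpoints * checkpoint_size_gb

# A 10000-iteration run saving every 1500 iterations keeps 6 checkpoints:
print(checkpoint_disk_gb(10000, 1500))  # -> 456.0
```

With numbers like these, even the 512GB of storage recommended at the top of the tutorial fills quickly, so prune old checkpoints or increase the save interval if disk space is tight.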
href="../../../../../neuron-customops/tutorials/customop-mlp-training.html"> Neuron Custom C++ Operators in MLP Training </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html"> Neuron Custom C++ Operators Performance Optimization </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../additional-examples-training.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox"> <label for="toctree-checkbox-16"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron"> AWS Neuron Reference for Nemo Megatron GitHub Repository </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples"> AWS Neuron Samples for EKS </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples"> AWS Neuron Samples for AWS ParallelCluster </a> </li> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../api-reference-guide/training/index.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox"> <label for="toctree-checkbox-17"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../api-reference-guide/training/pytorch-neuron-parallel-compile.html"> PyTorch Neuron neuron_parallel_compile CLI ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> 
</li> <li class="toctree-l4"> <a class="reference internal" href="../../api-reference-guide/training/torch-neuron-envvars.html"> PyTorch Neuron Environment Variables ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../../general/arch/neuron-features/neuron-caching.html"> Neuron Persistent Cache </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../api-reference-guide/torch-neuronx-profiling-api.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) Profiling API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../programming-guide/training/index.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox"> <label for="toctree-checkbox-18"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../programming-guide/training/pytorch-neuron-programming-guide.html"> Developer Guide for Training with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../programming-guide/training/pytorch-neuron-debug.html"> How to debug models in PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../programming-guide/torch-neuronx-profiling-dev-guide.html"> Developer Guide for Profiling with PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../misc-training.html"> Misc </a> <input 
class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox"> <label for="toctree-checkbox-19"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../pytorch-neuron-supported-operators.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) - Supported Operators </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../setup-trn1-multi-node-execution.html"> How to prepare trn1.32xlarge for multi-node execution </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../training-troubleshooting.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) for Training Troubleshooting Guide </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../../release-notes/torch/torch-neuronx/index.html"> PyTorch Neuron ( <code class="docutils literal notranslate"> <span class="pre"> torch-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../tensorflow/index.html"> TensorFlow Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox"> <label for="toctree-checkbox-20"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../tensorflow/tensorflow-setup.html"> Tensorflow Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuronx-inference.html"> Inference (Inf2 &amp; Trn1) </a> <input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox"> <label for="toctree-checkbox-21"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 
has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox"> <label for="toctree-checkbox-22"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html"> HuggingFace Roberta-Base </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html"> Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuronx/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox"> <label for="toctree-checkbox-23"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Auto Multicore Replication (Experimental) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) analyze_model API </a> 
</li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox"> <label for="toctree-checkbox-24"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuronx </span> </code> ) Release Notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron-inference.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox"> <label for="toctree-checkbox-25"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox"> <label for="toctree-checkbox-26"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li 
class="toctree-l3 has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/additional-examples.html"> Additional Examples </a> <input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox"> <label for="toctree-checkbox-27"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference"> AWS Neuron Samples GitHub Repository </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox"> <label for="toctree-checkbox-28"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/api-tracing-python-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Tracing API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html"> TensorFlow 2.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) analyze_model API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/api-compilation-python-api.html"> TensorFlow 1.x ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Compilation API </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/api-auto-replication-api.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> </code> ) Auto 
Multicore Replication (Experimental) </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox"> <label for="toctree-checkbox-29"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Release Notes </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF2.x) </span> </code> ) Accelerated (torch-neuron) Python APIs and Graph Ops </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html"> TensorFlow Neuron ( <code class="docutils literal notranslate"> <span class="pre"> tensorflow-neuron </span> <span class="pre"> (TF1.x) </span> </code> ) Supported operators </a> </li> </ul> </li> </ul> </li> <li class="toctree-l2"> <a class="reference internal" href="../../../../tensorflow/training.html"> Training </a> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" 
href="../../../../mxnet-neuron/index.html"> Apache MXNet (Incubating) </a> <input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox"> <label for="toctree-checkbox-30"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../mxnet-neuron/mxnet-neuron-setup.html"> MXNet Neuron Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../mxnet-neuron/inference-mxnet-neuron.html"> Inference (Inf1) </a> <input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox"> <label for="toctree-checkbox-31"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox"> <label for="toctree-checkbox-32"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html"> Computer Vision Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html"> Natural Language Processing (NLP) Tutorials </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html"> Utilizing Neuron Capabilities Tutorials </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../mxnet-neuron/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox"> <label for="toctree-checkbox-33"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a 
class="reference internal" href="../../../../mxnet-neuron/api-compilation-python-api.html"> Neuron Apache MXNet (Incubating) Compilation Python API </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../mxnet-neuron/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox"> <label for="toctree-checkbox-34"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../../general/appnotes/mxnet-neuron/flex-eg.html"> Flexible Execution Group (FlexEG) in Neuron-MXNet </a> </li> </ul> </li> <li class="toctree-l3 has-children"> <a class="reference internal" href="../../../../mxnet-neuron/misc-mxnet-neuron.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox"> <label for="toctree-checkbox-35"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l4"> <a class="reference internal" href="../../../../mxnet-neuron/troubleshooting-guide.html"> Troubleshooting Guide for Neuron Apache MXNet (Incubating) </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../../release-notes/mxnet-neuron/mxnet-neuron.html"> What's New </a> </li> <li class="toctree-l4"> <a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html"> Neuron Apache MXNet (Incubating) Supported operators </a> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> ML Libraries </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../../libraries/transformers-neuronx/index.html"> Transformers Neuron </a> <input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox"> <label 
for="toctree-checkbox-36"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../../libraries/transformers-neuronx/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../../libraries/transformers-neuronx/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox"> <label for="toctree-checkbox-37"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) Developer Guide </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox"> <label for="toctree-checkbox-38"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb"> Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb"> Hugging Face facebook/opt-13b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" 
href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb"> Hugging Face facebook/opt-30b autoregressive sampling on Inf2 &amp; Trn1 </a> </li> <li class="toctree-l3"> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb"> Hugging Face facebook/opt-66b autoregressive sampling on Inf2 </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox"> <label for="toctree-checkbox-39"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../../release-notes/torch/transformers-neuronx/index.html"> Transformers Neuron ( <code class="docutils literal notranslate"> <span class="pre"> transformers-neuronx </span> </code> ) release notes </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/index.html"> Neuron Distributed </a> <input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox"> <label for="toctree-checkbox-40"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l2"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/setup/index.html"> Setup </a> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/app_notes.html"> App Notes </a> <input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox"> <label for="toctree-checkbox-41"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li 
class="toctree-l3"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html"> Tensor Parallelism Overview </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/api-reference-guide.html"> API Reference Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox"> <label for="toctree-checkbox-42"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/api_guide.html"> API Reference Guide ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/developer-guide.html"> Developer Guide </a> <input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox"> <label for="toctree-checkbox-43"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/tp_developer_guide.html"> Developer guide for Tensor Parallelism ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/index.html"> Tutorials </a> <input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox"> <label for="toctree-checkbox-44"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training.html"> Training using Tensor Parallelism </a> </li> <li class="toctree-l3"> <a 
class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html"> Training GPT-NeoX 6.9B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html"> Training GPT-NeoX 20B using TP and ZeRO-1 </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html"> T5 inference with Tensor Parallelism </a> </li> <li class="toctree-l3"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/inference.html"> Inference using Tensor Parallelism </a> </li> </ul> </li> <li class="toctree-l2 has-children"> <a class="reference internal" href="../../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html"> Misc </a> <input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox"> <label for="toctree-checkbox-45"> <i class="fas fa-chevron-down"> </i> </label> <ul> <li class="toctree-l3"> <a class="reference internal" href="../../../../../release-notes/neuronx-distributed/neuronx-distributed.html"> Neuron Distributed Release Notes ( <code class="docutils literal notranslate"> <span class="pre"> neuronx-distributed </span> </code> ) </a> </li> </ul> </li> </ul> </li> <li class="toctree-l1"> <a class="reference internal" href="../../../../../libraries/nemo-megatron/index.html"> AWS Neuron Reference for NeMo Megatron </a> </li> </ul> <p aria-level="2" class="caption" role="heading"> <span class="caption-text"> User Guide </span> </p> <ul class="nav bd-sidenav"> <li class="toctree-l1 has-children"> <a class="reference internal" href="../../../../../neuron-runtime/index.html"> Neuron Runtime </a> <input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox"> <label for="toctree-checkbox-46"> <i class="fas 
the <a href="https://ebp.jupyterbook.org">Executable Book Project</a> </div> </div> </div> <div id="rtd-footer-container"></div> </div> <!-- A tiny helper pixel to detect if we've scrolled --> <div class="sbt-scroll-pixel-helper"></div> <!-- Main content --> <div class="col py-0 content-container"> <div class="header-article row sticky-top noprint"> <div class="col py-1 d-flex header-article-main"> <div class="header-article__left"> <label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation"> <span class="headerbtn__icon-container"> <i class="fas fa-bars"></i> </span> </label> </div> <div class="header-article__right"> <button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode"> <span class="headerbtn__icon-container"> <i class="fas fa-expand"></i> </span> </button> <div class="menu-dropdown menu-dropdown-repository-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories"> <i class="fab fa-github"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository"> <span class="headerbtn__icon-container"> <i class="fab fa-github"></i> </span> <span class="headerbtn__text-container">repository</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/torch/torch-neuronx/tutorials/training/megatron_lm_gpt.html&amp;body=Your%20issue%20content%20here." 
class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue"> <span class="headerbtn__icon-container"> <i class="fas fa-lightbulb"></i> </span> <span class="headerbtn__text-container">open issue</span> </a> </li> <li> <a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/torch/torch-neuronx/tutorials/training/megatron_lm_gpt.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page"> <span class="headerbtn__icon-container"> <i class="fas fa-pencil-alt"></i> </span> <span class="headerbtn__text-container">suggest edit</span> </a> </li> </ul> </div> </div> <div class="menu-dropdown menu-dropdown-download-buttons"> <button class="headerbtn menu-dropdown__trigger" aria-label="Download this page"> <i class="fas fa-download"></i> </button> <div class="menu-dropdown__content"> <ul> <li> <a href="../../../../../_sources/frameworks/torch/torch-neuronx/tutorials/training/megatron_lm_gpt.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file"> <span class="headerbtn__icon-container"> <i class="fas fa-file"></i> </span> <span class="headerbtn__text-container">.rst</span> </a> </li> <li> <button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF"> <span class="headerbtn__icon-container"> <i class="fas fa-file-pdf"></i> </span> <span class="headerbtn__text-container">.pdf</span> </button> </li> </ul> </div> </div> <label for="__page-toc" class="headerbtn headerbtn-page-toc"> <span class="headerbtn__icon-container"> <i class="fas fa-list"></i> </span> </label> </div> </div> <!-- Table of contents --> <div class="col-md-3 bd-toc show noprint"> <div class="tocsection onthispage pt-5 pb-3"> <i class="fas fa-list"></i> Contents </div> <nav id="bd-toc-nav" aria-label="Page"> <ul class="visible nav section-nav 
flex-column"> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#install-pytorch-neuron"> Install PyTorch Neuron </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#download-preprocessed-wikipedia-dataset"> Download Preprocessed Wikipedia Dataset </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#setting-up-the-training-environment-on-trn1-32xlarge"> Setting up the training environment on trn1.32xlarge </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#gpt-pretraining-python-script"> GPT Pretraining Python Script </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#gpt-training-shell-script"> GPT Training Shell Script </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#initiating-a-training-job"> Initiating a Training Job </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#monitoring-training-job-progress"> Monitoring Training Job Progress </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#monitoring-training-job-progress-using-neuron-top"> Monitoring Training Job Progress using neuron-top </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#monitoring-training-job-progress-using-tensorboard"> Monitoring Training Job Progress using TensorBoard </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#finishing-the-tutorial"> Finishing the tutorial </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#running-a-multi-node-gpt"> Running a multi-node GPT </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#checkpointing-gpt-model"> Checkpointing GPT Model </a> </li> 
<li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#preparing-wikipedia-dataset-from-scratch"> Preparing Wikipedia Dataset from Scratch </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#known-issues-and-limitations"> Known issues and limitations </a> <ul class="nav section-nav flex-column"> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#no-broadcast-support"> No broadcast support </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#no-pipeline-parallel-support"> No pipeline parallel support </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#dropout-is-disabled"> Dropout is disabled </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#failed-accept4-too-many-open-files"> “Failed accept4: Too many open files” </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#error-cannot-import-name-helpers-from-megatron-data"> Error: cannot import name ‘helpers’ from ‘megatron.data’ </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#error-out-of-space-while-checkpointing"> Error: Out of space while checkpointing </a> </li> </ul> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#troubleshooting"> Troubleshooting </a> </li> </ul> </nav> </div> </div> <div class="article row"> <div class="col pl-md-3 pl-lg-5 content-container"> <!-- Table of contents that is only displayed when printing the page --> <div id="jb-print-docs-body" class="onlyprint"> <h1>Megatron-LM GPT Pretraining Tutorial [End of Support]</h1> <!-- Table of contents --> <div id="print-main-content"> <div id="jb-print-toc"> <div> <h2> Contents </h2> </div> <nav aria-label="Page"> <ul class="visible nav section-nav flex-column"> <li class="toc-h2 
nav-item toc-entry"> <a class="reference internal nav-link" href="#install-pytorch-neuron"> Install PyTorch Neuron </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#download-preprocessed-wikipedia-dataset"> Download Preprocessed Wikipedia Dataset </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#setting-up-the-training-environment-on-trn1-32xlarge"> Setting up the training environment on trn1.32xlarge </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#gpt-pretraining-python-script"> GPT Pretraining Python Script </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#gpt-training-shell-script"> GPT Training Shell Script </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#initiating-a-training-job"> Initiating a Training Job </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#monitoring-training-job-progress"> Monitoring Training Job Progress </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#monitoring-training-job-progress-using-neuron-top"> Monitoring Training Job Progress using neuron-top </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#monitoring-training-job-progress-using-tensorboard"> Monitoring Training Job Progress using TensorBoard </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#finishing-the-tutorial"> Finishing the tutorial </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#running-a-multi-node-gpt"> Running a multi-node GPT </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#checkpointing-gpt-model"> Checkpointing GPT Model </a> </li> <li class="toc-h2 nav-item 
toc-entry"> <a class="reference internal nav-link" href="#preparing-wikipedia-dataset-from-scratch"> Preparing Wikipedia Dataset from Scratch </a> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#known-issues-and-limitations"> Known issues and limitations </a> <ul class="nav section-nav flex-column"> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#no-broadcast-support"> No broadcast support </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#no-pipeline-parallel-support"> No pipeline parallel support </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#dropout-is-disabled"> Dropout is disabled </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#failed-accept4-too-many-open-files"> “Failed accept4: Too many open files” </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#error-cannot-import-name-helpers-from-megatron-data"> Error: cannot import name ‘helpers’ from ‘megatron.data’ </a> </li> <li class="toc-h3 nav-item toc-entry"> <a class="reference internal nav-link" href="#error-out-of-space-while-checkpointing"> Error: Out of space while checkpointing </a> </li> </ul> </li> <li class="toc-h2 nav-item toc-entry"> <a class="reference internal nav-link" href="#troubleshooting"> Troubleshooting </a> </li> </ul> </nav> </div> </div> </div> <main id="main-content" role="main"> <div> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> <div class="section" id="megatron-lm-gpt-pretraining-tutorial-end-of-support"> <span id="megatron-lm-pretraining-tutorial"></span><h1>Megatron-LM GPT Pretraining Tutorial [End of Support]<a class="headerlink" 
href="#megatron-lm-gpt-pretraining-tutorial-end-of-support" title="Permalink to this headline">#</a></h1> <p>GPT is a large language model that excels at many natural language processing (NLP) tasks. It is derived from the decoder part of the Transformer. <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm">Neuron Reference For Megatron-LM [EOS]</a> is a library that enables large-scale distributed training of language models such as GPT and is adapted from <a class="reference external" href="https://github.com/NVIDIA/Megatron-LM">Megatron-LM</a>. This tutorial explains how to run the Neuron reference for Megatron-LM GPT pretraining on Trainium.</p> <p>The AWS Neuron SDK provides access to Trainium devices through an extension of PyTorch/XLA - a library that includes the familiar PyTorch interface along with XLA-specific additions. For Trainium customers, this means that existing PyTorch training scripts can be executed on Trn1 instances with minimal code modifications. For additional details relating to PyTorch/XLA, please refer to the <a class="reference external" href="https://pytorch.org/xla">official PyTorch/XLA documentation</a>.</p> <p>To run on Trainium, Neuron Reference For Megatron-LM library includes the following changes:</p> <ul class="simple"> <li><p>GPU devices are replaced with Pytorch/XLA devices.</p></li> <li><p>Pytorch/XLA distributed backend is used to bridge the PyTorch distributed APIs to XLA communication semantics.</p></li> <li><p>Pytorch/XLA MpDeviceLoader is used for the data ingestion pipelines. 
Pytorch/XLA MpDeviceLoader helps improve performance by overlapping the three execution steps: tracing, compilation and data batch loading to the device.</p></li> <li><p>CUDA APIs are mapped to generic PyTorch APIs.</p></li> <li><p>CUDA fused optimizers are replaced with generic PyTorch alternatives.</p></li> </ul> <p>The GPT example in this tutorial is an adaptation of the original Megatron-LM GPT example, trained using the Wikipedia dataset.</p> <div class="contents local topic" id="table-of-contents"> <p class="topic-title">Table of Contents</p> <ul class="simple"> <li><p><a class="reference internal" href="#install-pytorch-neuron" id="id1">Install PyTorch Neuron</a></p></li> <li><p><a class="reference internal" href="#download-preprocessed-wikipedia-dataset" id="id2">Download Preprocessed Wikipedia Dataset</a></p></li> <li><p><a class="reference internal" href="#setting-up-the-training-environment-on-trn1-32xlarge" id="id3">Setting up the training environment on trn1.32xlarge</a></p></li> <li><p><a class="reference internal" href="#gpt-pretraining-python-script" id="id4">GPT Pretraining Python Script</a></p></li> <li><p><a class="reference internal" href="#gpt-training-shell-script" id="id5">GPT Training Shell Script</a></p></li> <li><p><a class="reference internal" href="#initiating-a-training-job" id="id6">Initiating a Training Job</a></p></li> <li><p><a class="reference internal" href="#monitoring-training-job-progress" id="id7">Monitoring Training Job Progress</a></p></li> <li><p><a class="reference internal" href="#monitoring-training-job-progress-using-neuron-top" id="id8">Monitoring Training Job Progress using neuron-top</a></p></li> <li><p><a class="reference internal" href="#monitoring-training-job-progress-using-tensorboard" id="id9">Monitoring Training Job Progress using TensorBoard</a></p></li> <li><p><a class="reference internal" href="#finishing-the-tutorial" id="id10">Finishing the tutorial</a></p></li> <li><p><a class="reference internal" 
href="#running-a-multi-node-gpt" id="id11">Running a multi-node GPT</a></p></li> <li><p><a class="reference internal" href="#checkpointing-gpt-model" id="id12">Checkpointing GPT Model</a></p></li> <li><p><a class="reference internal" href="#preparing-wikipedia-dataset-from-scratch" id="id13">Preparing Wikipedia Dataset from Scratch</a></p></li> <li><p><a class="reference internal" href="#known-issues-and-limitations" id="id14">Known issues and limitations</a></p> <ul> <li><p><a class="reference internal" href="#no-broadcast-support" id="id15">No broadcast support</a></p></li> <li><p><a class="reference internal" href="#no-pipeline-parallel-support" id="id16">No pipeline parallel support</a></p></li> <li><p><a class="reference internal" href="#dropout-is-disabled" id="id17">Dropout is disabled</a></p></li> <li><p><a class="reference internal" href="#failed-accept4-too-many-open-files" id="id18">“Failed accept4: Too many open files”</a></p></li> <li><p><a class="reference internal" href="#error-cannot-import-name-helpers-from-megatron-data" id="id19">Error: cannot import name ‘helpers’ from ‘megatron.data’</a></p></li> <li><p><a class="reference internal" href="#error-out-of-space-while-checkpointing" id="id20">Error: Out of space while checkpointing</a></p></li> </ul> </li> <li><p><a class="reference internal" href="#troubleshooting" id="id21">Troubleshooting</a></p></li> </ul> </div> <div class="admonition note"> <p class="admonition-title">Note</p> <p>Logs used in tutorials do not present latest performance numbers</p> <p>For latest performance numbers visit <a class="reference internal" href="../../../../../general/benchmarks/index.html#benchmark"><span class="std std-ref">Neuron Performance</span></a></p> </div> <div class="section" id="install-pytorch-neuron"> <h2><a class="toc-backref" href="#id1">Install PyTorch Neuron</a><a class="headerlink" href="#install-pytorch-neuron" title="Permalink to this headline">#</a></h2> <p>Before running the tutorial please 
follow the installation instructions at:</p> <p><a class="reference internal" href="../../../../../general/setup/torch-neuronx.html#setup-torch-neuronx"><span class="std std-ref">Install PyTorch Neuron on Trn1</span></a></p> <p>Please set the instance storage to <em>512GB</em> or more if you intend to run multiple experiments and save many checkpoints.</p> </div> <div class="section" id="download-preprocessed-wikipedia-dataset"> <h2><a class="toc-backref" href="#id2">Download Preprocessed Wikipedia Dataset</a><a class="headerlink" href="#download-preprocessed-wikipedia-dataset" title="Permalink to this headline">#</a></h2> <p>Download the vocabulary file, the merge table file, and the preprocessed Wikipedia dataset using the following commands:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>export DATA_DIR=~/examples_datasets/gpt2
mkdir -p ${DATA_DIR} &amp;&amp; cd ${DATA_DIR}
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/my-gpt2_text_document.bin . --no-sign-request
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/my-gpt2_text_document.idx . --no-sign-request
aws s3 cp s3://neuron-s3/training_datasets/gpt/wikipedia/license.txt .
--no-sign-request </pre></div> </div> <p>See the section <code class="docutils literal notranslate"><span class="pre">Preparing</span> <span class="pre">Wikipedia</span> <span class="pre">dataset</span> <span class="pre">from</span> <span class="pre">scratch</span></code> if you would like to recreate the preprocessed dataset from scratch.</p> </div> <div class="section" id="setting-up-the-training-environment-on-trn1-32xlarge"> <h2><a class="toc-backref" href="#id3">Setting up the training environment on trn1.32xlarge</a><a class="headerlink" href="#setting-up-the-training-environment-on-trn1-32xlarge" title="Permalink to this headline">#</a></h2> <p>Please follow the <span class="xref std std-ref">instructions</span> to set up a Python virtual environment with Neuron packages.</p> <p>Install the Python3 development package needed to build the data helpers tool. If you are on Amazon Linux, run:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>sudo yum install -y python3-devel</pre></div> </div> <p>If you are on Ubuntu, run:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>sudo apt install -y python3-dev</pre></div> </div> <p>Clone the AWS Neuron Reference for Megatron-LM package, install its dependencies, and build the data helpers tool:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>cd ~/
git clone https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm.git
pip install pybind11 regex
pushd .
cd aws-neuron-reference-for-megatron-lm/megatron/data/
make
popd</pre></div> </div> </div> <div class="section" id="gpt-pretraining-python-script"> <h2><a class="toc-backref" href="#id4">GPT Pretraining Python Script</a><a class="headerlink" href="#gpt-pretraining-python-script" title="Permalink to this headline">#</a></h2> <p>The GPT pretraining Python script is a wrapper that imports the Megatron-LM library modules and sets up the pieces needed by the Megatron-LM trainer: the GPT model, the loss function, the forward pass, and the data provider. It is adapted from <a class="reference external" href="https://github.com/NVIDIA/Megatron-LM/blob/main/pretrain_gpt.py">pretrain_gpt.py</a>. The Neuron changes are:</p> <ul class="simple"> <li><p>Use the XLA device.</p></li> <li><p>Do not use mpu.broadcast_data, as it is currently unsupported.
Instead, each worker reads the data in parallel.</p></li> <li><p>Use the int datatype instead of long for token data.</p></li> </ul> <p>The script is available at <code class="docutils literal notranslate"><span class="pre">~/aws-neuron-reference-for-megatron-lm/pretrain_gpt.py</span></code>.</p> </div> <div class="section" id="gpt-training-shell-script"> <h2><a class="toc-backref" href="#id5">GPT Training Shell Script</a><a class="headerlink" href="#gpt-training-shell-script" title="Permalink to this headline">#</a></h2> <p>The GPT training shell script runs the above Python script with the following model configuration (for a 6.7-billion-parameter model):</p> <ul class="simple"> <li><p>Number of layers: 32</p></li> <li><p>Hidden size: 4096</p></li> <li><p>Number of attention heads: 32</p></li> <li><p>Sequence length: 2048</p></li> <li><p>Maximum positional embedding size: 2048</p></li> </ul> <p>The following training parameters are used:</p> <ul class="simple"> <li><p>The number of gradient accumulation microsteps is 64, with a worker batch size of 1.</p></li> <li><p>The tensor parallelism degree is 8.</p></li> <li><p>The data parallelism degree is 4.</p></li> <li><p>The number of workers is 32.</p></li> </ul> <p>Additionally, the script uses:</p> <ul class="simple"> <li><p>CPU initialization.</p></li> <li><p>AdamW optimizer (default).</p></li> <li><p>Gradient clipping.</p></li> <li><p>No CUDA fusions (bias-gelu, masked-softmax, bias-dropout).</p></li> <li><p>Disabled contiguous buffer in local DDP.</p></li> <li><p>The option <code class="docutils literal notranslate"><span class="pre">--distributed-backend</span> <span class="pre">xla</span></code>, which selects the XLA distributed backend to bridge the PyTorch distributed APIs to XLA communication semantics.</p></li> </ul> <p>See <a class="reference external" href="https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/arguments.py">this link</a> for a full list of options and their descriptions.</p> <div class="admonition note"> <p
class="admonition-title">Note</p> <p>Not all options are supported. Currently, only the tensor-parallel and data-parallel modes of Neuron Reference For Megatron-LM are supported, with a tensor-parallel degree of 8 and a data-parallel degree of up to 64.</p> </div> <p>The script for running on a single node is available at <code class="docutils literal notranslate"><span class="pre">~/aws-neuron-reference-for-megatron-lm/examples/pretrain_gpt3_6.7B_32layers_bf16.sh</span></code>.</p> <p>This shell script expects the dataset files to be located in ~/examples_datasets/gpt2/ following the steps above. If you place the dataset files in another location, please update the DATA_PATH variable in the shell script.</p> </div> <div class="section" id="initiating-a-training-job"> <h2><a class="toc-backref" href="#id6">Initiating a Training Job</a><a class="headerlink" href="#initiating-a-training-job" title="Permalink to this headline">#</a></h2> <p>To run the GPT example, first activate the Python virtual environment, change to the Megatron-LM package location, and add execute permission to the scripts:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>source ~/aws_neuron_venv_pytorch/bin/activate
cd ~/aws-neuron-reference-for-megatron-lm/
chmod +x *.sh</pre></div> </div> <p>Next, run the parallel compilation of graphs in order to reduce compilation time
during the actual run.</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>neuron_parallel_compile ./examples/pretrain_gpt3_6.7B_32layers_bf16.sh</pre></div> </div> <p>This command performs a short trial run of the training script to extract the graphs, and then compiles those graphs in parallel before populating the persistent cache with the compiled graphs. This helps reduce the compilation time during the actual run of the training script.</p> <div class="admonition note"> <p class="admonition-title">Note</p> <p>Please ignore the results of the trial run, as they are not the actual execution results.</p> </div> <p>If some or all of the graphs were already compiled and cached in the persistent cache, then fewer or none of the graphs will need compilation. To force recompilation, you can remove the cache directory at <code class="docutils literal notranslate"><span class="pre">/var/tmp/neuron-compile-cache/</span></code>.</p> <p>Rerunning this precompilation step is recommended after any change to the script (such as batch size, number of layers, or number of workers).
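</p> <p>Conceptually, the persistent cache behaves like a content-addressed store: compiled artifacts are looked up by a key derived from the graph and its compilation flags. The following is a simplified Python sketch of that idea only; it is not the actual Neuron cache implementation, and all names in it are illustrative:</p>

```python
import hashlib

# Simplified illustration of a compile cache keyed on the graph plus the
# compiler flags. This is NOT the actual Neuron implementation; the names
# below are hypothetical.
_cache = {}

def cache_key(graph_text, compiler_flags):
    # Identical graph + identical flags -> identical key -> cache hit.
    blob = (graph_text + "|" + " ".join(sorted(compiler_flags))).encode()
    return hashlib.sha256(blob).hexdigest()

def compile_graph(graph_text, compiler_flags):
    key = cache_key(graph_text, compiler_flags)
    if key in _cache:
        return _cache[key], True            # cache hit: no recompilation
    artifact = "artifact-" + key[:8]        # stand-in for a compiled graph
    _cache[key] = artifact
    return artifact, False                  # cache miss: graph was compiled

_, hit_first = compile_graph("matmul->gelu", ["--flag-a"])
_, hit_second = compile_graph("matmul->gelu", ["--flag-a"])
_, hit_changed = compile_graph("matmul->gelu", ["--flag-b"])
print(hit_first, hit_second, hit_changed)  # False True False
```

<p>Changing any compilation flag produces a different key, which is why script changes that alter the compiled graph trigger recompilation while unchanged graphs are served from the cache.</p> <p>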
Compilation will only happen if the model graph or its parameters/compilation flags change.</p> <p>Finally, run the script for the actual training run:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>./examples/pretrain_gpt3_6.7B_32layers_bf16.sh</pre></div> </div> <p>During the run, you will see outputs like the one below, with lines showing throughput and loss statistics at every global step.</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>iteration 4873/ 10000 | consumed samples: 311872 | elapsed time per iteration (ms): 8718.9 | learning rate: 1.500E-04 | global batch size: 64 | lm loss: 3.296875E+00 | grad norm: 0.430 | throughput: 7.340</pre></div> </div> </div> <div class="section" id="monitoring-training-job-progress"> <h2><a class="toc-backref" href="#id7">Monitoring Training Job Progress</a><a class="headerlink" href="#monitoring-training-job-progress" title="Permalink to this headline">#</a></h2> <p>Using a single Trn1 instance with 32 NeuronCores, the current GPT pretraining will run for ~81 hours. During this time, you will see the average loss metric begin at 11 and ultimately converge to ~3.2.
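</p> <p>The statistics in the sample log line above are internally consistent, and the reported throughput can be recovered from them directly. A quick check in Python, using the numbers from that log line:</p>

```python
# Values taken from the sample training log line above.
iteration = 4873
consumed_samples = 311872
elapsed_ms_per_iteration = 8718.9

# Samples consumed per iteration equals the global batch size.
global_batch_size = consumed_samples / iteration
print(global_batch_size)                     # 64.0

# Sequences per second = global batch size / seconds per iteration.
throughput = global_batch_size / (elapsed_ms_per_iteration / 1000.0)
print(round(throughput, 2))                  # 7.34
```

<p>This matches the <code class="docutils literal notranslate"><span class="pre">global</span> <span class="pre">batch</span> <span class="pre">size:</span> <span class="pre">64</span></code> and <code class="docutils literal notranslate"><span class="pre">throughput:</span> <span class="pre">7.340</span></code> fields reported by the trainer.</p> <p>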
Throughput for the training job will be ~7.3 seq/sec.</p> </div> <div class="section" id="monitoring-training-job-progress-using-neuron-top"> <h2><a class="toc-backref" href="#id8">Monitoring Training Job Progress using neuron-top</a><a class="headerlink" href="#monitoring-training-job-progress-using-neuron-top" title="Permalink to this headline">#</a></h2> <p>With the training job still running, launch a second SSH connection into the Trn1 instance and use the <code class="docutils literal notranslate"><span class="pre">neuron-top</span></code> command to examine the aggregate NeuronCore utilization.</p> </div> <div class="section" id="monitoring-training-job-progress-using-tensorboard"> <h2><a class="toc-backref" href="#id9">Monitoring Training Job Progress using TensorBoard</a><a class="headerlink" href="#monitoring-training-job-progress-using-tensorboard" title="Permalink to this headline">#</a></h2> <p>The demo includes TensorBoard-compatible logging, which allows the learning rate and training metrics to be monitored in real time.
By default, the training script logs metrics to the following TensorBoard log directory <code class="docutils literal notranslate"><span class="pre">~/aws-neuron-reference-for-megatron-lm/tb_*</span></code>.</p> <p>In order to view your training metrics in TensorBoard, first run the following commands in your SSH session:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">source</span> <span class="o">~/</span><span class="n">aws_neuron_venv_pytorch</span><span class="o">/</span><span class="nb">bin</span><span class="o">/</span><span class="n">activate</span> <span class="n">cd</span> <span class="o">~/</span><span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">reference</span><span class="o">-</span><span class="k">for</span><span class="o">-</span><span class="n">megatron</span><span class="o">-</span><span class="n">lm</span><span class="o">/</span> <span class="n">tensorboard</span> <span class="o">--</span><span class="n">logdir</span> <span class="o">./</span> </pre></div> </div> <p>Once running, open a new SSH connection to the instance and port-forward TCP port 6006 (ex: -L 6006:127.0.0.1:6006). Once the tunnel is established, TensorBoard can then be accessed via web browser at the following URL: <a class="reference external" href="http://localhost:6006/">http://localhost:6006</a>. 
Please note that you will not be able to access TensorBoard if you disconnect your port-forwarding SSH session to the Trainium instance.</p> </div> <div class="section" id="finishing-the-tutorial"> <h2><a class="toc-backref" href="#id10">Finishing the tutorial</a><a class="headerlink" href="#finishing-the-tutorial" title="Permalink to this headline">#</a></h2> <p>Once you are ready, and the training throughput is as expected, there are a couple of options for finishing the GPT pretraining demo:</p> <p><strong>Allow the training script to run to completion</strong>. If you would like to observe the training script run to completion, it is recommended to launch the training script from a terminal multiplexer such as <code class="docutils literal notranslate"><span class="pre">tmux</span></code> or <code class="docutils literal notranslate"><span class="pre">screen</span></code>, and then detach the session so that the training script can run in the background. With this approach, you can safely let the training script run unattended, without risk of an SSH disconnection causing the training job to stop running.</p> <p><strong>Stop the training job early</strong>. To stop the training job early, press CTRL-C in the terminal window in which you launched the training script. In some cases, if you manually cancel a job using CTRL-C and then later want to run the job again, you might first need to terminate all the python processes by the command <code class="docutils literal notranslate"><span class="pre">killall</span> <span class="pre">-9</span> <span class="pre">python3</span></code> .</p> </div> <div class="section" id="running-a-multi-node-gpt"> <h2><a class="toc-backref" href="#id11">Running a multi-node GPT</a><a class="headerlink" href="#running-a-multi-node-gpt" title="Permalink to this headline">#</a></h2> <p>We use SLURM to launch multi-node GPT training jobs. Like single node runs, we have a precompilation step followed by the actual run. 
To precompile:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">sbatch</span> <span class="n">examples</span><span class="o">/</span><span class="n">pretrain_gpt3_6</span><span class="mf">.7</span><span class="n">B_compile</span><span class="o">.</span><span class="n">slurm</span> </pre></div> </div> <p>This will precompile the script <code class="docutils literal notranslate"><span class="pre">examples/pretrain_gpt3_6.7B_32layers_bf16_bs1024_slurm.sh</span></code> on all the nodes and populate the caches.</p> <p>To run the compiled model:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">sbatch</span> <span class="n">examples</span><span class="o">/</span><span class="n">pretrain_gpt3_6</span><span class="mf">.7</span><span class="n">B</span><span class="o">.</span><span class="n">slurm</span> </pre></div> </div> <p>The number of nodes is currently set to 16, and since the tensor-parallel degree used is 8, the data-parallel degree is automatically computed to be 64, resulting in an 8x64 two-dimensional mesh parallelism.</p> <p>The TensorBoard logs are written by the last rank and will be in the TensorBoard log directory <code class="docutils literal notranslate"><span class="pre">~/aws-neuron-reference-for-megatron-lm/tb_*</span></code>.</p> <p>Compared to the single-node script, we use an increased batch size of 1024, which gives us a throughput bump to ~98 seq/sec. The number of iterations is also increased, with changes in the hyperparameters pertaining to learning rates and weight decay.</p> </div> <div class="section" id="checkpointing-gpt-model"> <h2><a class="toc-backref" href="#id12">Checkpointing GPT Model</a><a class="headerlink" href="#checkpointing-gpt-model" title="Permalink to this headline">#</a></h2> <p>A new mode of checkpointing using serialized tensors and staggered save/load is supported to alleviate memory pressure.
To save the model, add the lines:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>--save-xser $CHECKPOINT_PATH --save-interval 1500 </pre></div> </div> <p>This will save a checkpoint to the provided path every 1500 iterations.</p> <div class="admonition note"> <p class="admonition-title">Note</p> <p>Please note that a checkpoint saves all of the model weights, optimizer and rng states (~76GB for a 32-layer model). Checkpointing frequently can therefore quickly exhaust disk storage. Make sure there is enough disk space.</p> </div> <p>To load the checkpoint, we first need to remove <code class="docutils literal notranslate"><span class="pre">--use-cpu-initialization</span></code> from the script and then add</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>--load-xser $CHECKPOINT_PATH </pre></div> </div> <div class="admonition note"> <p class="admonition-title">Note</p> <p>Please note that not removing the --use-cpu-initialization flag may lead to out-of-memory execution and result in unstable resumption of training.</p> </div> </div> <div class="section" id="preparing-wikipedia-dataset-from-scratch"> <h2><a class="toc-backref" href="#id13">Preparing Wikipedia Dataset from Scratch</a><a class="headerlink" href="#preparing-wikipedia-dataset-from-scratch" title="Permalink to this headline">#</a></h2> <p>The process of preparing the Wikipedia dataset follows the original <a class="reference external" href="https://github.com/NVIDIA/Megatron-LM#user-content-datasets">Megatron-LM documentation</a>. You will need a large c5 machine such as c5n.18xlarge running the latest Deep Learning AMI. First download the Wikipedia dataset.
Depending on the network bandwidth, the download is expected to take about 65 minutes.</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>export WIKI_DIR=~/examples_datasets/wiki mkdir -p $WIKI_DIR &amp;&amp; cd $WIKI_DIR wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2 </pre></div> </div> <p>Download the vocabulary and merge table files for the desired model. This example uses the GPT-2 model:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span>export DATA_DIR=~/examples_datasets/gpt2 export GPT2_DATA=${DATA_DIR}/gpt2 mkdir -p ${GPT2_DATA} &amp;&amp; cd ${GPT2_DATA} wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt mkdir -p ${GPT2_DATA}/checkpoint wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O ${GPT2_DATA}/checkpoint/megatron_lm_345m_v0.0.zip </pre></div> </div> <p>Extract the downloaded data using WikiExtractor (this step takes about 2 hours):</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">git</span> <span class="n">clone</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">attardi</span><span class="o">/</span><span class="n">wikiextractor</span><span class="o">.</span><span class="n">git</span> <span class="o">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">wikiextractor</span> <span class="n">cd</span> <span class="o">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">wikiextractor</span> <span class="n">python</span> <span class="o">-</span><span class="n">m</span> <span class="n">wikiextractor</span><span class="o">.</span><span
class="n">WikiExtractor</span> <span class="o">--</span><span class="n">json</span> <span class="o">~/</span><span class="n">examples_datasets</span><span class="o">/</span><span class="n">wiki</span><span class="o">/</span><span class="n">enwiki</span><span class="o">-</span><span class="n">latest</span><span class="o">-</span><span class="n">pages</span><span class="o">-</span><span class="n">articles</span><span class="o">.</span><span class="n">xml</span><span class="o">.</span><span class="n">bz2</span> <span class="o">--</span><span class="n">output</span> <span class="o">~/</span><span class="n">examples_datasets</span><span class="o">/</span><span class="n">wiki</span><span class="o">/</span><span class="n">text</span><span class="o">/</span> <span class="o">-</span><span class="n">q</span> <span class="o">--</span><span class="n">processes</span> <span class="mi">70</span> <span class="mi">2</span><span class="o">&gt;&amp;</span><span class="mi">1</span> <span class="o">|</span> <span class="n">tee</span> <span class="n">wikiextract</span><span class="o">.</span><span class="n">out</span> <span class="o">&amp;</span> </pre></div> </div> <p>The Wikiextractor first preprocesses the template of all pages sequentially, followed by a Map/Reduce process for extracting the pages and converting to the loose json format required by Megatron-LM.</p> <p>Once the extraction completes, we merge the text files with (~2 minutes):</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">conda</span> <span class="n">activate</span> <span class="n">pytorch_latest_p37</span> <span class="n">cd</span> <span class="o">~/</span><span class="n">examples_datasets</span><span class="o">/</span><span class="n">wiki</span> <span class="n">find</span> <span class="o">~/</span><span class="n">examples_datasets</span><span class="o">/</span><span class="n">wiki</span><span class="o">/</span><span class="n">text</span><span 
class="o">/</span> <span class="o">-</span><span class="n">name</span> <span class="n">wiki</span><span class="o">*</span> <span class="o">|</span> <span class="n">parallel</span> <span class="o">-</span><span class="n">m</span> <span class="o">-</span><span class="n">j</span> <span class="mi">70</span> <span class="s2">"cat </span><span class="si">{}</span><span class="s2"> &gt;&gt; mergedfile.json"</span> </pre></div> </div> <p>The <code class="docutils literal notranslate"><span class="pre">mergedfile.json</span></code> size on disk is 16GB. With it, create the binary data format for Megatron GPT2. NOTE: Refer to <a class="reference external" href="https://github.com/NVIDIA/Megatron-LM/issues/62">this solution</a> if an <code class="docutils literal notranslate"><span class="pre">IndexError:</span> <span class="pre">list</span> <span class="pre">index</span> <span class="pre">out</span> <span class="pre">of</span> <span class="pre">range</span></code> occurs. To create the binary data, type the following command:</p> <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">python</span> <span class="o">~/</span><span class="n">aws</span><span class="o">-</span><span class="n">neuron</span><span class="o">-</span><span class="n">reference</span><span class="o">-</span><span class="k">for</span><span class="o">-</span><span class="n">megatron</span><span class="o">-</span><span class="n">lm</span><span class="o">/</span><span class="n">tools</span><span class="o">/</span><span class="n">preprocess_data</span><span class="o">.</span><span class="n">py</span> \ <span class="o">--</span><span class="nb">input</span> <span class="o">~/</span><span class="n">examples_datasets</span><span class="o">/</span><span class="n">wiki</span><span class="o">/</span><span class="n">mergedfile</span><span class="o">.</span><span class="n">json</span> \ <span class="o">--</span><span class="n">output</span><span class="o">-</span><span 
class="n">prefix</span> <span class="n">my</span><span class="o">-</span><span class="n">gpt2</span> \ <span class="o">--</span><span class="n">vocab</span> <span class="o">~/</span><span class="n">examples_datasets</span><span class="o">/</span><span class="n">gpt2</span><span class="o">/</span><span class="n">gpt2</span><span class="o">-</span><span class="n">vocab</span><span class="o">.</span><span class="n">json</span> \ <span class="o">--</span><span class="n">dataset</span><span class="o">-</span><span class="n">impl</span> <span class="n">mmap</span> \ <span class="o">--</span><span class="n">tokenizer</span><span class="o">-</span><span class="nb">type</span> <span class="n">GPT2BPETokenizer</span> \ <span class="o">--</span><span class="n">merge</span><span class="o">-</span><span class="n">file</span> <span class="o">~/</span><span class="n">examples_datasets</span><span class="o">/</span><span class="n">gpt2</span><span class="o">/</span><span class="n">gpt2</span><span class="o">-</span><span class="n">merges</span><span class="o">.</span><span class="n">txt</span> \ <span class="o">--</span><span class="n">append</span><span class="o">-</span><span class="n">eod</span> \ <span class="o">--</span><span class="n">workers</span> <span class="mi">70</span> </pre></div> </div> <p>Files my-gpt2_text_document.* are generated after about 12 minutes.</p> </div> <div class="section" id="known-issues-and-limitations"> <h2><a class="toc-backref" href="#id14">Known issues and limitations</a><a class="headerlink" href="#known-issues-and-limitations" title="Permalink to this headline">#</a></h2> <div class="section" id="no-broadcast-support"> <h3><a class="toc-backref" href="#id15">No broadcast support</a><a class="headerlink" href="#no-broadcast-support" title="Permalink to this headline">#</a></h3> <p>Currently, the mpu.broadcast_data is unsupported on Trainium.</p> </div> <div class="section" id="no-pipeline-parallel-support"> <h3><a class="toc-backref" 
href="#id16">No pipeline parallel support</a><a class="headerlink" href="#no-pipeline-parallel-support" title="Permalink to this headline">#</a></h3> <p>Currently, only tensor parallel and data parallel are supported, and there is no pipeline parallel support in Neuron Reference For Megatron-LM.</p> </div> <div class="section" id="dropout-is-disabled"> <h3><a class="toc-backref" href="#id17">Dropout is disabled</a><a class="headerlink" href="#dropout-is-disabled" title="Permalink to this headline">#</a></h3> <p>Currently, dropout is disabled in the example.</p> </div> <div class="section" id="failed-accept4-too-many-open-files"> <h3><a class="toc-backref" href="#id18">“Failed accept4: Too many open files”</a><a class="headerlink" href="#failed-accept4-too-many-open-files" title="Permalink to this headline">#</a></h3> <p>When running the Megatron-LM GPT3 6.7B example above on <cite>Ubuntu Server 20.04 LTS (HVM)</cite> and <cite>Ubuntu Server 22.04 LTS (HVM)</cite> AMIs, you may encounter the following “Failed accept4: Too many open files” error:</p> <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>E0301<span class="w"> </span><span class="m">08</span>:06:14.272283286<span class="w"> </span><span class="m">72588</span><span class="w"> </span>tcp_server_posix.cc:214<span class="o">]</span><span class="w"> </span>Failed<span class="w"> </span>accept4:<span class="w"> </span>Too<span class="w"> </span>many<span class="w"> </span>open<span class="w"> </span>files <span class="m">2023</span>-03-01<span class="w"> </span><span class="m">08</span>:06:15.515834:<span class="w"> </span>F<span class="w"> </span>tensorflow/libtpu/neuron/neuron_compiler.cc:200<span class="o">]</span><span class="w"> </span>Check<span class="w"> </span>failed:<span class="w"> </span>fd<span class="w"> </span>!<span class="o">=</span><span class="w"> </span>-1<span class="w"> </span>Opening<span class="w"> </span>lock<span class="w"> </span>file<span class="w"> 
</span>failed<span class="w"> </span>with<span class="w"> </span>errno<span class="w"> </span><span class="m">24</span> </pre></div> </div> <p>The reason is that on this AMI, the “ulimit -n” is set to 1024, which is too low compared to, for example, <cite>Amazon Linux 2 AMI (HVM) - Kernel 5.10</cite>, where it is set to 65535 by default. To work around this issue, please increase “ulimit -n” to a higher value, such as 65535, which matches <cite>Amazon Linux 2 AMI (HVM) - Kernel 5.10</cite> and is sufficient for the Megatron-LM GPT3 6.7B example. Additionally, this can be set within the shell script (which is run using the SLURM srun command) so that it is set for each worker process.</p> <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">ulimit</span><span class="w"> </span>-n<span class="w"> </span><span class="m">65535</span> </pre></div> </div> </div> <div class="section" id="error-cannot-import-name-helpers-from-megatron-data"> <h3><a class="toc-backref" href="#id19">Error: cannot import name ‘helpers’ from ‘megatron.data’</a><a class="headerlink" href="#error-cannot-import-name-helpers-from-megatron-data" title="Permalink to this headline">#</a></h3> <p>You may encounter the error “cannot import name ‘helpers’ from ‘megatron.data’” like below:</p> <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>Exception<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="nv">device</span><span class="o">=</span>NEURONT:0:<span class="w"> </span>cannot<span class="w"> </span>import<span class="w"> </span>name<span class="w"> </span><span class="s1">'helpers'</span><span class="w"> </span>from<span class="w"> </span><span class="s1">'megatron.data'</span><span class="w"> </span><span class="o">(</span>/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/__init__.py<span class="o">)</span> Traceback<span class="w"> </span><span class="o">(</span>most<span
class="w"> </span>recent<span class="w"> </span>call<span class="w"> </span>last<span class="o">)</span>: <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">373</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>_mp_start_fn <span class="w"> </span>_start_fn<span class="o">(</span>index,<span class="w"> </span>pf_cfg,<span class="w"> </span>fn,<span class="w"> </span>args<span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch_xla/distributed/xla_multiprocessing.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">367</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>_start_fn <span class="w"> </span>fn<span class="o">(</span>gindex,<span class="w"> </span>*args<span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/pretrain_gpt_mp.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">138</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>pretrain_mp <span class="w"> </span>forward_step,<span class="w"> </span><span class="nv">args_defaults</span><span class="o">={</span><span class="s1">'tokenizer_type'</span>:<span class="w"> </span><span class="s1">'GPT2BPETokenizer'</span><span class="o">})</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/training.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">162</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>pretrain <span 
class="w"> </span>train_valid_test_dataset_provider<span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/training.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">1021</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>build_train_valid_test_data_iterators <span class="w"> </span>train_val_test_num_samples<span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/pretrain_gpt_mp.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">128</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>train_valid_test_datasets_provider <span class="w"> </span><span class="nv">skip_warmup</span><span class="o">=(</span>not<span class="w"> </span>args.mmap_warmup<span class="o">))</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">43</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>build_train_valid_test_datasets <span class="w"> </span>seq_length,<span class="w"> </span>seed,<span class="w"> </span>skip_warmup<span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">118</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>_build_train_valid_test_datasets <span class="w"> </span><span class="nv">train_dataset</span><span class="w"> </span><span class="o">=</span><span class="w"> </span>build_dataset<span class="o">(</span><span class="m">0</span>,<span 
class="w"> </span><span class="s1">'train'</span><span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">115</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>build_dataset <span class="w"> </span>seq_length,<span class="w"> </span>seed<span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">156</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>__init__ <span class="w"> </span>num_samples,<span class="w"> </span>seq_length,<span class="w"> </span>seed<span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/gpt_dataset.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">274</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>_build_index_mappings <span class="w"> </span>from<span class="w"> </span>megatron.data<span class="w"> </span>import<span class="w"> </span>helpers ImportError:<span class="w"> </span>cannot<span class="w"> </span>import<span class="w"> </span>name<span class="w"> </span><span class="s1">'helpers'</span><span class="w"> </span>from<span class="w"> </span><span class="s1">'megatron.data'</span><span class="w"> </span><span class="o">(</span>/home/ec2-user/aws-neuron-reference-for-megatron-lm/megatron/data/__init__.py<span class="o">)</span> </pre></div> </div> <p>To fix this, please go into aws-neuron-reference-for-megatron-lm/megatron/data/ and do “make”:</p> <div class="highlight-bash notranslate"><div 
class="highlight"><pre><span></span>pip<span class="w"> </span>install<span class="w"> </span>pybind11 <span class="nb">pushd</span><span class="w"> </span>. <span class="nb">cd</span><span class="w"> </span>aws-neuron-reference-for-megatron-lm/megatron/data/ make <span class="nb">popd</span> </pre></div> </div> </div> <div class="section" id="error-out-of-space-while-checkpointing"> <h3><a class="toc-backref" href="#id20">Error: Out of space while checkpointing</a><a class="headerlink" href="#error-out-of-space-while-checkpointing" title="Permalink to this headline">#</a></h3> <p>You may see an error like the one below. The model checkpoints are large, as they dump all of the model weights, optimizer and rng states. If these are checkpointed frequently, the storage can run out fast. Please make sure you have enough disk space.</p> <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>Traceback<span class="w"> </span><span class="o">(</span>most<span class="w"> </span>recent<span class="w"> </span>call<span class="w"> </span>last<span class="o">)</span>: <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch/serialization.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">380</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>save <span class="w"> </span>_save<span class="o">(</span>obj,<span class="w"> </span>opened_zipfile,<span class="w"> </span>pickle_module,<span class="w"> </span>pickle_protocol<span class="o">)</span> <span class="w"> </span>File<span class="w"> </span><span class="s2">"/home/ec2-user/aws_neuron_venv_pytorch_p37/lib64/python3.7/site-packages/torch/serialization.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">604</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>_save <span class="w"> 
</span>zip_file.write_record<span class="o">(</span>name,<span class="w"> </span>storage.data_ptr<span class="o">()</span>,<span class="w"> </span>num_bytes<span class="o">)</span> OSError:<span class="w"> </span><span class="o">[</span>Errno<span class="w"> </span><span class="m">28</span><span class="o">]</span><span class="w"> </span>No<span class="w"> </span>space<span class="w"> </span>left<span class="w"> </span>on<span class="w"> </span>device </pre></div> </div> </div> </div> <div class="section" id="troubleshooting"> <h2><a class="toc-backref" href="#id21">Troubleshooting</a><a class="headerlink" href="#troubleshooting" title="Permalink to this headline">#</a></h2> <p>See <a class="reference internal" href="../../training-troubleshooting.html#pytorch-neuron-traning-troubleshooting"><span class="std std-ref">PyTorch Neuron (torch-neuronx) for Training Troubleshooting Guide</span></a></p> <p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p> </div> </div> <div class="section"> </div> </div> </main> <footer class="footer-article noprint"> <!-- Previous / next buttons --> <div class="prev-next-area"> </div> </footer> </div> </div> <div class="footer-content row"> <footer class="col footer"><p> By AWS<br> © Copyright 2023, Amazon.com.<br> </p> </footer> </div> </div> </div> </div> <!-- Scripts loaded after <body> so the DOM is not blocked --> <script src="../../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script> </body></html>
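The ulimit workaround from the Known issues section above can be wrapped in a small preflight check run before launching the training job. A sketch (the 65535 threshold is the value recommended in that section; the wording of the messages is illustrative):

```shell
# Warn when the soft open-file limit is below the recommended 65535
# (Ubuntu AMIs default to 1024, which triggers "Failed accept4: Too many open files").
soft_limit=$(ulimit -Sn)
if [ "$soft_limit" != "unlimited" ] && [ "$soft_limit" -lt 65535 ]; then
    echo "open-file limit is $soft_limit; add 'ulimit -n 65535' to the worker script"
else
    echo "open-file limit is $soft_limit; sufficient for the GPT3 6.7B example"
fi
```

Placing this at the top of the shell script that srun executes makes the check (and any subsequent `ulimit -n 65535`) apply to every worker process.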
2023-09-29T20:55:24.038Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/eol-ncgs-env_2.rst.txt
``` .. post:: Mar 25, 2022 :language: en :tags: announce-eol Announcing end of support for ``NEURONCORE_GROUP_SIZES`` starting with Neuron 1.20.0 release -------------------------------------------------------------------------------------------- Starting with Neuron SDK 1.20.0, the ``NEURONCORE_GROUP_SIZES`` environment variable will no longer be supported. Setting the ``NEURONCORE_GROUP_SIZES`` environment variable will no longer affect application behavior. Current customers using the ``NEURONCORE_GROUP_SIZES`` environment variable are advised to use the ``NEURON_RT_VISIBLE_CORES`` or ``NEURON_RT_NUM_CORES`` environment variable instead. See :ref:`eol-ncg`, :ref:`nrt-configuration` and :ref:`neuron-migrating-apps-neuron-to-libnrt` for more information. ```
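A minimal sketch of the replacement usage (the core ranges and the two-process layout below are illustrative assumptions, not prescribed values):

```shell
# Deprecated (no longer honored): group sizes were declared globally, e.g.
#   export NEURONCORE_GROUP_SIZES=2,2
# Replacement: each worker process declares the NeuronCores it may use.
export NEURON_RT_VISIBLE_CORES=0-1   # first worker process: cores 0 and 1
echo "worker A -> NeuronCores ${NEURON_RT_VISIBLE_CORES}"
export NEURON_RT_VISIBLE_CORES=2-3   # second worker process: cores 2 and 3
echo "worker B -> NeuronCores ${NEURON_RT_VISIBLE_CORES}"
```

Alternatively, ``NEURON_RT_NUM_CORES`` requests only a number of cores and lets the runtime choose which free cores to allocate.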
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. post:: Mar 25, 2022 :language: en :tags: announce-eol Announcing end of support for ``NEURONCORE_GROUP_SIZES`` starting with Neuron 1.20.0 release -------------------------------------------------------------------------------------------- Starting with Neuron SDK 1.20.0, ``NEURONCORE_GROUP_SIZES`` environment variable will no longer be supported. Setting ``NEURONCORE_GROUP_SIZES`` environment variable will no longer affect applications behavior. Current customers using ``NEURONCORE_GROUP_SIZES`` environment variable are advised to use ``NEURON_RT_VISIBLE_CORES`` environment variable or ``NEURON_RT_NUM_CORES`` environment variable instead. See :ref:`eol-ncg`, :ref:`nrt-configuration` and :ref:`neuron-migrating-apps-neuron-to-libnrt` for more information. </pre></body></html>
2023-09-29T20:55:24.099Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/announcements/neuron1.x/announcements.rst.txt
``` .. post:: Feb 17, 2022 :language: en :tags: announcements .. _prev-announcements: Previous Announcements ====================== .. contents:: Table of contents :local: :depth: 1 .. _maintenance_tf21_tf24: 02/17/2022 - tensorflow-neuron versions 2.1, 2.2, 2.3 and 2.4 enter maintenance mode ------------------------------------------------------------------------------------ Starting with *Neuron 1.17.2* release, *tensorflow-neuron versions 2.1, 2.2, 2.3 and 2.4* are entering maintenance mode. Future releases of *tensorflow-neuron versions 2.1, 2.2, 2.3 and 2.4* will address critical security issues only. Current users of those versions are advised to migrate to the latest *tensorflow-neuron* version. 10/27/2021 - Introducing Neuron Runtime 2.x (libnrt.so) ------------------------------------------------------- Starting with *Neuron 1.16.0* release, *Neuron Runtime 1.x* (``neuron-rtd``) is entering maintenance mode and is replaced by *Neuron Runtime 2.x*, a shared library named ``libnrt.so``. For more information on Runtime 1.x see :ref:`Neuron Runtime 1.x enters maintenance mode <maintenance_rtd>`. For more information please see :ref:`introduce-libnrt`. .. _maintenance_rtd: 10/27/2021 - Neuron Runtime 1.x (``neuron-rtd``) enters maintenance mode ------------------------------------------------------------------------ Starting with *Neuron 1.16.0* release, *Neuron Runtime 1.x* (``neuron-rtd``) is entering maintenance mode and is replaced by *Neuron Runtime 2.x*, a shared library named ``libnrt.so``. Future releases of *Neuron Runtime 1.x* (``neuron-rtd``) will address critical bug fixes and security issues only. Previous releases of *Neuron Runtime 1.x* (``neuron-rtd``) will continue to be available via ``rpm`` and ``deb`` packages. For more information please see: * :ref:`introduce-libnrt` * :ref:`install-guide-index` * :ref:`neuron-maintenance-policy` .. 
_maintenance_mxnet_1_5: 10/27/2021 - Neuron support for *Apache MXNet 1.5* enters maintenance mode -------------------------------------------------------------------------- Starting with *Neuron release 1.16.0*, Neuron support for *MXNet 1.5* is entering maintenance mode. Future releases of Neuron supporting *MXNet 1.5* will address critical bug fixes and security issues only. Previous releases of *Apache MXNet 1.5* will continue to be available via ``pip`` packages. Current users of *MXNet Neuron 1.5* can migrate their applications to *MXNet Neuron 1.8*. For more information about MXNet Neuron support and how to upgrade to the latest *MXNet Neuron 1.8*, please visit :ref:`neuron-mxnet`. .. _maintenance_neuron-cli: 10/27/2021 - ``neuron-cli`` enters maintenance mode --------------------------------------------------- Starting with *Neuron release 1.16.0*, with the introduction of *Neuron Runtime 2.x*, ``neuron-cli`` is entering maintenance mode. ``neuron-cli`` functionality will be available only if *Neuron Runtime 1.x* (``neuron-rtd``) is being used by the application. If the application is using the *Neuron Runtime 2.x* shared library (``libnrt.so``), ``neuron-cli`` functionality will not be available. If you have used ``neuron-cli`` in previous releases, and you are migrating to newer Neuron releases where applications require the *Neuron Runtime 2.x* shared library, please see the :ref:`neuron-cli-mntnce-faq` below. Future releases of ``neuron-cli`` will address critical bug fixes and security issues only. Previous releases of ``neuron-cli`` will continue to be available via ``rpm`` and ``deb`` packages. .. _eol-ncg: 10/27/2021 - End of support for NeuronCore Groups (NCG) ------------------------------------------------------- Before the introduction of *Neuron Runtime 2.x*, :ref:`NeuronCore Group (NCG) <neuron-core-group>` was used by Neuron Runtime 1.x to define an execution group of one or more NeuronCores where models can be loaded and executed.
It also provided separation between processes. With the introduction of *Neuron Runtime 2.x*, the strict separation of NeuronCores into groups is no longer needed and NeuronCore Groups (NCG) is deprecated. *Neuron Runtime 2.x* enables each process to own a set of NeuronCores, and within each process, Neuron Runtime 2.x supports loading and executing multiple models on separate, different, or overlapping sets of NeuronCores. Please note that the ``NEURONCORE_GROUP_SIZES`` environment variable is in the process of being :ref:`deprecated <eol-ncgs-env>`, and for a transition period ``NEURONCORE_GROUP_SIZES`` can be used to preserve the old NeuronCore Group behavior. The frameworks internally convert ``NEURONCORE_GROUP_SIZES`` to the runtime's new mode of mapping models to NeuronCores. For more information see the details about ``NEURON_RT_VISIBLE_CORES`` at :ref:`nrt-configuration` and :ref:`neuron-migrating-apps-neuron-to-libnrt`. .. _eol-ncgs-env: 10/27/2021 - Announcing end of support for ``NEURONCORE_GROUP_SIZES`` --------------------------------------------------------------------- The ``NEURONCORE_GROUP_SIZES`` environment variable is in the process of being deprecated; future Neuron releases may no longer support it. Please start using ``NEURON_RT_VISIBLE_CORES`` instead. See :ref:`eol-ncg`, :ref:`nrt-configuration` and :ref:`neuron-migrating-apps-neuron-to-libnrt` for more information. .. _neuron-cli-mntnce-faq: Frequently Asked Questions (FAQ) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Is there another tool that provides the same functionality as ``neuron-cli list-model``? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Yes, please see :ref:`neuron-ls-ug` or :ref:`neuron-monitor-ug`. Is there another tool that provides the same functionality as ``neuron-cli create-ncg``, ``neuron-cli destroy-ncg``, and ``neuron-cli list-ncg``?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ No, these functionalities are no longer needed with *Neuron Runtime 2.x*. NeuronCore Groups (NCG) :ref:`are deprecated <eol-ncg>` and the ``NEURONCORE_GROUP_SIZES`` environment variable :ref:`is in the process of being deprecated <eol-ncgs-env>`. Please start using ``NEURON_RT_VISIBLE_CORES`` instead. See :ref:`nrt-configuration` and :ref:`neuron-migrating-apps-neuron-to-libnrt` for more information. Is there another tool that provides the same functionality as ``neuron-cli reset``? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ No, this functionality is no longer needed with *Neuron Runtime 2.x*. Before the introduction of ``libnrt.so``, in certain cases after an application crashed, models had to be unloaded manually by calling ``neuron-cli reset``. With ``libnrt.so``, applications run in the context of the ``libnrt.so`` shared library, and when an application exits, the Neuron driver frees all resources associated with the application. For more information please see: * :ref:`introduce-libnrt` * :ref:`neuron-tools` * :ref:`install-guide-index` * :ref:`neuron-maintenance-policy` .. _eol-conda-packages: 05/28/2021 - End of support for Neuron Conda packages in Deep Learning AMI starting Neuron 1.14.0 ------------------------------------------------------------------------------------------------- Starting with Neuron SDK 1.14.0, we will no longer support conda packages to install the Neuron SDK framework in DLAMI, and we will no longer update the conda packages used to install the Neuron SDK framework (Neuron conda packages) with new versions. Starting with Neuron SDK 1.14.0, pip packages (Neuron pip packages) will be used to install the Neuron SDK framework in the DLAMI conda environment. To upgrade the Neuron SDK framework, DLAMI users should use pip upgrade commands instead of conda update commands.
Instructions are available in this blog and in the Neuron SDK documentation (:ref:`setup-guide-index`). Starting with Neuron SDK 1.14.0, run one of the following commands to upgrade to the latest Neuron framework of your choice: * To upgrade PyTorch Neuron: .. code-block:: bash source activate aws_neuron_pytorch_p36 pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com pip install --upgrade torch-neuron neuron-cc[tensorflow] torchvision * To upgrade TensorFlow Neuron: .. code-block:: bash source activate aws_neuron_tensorflow_p36 pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com pip install --upgrade tensorflow-neuron tensorboard-neuron neuron-cc * To upgrade MXNet Neuron: .. code-block:: bash source activate aws_neuron_mxnet_p36 pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com pip install --upgrade mxnet-neuron neuron-cc For more information please check the `blog <https://aws.amazon.com/blogs/developer/neuron-conda-packages-eol/>`__. .. _eol-ubuntu16: 05/01/2021 - End of support for Ubuntu 16 starting Neuron 1.14.0 ---------------------------------------------------------------- Ubuntu 16.04 officially entered its end-of-life phase in April 2021 (see https://ubuntu.com/about/release-cycle) and will not receive any public software or security updates. Starting with Neuron SDK 1.14.0, Ubuntu 16 is no longer supported for Neuron; users who are using Ubuntu 16 are requested to migrate to Ubuntu 18 or Amazon Linux 2. Customers are strongly discouraged from upgrading libc on Ubuntu 16 to work with Neuron v1.13.0 (or higher versions), since Ubuntu 16 will no longer receive public security updates. ..
_eol-classic-tensorboard: 05/01/2021 - End of support for classic TensorBoard-Neuron starting Neuron 1.13.0 and introducing Neuron Plugin for TensorBoard ------------------------------------------------------------------------------------------------------------------------------- Starting with Neuron SDK 1.13.0, we are introducing the :ref:`Neuron Plugin for TensorBoard <neuron-plugin-tensorboard>` and we will no longer support classic TensorBoard-Neuron. Users are required to migrate to the Neuron Plugin for TensorBoard. Starting with Neuron SDK 1.13.0, if you are using TensorFlow-Neuron within the DLAMI Conda environment, attempting to run ``tensorboard`` with the existing version of TensorBoard will fail. Please update the TensorBoard version before installing the Neuron plugin by running ``pip install TensorBoard --force-reinstall``; for installation instructions see :ref:`neuron-plugin-tensorboard`. Users of Neuron SDK releases before 1.13.0 can find classic TensorBoard-Neuron documentation in the `Neuron 1.12.2 documentation <https://awsdocs-neuron.readthedocs-hosted.com/en/1.12.2/neuron-guide/neuron-tools/getting-started-tensorboard-neuron.html>`__. For more information see :ref:`neuron-tensorboard-rn` and :ref:`neuron-plugin-tensorboard`. .. _eol_python_3_5: 02/24/2021 - End of support for Python 3.5 ------------------------------------------ As Python 3.5 reached end-of-life in October 2020, and many packages including TorchVision and Transformers have dropped support for Python 3.5, we will begin to stop supporting Python 3.5 for frameworks, starting with PyTorch-Neuron version :ref:`neuron-torch-11170` in this release. You can continue to use older versions with Python 3.5. 11/17/2020 - End of support for ONNX ------------------------------------ ONNX support is limited, and from this version onwards we do not plan to add any additional capabilities to ONNX. We recommend running models in TensorFlow, PyTorch or MXNet for best performance and support.
07/16/2020 - End of support for PyTorch 1.3 ------------------------------------------- Starting with this release, we are ending support for PyTorch 1.3 and migrating to PyTorch 1.5.1. Customers are advised to migrate to PyTorch 1.5.1. ```
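The `NEURONCORE_GROUP_SIZES` to `NEURON_RT_VISIBLE_CORES` migration described in the announcements above amounts to swapping one environment variable for the other before launching the application. A minimal sketch, assuming a shell environment; the two-core range is illustrative, not a recommendation:

```shell
# NEURONCORE_GROUP_SIZES is deprecated; clear it during the transition
# so the runtime's new core-mapping mode takes effect.
unset NEURONCORE_GROUP_SIZES

# NEURON_RT_VISIBLE_CORES lists the NeuronCores this process owns.
# "0-1" is a hypothetical two-core allocation.
export NEURON_RT_VISIBLE_CORES=0-1
echo "NeuronCores visible to this process: ${NEURON_RT_VISIBLE_CORES}"
```

See :ref:`nrt-configuration` for the full set of runtime environment variables.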
2023-09-29T20:55:24.181Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/tensorflow/tensorflow-modelserver-neuron/tensorflow-modelserver-neuronx.rst.txt
``` .. _tensorflow-modeslserver-neuronx-rn: TensorFlow-Model-Server-Neuron (``tensorflow-modeslserver-neuronx``) Release Notes ================================================================================== .. contents:: Table of contents :local: :depth: 1 This document lists the release notes for the TensorFlow-Model-Server-Neuron (``tensorflow-modeslserver-neuronx``) package. TensorFlow Model Server Neuron [2.9.3.0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Date: 7/19/2023 * Minor updates TensorFlow Model Server Neuron [2.8.9.0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Date: 6/14/2023 * Minor updates TensorFlow Model Server Neuron [2.8.1.0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Date: 5/1/2023 * Minor updates TensorFlow Model Server Neuron [2.7.3.0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Date: 3/28/2023 * Minor updates TensorFlow Model Server Neuron [2.6.5.0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Date: 2/24/2023 First release of TensorFlow-Model-Server-Neuron (``tensorflow-modeslserver-neuronx``) package. ```
2023-09-29T20:55:24.264Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/nemo/neuronx-nemo.rst.txt
``` .. _neuronx-nemo-rn: AWS Neuron Reference for Nemo Megatron (``neuronx-nemo-megatron``) Release Notes ================================================================================ .. contents:: Table of contents :local: :depth: 1 This document lists the release notes for the ``neuronx-nemo-megatron`` library. ``neuronx-nemo-megatron`` [0.3.0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Date: 9/15/2023 New in this release ------------------- * Added Llama 13B model support that works with tensor parallelism and pipeline parallelism * Added ZeRO-1 optimizer support that works with tensor parallelism and pipeline parallelism * Fixed OOM issues when loading/saving checkpoints for large models * Added Docker support * Added a feature to save only the last checkpoint and delete previous ones to conserve disk space * Added an FP32 OptimizerState option for mixed precision * Added validation loop support Known Issues and Limitations ---------------------------- * Validation logic has been tested with smaller global batch sizes (32); larger global batch sizes have not been tested. ```
2023-09-29T20:55:24.410Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/documentation/neuron-documentation.rst.txt
``` .. _neuron-documentation-rn: Neuron Documentation Release Notes ================================== .. contents:: Table of contents :local: :depth: 1 Neuron 2.14.0 ------------- Date: 09/15/2023 - Neuron Calculator now supports multiple model configurations for Tensor Parallel Degree computation. See :ref:`neuron_calculator` - Announcement to deprecate the ``--model-type=transformer-inference`` flag. See :ref:`announce-deprecation-transformer-flag` - Updated HF ViT benchmarking script to use the ``--model-type=transformer`` flag. See :ref:`[script] <src/benchmark/pytorch/hf-google-vit_benchmark.py>` - Updated ``torch_neuronx.analyze`` API documentation. See :ref:`torch_neuronx_analyze_api` - Updated performance benchmarking numbers for models on Inf1, Inf2 and Trn1 instances with 2.14 release bits. See :ref:`benchmark` - New tutorial for training Llama2 7B with Tensor Parallelism and ZeRO-1 Optimizer using ``neuronx-distributed``. See :ref:`llama2_7b_tp_zero1_tutorial` - New tutorial for ``T5-3B`` model inference using ``neuronx-distributed`` (:pytorch-neuron-src:`tutorial <neuronx_distributed/t5-inference/t5-inference-tutorial.ipynb>`) - Updated ``Neuron Persistent Cache`` documentation to clarify the flags parsed by the ``neuron_cc_wrapper`` tool, which is a wrapper over the ``Neuron Compiler CLI``. See :ref:`neuron-caching` - Added ``tokenizers_parallelism=true`` in various notebook scripts to suppress tokenizer warnings, making errors easier to detect - Updated Neuron device plugin and scheduler YAMLs to point to the latest images. See `yaml configs <https://github.com/aws-neuron/aws-neuron-sdk/tree/master/src/k8>`_ - Added notebook script to fine-tune the ``deepmind/language-perceiver`` model using ``torch-neuronx``. See `sample script <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training/hf_text_classification/LanguagePerceiver.ipynb>`_ - Added notebook script to fine-tune the ``clip-large`` model using ``torch-neuronx``.
See `sample script <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training/hf_contrastive_image_text/CLIPLarge.ipynb>`_ - Added ``SD XL Base+Refiner`` inference sample script using ``torch-neuronx``. See `sample script <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/inference/hf_pretrained_sdxl_base_and_refiner_1024_inference.ipynb>`_ - Upgraded the default ``diffusers`` library from 0.14.0 to the latest 0.20.2 in the ``Stable Diffusion 1.5`` and ``Stable Diffusion 2.1`` inference scripts. See `sample scripts <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/inference>`_ - Added ``Llama-2-13B`` model training script using ``neuronx-nemo-megatron`` (`tutorial <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/examples/jobs/neuronx-nemo-megatron-llamav2-job.md>`_) Neuron 2.13.0 ------------- Date: 08/28/2023 - Added tutorials for GPT-NEOX 6.9B and 20B models training using neuronx-distributed. See more at :ref:`tp_tutorials` - Added TensorFlow 2.x (``tensorflow-neuronx``) analyze_model API section. See more at :ref:`tensorflow-ref-neuron-analyze_model-api` - Updated setup instructions to fix the path of existing virtual environments in DLAMIs. See more at :ref:`setup guide <setup-guide-index>` - Updated setup instructions to fix pinned versions in the upgrade instructions of the setup guide. See more at :ref:`setup guide <setup-guide-index>` - Updated tensorflow-neuron HF distilbert tutorial to improve performance by removing the HF pipeline. See more at :ref:`[html] </src/examples/tensorflow/huggingface_bert/huggingface_bert.html>` :github:`[notebook] </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>` - Updated the training troubleshooting guide in torch-neuronx to describe a network connectivity issue on trn1/trn1n 32xlarge with Ubuntu.
See more at :ref:`pytorch-neuron-traning-troubleshooting` - Added an "Unsupported Hardware Operator Code" section to the Neuron Runtime Troubleshooting page. See more at :ref:`nrt-troubleshooting` - Removed the 'Experimental' tag from the ``neuronx-distributed`` section for training. ``neuronx-distributed`` training is now considered stable, and ``neuronx-distributed`` inference is considered experimental. - Added FLOP count (``flop_count``) and connected Neuron Device IDs (``connected_devices``) to the sysfs user guide. See :ref:`neuron-sysfs-ug` - Added tutorial for ``T5`` model inference. See more at :pytorch-neuron-src:`[notebook] <torch-neuronx/t5-inference-tutorial.ipynb>` - Updated the neuronx-distributed API guide and inference tutorial. See more at :ref:`api_guide` and :ref:`tp_inference_tutorial` - Announcing end of support for ``AWS Neuron reference for Megatron-LM`` starting Neuron 2.13. See more at :ref:`announce-eol-megatronlm` - Announcing end of support for ``torch-neuron`` version 1.9 starting Neuron 2.14. See more at :ref:`announce-eol-pytorch19` - Upgraded ``numpy`` version to ``1.21.6`` in various training scripts for `Text Classification <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training>`_ - Added license for Nemo Megatron to the SDK Maintenance Policy. See more at :ref:`sdk-maintenance-policy` - Updated the ``bert-japanese`` training script to use the ``multilingual-sentiments`` dataset. See `hf-bert-jp <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training/hf_bert_jp>`_ - Added sample script for LLaMA V2 13B model inference using transformers-neuronx. See `neuron samples repo <https://github.com/aws-neuron/aws-neuron-samples/>`_ - Added samples for training GPT-NEOX 20B and 6.9B models using neuronx-distributed. See `neuron samples repo <https://github.com/aws-neuron/aws-neuron-samples/>`_ - Added sample scripts for CLIP and Stable Diffusion XL inference using torch-neuronx.
See `neuron samples repo <https://github.com/aws-neuron/aws-neuron-samples/>`_ - Added sample scripts for vision and language Perceiver models inference using torch-neuronx. See `neuron samples repo <https://github.com/aws-neuron/aws-neuron-samples/>`_ - Added camembert training/finetuning example for Trn1 under hf_text_classification in torch-neuronx. See `neuron samples repo <https://github.com/aws-neuron/aws-neuron-samples/>`_ - Updated Fine-tuning Hugging Face BERT Japanese model sample in torch-neuronx. See `neuron samples repo <https://github.com/aws-neuron/aws-neuron-samples/>`_ - See more neuron samples changes in `neuron samples release notes <https://github.com/aws-neuron/aws-neuron-samples/blob/master/releasenotes.md>`_ - Added samples for pre-training GPT-3 23B, 46B and 175B models using neuronx-nemo-megatron library. See `aws-neuron-parallelcluster-samples <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples>`_ - Announced End of Support for GPT-3 training using aws-neuron-reference-for-megatron-lm library. See `aws-neuron-parallelcluster-samples <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples>`_ - Updated bert-fine-tuning SageMaker sample by replacing amazon_reviews_multi dataset with amazon_polarity dataset. See `aws-neuron-sagemaker-samples <https://github.com/aws-neuron/aws-neuron-sagemaker-samples>`_ Neuron 2.12.0 ------------- Date: 07/19/2023 - Added best practices user guide for benchmarking performance of Neuron Devices `Benchmarking Guide and Helper scripts <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/microbenchmark>`_ - Announcing end of support for Ubuntu 18. See more at :ref:`announce-eol-ubuntu18` - Improved sidebar navigation in Documentation. - Removed support for Distributed Data Parallel(DDP) Tutorial. Neuron 2.11.0 ------------- Date: 06/14/2023 - New :ref:`neuron_calculator` Documentation section to help determine number of Neuron Cores needed for LLM Inference. 
- Added App Note :ref:`neuron_llm_inference` - New ``ML Libraries`` Documentation section to have :ref:`neuronx-distributed-index` and :ref:`transformers_neuronx_readme` - Improved Installation and Setup Guides for the different platforms supported. See more at :ref:`setup-guide-index` - Added Tutorial :ref:`setup-trn1-multi-node-execution` ```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/guide-torch-neuron-vs-torch-neuronx-inference.rst.txt
``` .. _torch-neuron_vs_torch-neuronx: Comparison of |torch-neuron| (|Inf1|) versus |torch-neuronx| (|Inf2| & |Trn1|) for Inference ============================================================================================ Neuron now supports multiple instance types for inference. The choice of instance should be motivated primarily by the performance needs of the application, the instance pricing, and model compatibility. In prior releases, |torch-neuron| *only supported inference* and |torch-neuronx| *only supported training*. While |torch-neuron| will never be updated to support training, |torch-neuronx| now supports both *inference and training*. .. note:: **Recommendation**: Continue using |torch-neuron| (|Inf1|) for existing inference applications. |torch-neuronx| (|Inf2| & |Trn1|) should be used for inference applications that require very low latency, distributed inference, and large models that would not otherwise work with |Inf1|. See: :ref:`benchmark`. Framework Comparison -------------------- Example ~~~~~~~ The following scripts are identical model compilations performed using each framework. The lines that are changed are highlighted to show where the differences occur. .. tab-set:: .. tab-item:: torch-neuron .. code-block:: python :emphasize-lines: 3, 8 import torch import torchvision import torch_neuron model = torchvision.models.resnet50(pretrained=True).eval() image = torch.rand(1, 3, 224, 224) trace = torch_neuron.trace(model, image) .. tab-item:: torch-neuronx .. code-block:: python :emphasize-lines: 3, 8 import torch import torchvision import torch_neuronx model = torchvision.models.resnet50(pretrained=True).eval() image = torch.rand(1, 3, 224, 224) trace = torch_neuronx.trace(model, image) Hardware Features ~~~~~~~~~~~~~~~~~ The |torch-neuron| framework supports |Inf1| instances and the |torch-neuronx| framework supports |Inf2| & |Trn1| instances. 
These instances have different |architectures|, networking configurations, and capabilities due to the NeuronCore versions used. Models compiled with |torch-neuron| produce artifacts which are *only* compatible with |NeuronCore-v1|. Models compiled with |torch-neuronx| produce artifacts which are *only* compatible with |NeuronCore-v2|. This also means that models that were previously compiled with |torch-neuron| for |Inf1| are not forwards compatible with |Inf2| & |Trn1| instances. Likewise, models compiled with |torch-neuronx| for |Inf2| & |Trn1| are not backwards compatible with |Inf1|. |NeuronCore-v2| is capable of higher throughput and lower latency than |NeuronCore-v1| due to more powerful compute engines and improved memory bandwidth. |NeuronCore-v2| can also support larger models since more memory is available per NeuronCore. The hardware differences between NeuronCore versions mean that models compiled with |torch-neuronx| will usually outperform models compiled with |torch-neuron|. In cases where throughput may be similar across instance-types, instances using |NeuronCore-v2| tend to achieve *significantly lower* latency than instances using |NeuronCore-v1|. This can enable applications that require extremely fast response times. See the :ref:`benchmark` page for the most up-to-date performance metrics. Besides performance benefits, |NeuronCore-v2| also has more hardware capabilities compared to |NeuronCore-v1|. For example, |NeuronCore-v2| supports a greater variety of data types and introduces a new fully programmable GPSIMD-Engine. Note that ``Trn`` instance-types are optimized for training purposes. Some ``Trn`` features (such as inter-chip networking) may be unnecessary for inference applications that do not require distribution across multiple NeuronCores.
Software Features ~~~~~~~~~~~~~~~~~ The |torch-neuron| framework uses :func:`torch_neuron.trace` to create a TensorFlow GraphDef protobuf intermediate representation (IR) of the model compute graph. This is compiled to a binary Neuron Executable File Format (NEFF) with the |neuron-cc| compiler. The |torch-neuronx| framework uses :func:`torch_neuronx.trace` with torch-xla_ to create an HloModule protobuf IR of the model compute graph. This is compiled to a binary executable NEFF with the |neuronx-cc| compiler. The use of different compiler versions means that separate flags are supported by each framework. For example: - :ref:`neuroncore-pipeline` is supported in |neuron-cc| but is not supported in |neuronx-cc|. However, this feature is much less useful when using the |NeuronCore-v2| architecture due to significant memory improvements. - Mixed precision flags will differ across the compilers. |neuronx-cc| improves the flags by making the behavior more explicit and streamlined: - |neuron-cc-mixed-precision| - |neuronx-cc-mixed-precision| Since the Python graph recording methods used by the frameworks differ significantly, the frameworks may provide different levels of model support. To view the models which are known to work, many compilation samples are provided for each framework: - `torch-neuron Samples`_ - `torch-neuronx Samples`_ Framework model support may also be affected by the graph partitioning feature. In |torch-neuron|, the :func:`torch_neuron.trace` API provides the ability to fall back to CPU for operations that are not supported directly by Neuron.
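To make the partitioning idea concrete, here is a small, purely illustrative sketch in plain Python (this is not the actual ``torch-neuron`` implementation; the operator names and the supported-op set are invented for the example): the compute graph is walked in order, and contiguous runs of supported operators are grouped into accelerator subgraphs while unsupported operators are left on CPU.

```python
# Illustrative sketch only: real torch-neuron partitioning operates on a
# traced graph, not a flat list of operator names.
SUPPORTED = {"conv2d", "relu", "linear"}  # hypothetical supported-op set

def partition(ops):
    """Split an op sequence into contiguous segments that run on the
    accelerator ("neuron") or fall back to CPU ("cpu")."""
    segments = []
    for op in ops:
        target = "neuron" if op in SUPPORTED else "cpu"
        if segments and segments[-1][0] == target:
            segments[-1][1].append(op)  # extend the current segment
        else:
            segments.append((target, [op]))  # start a new segment
    return segments

print(partition(["conv2d", "relu", "embedding_bag", "linear"]))
# → [('neuron', ['conv2d', 'relu']), ('cpu', ['embedding_bag']), ('neuron', ['linear'])]
```

Roughly speaking, each accelerator segment is then compiled on its own while CPU segments execute in stock PyTorch, which is why a single unsupported operator does not prevent the rest of the model from running on Neuron.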
Feature Summary ~~~~~~~~~~~~~~~ +-----------------------+-----------------------------+-----------------------------+ | | `torch-neuron` | `torch-neuronx` | +=======================+=============================+=============================+ | Supported Instances | |Inf1| | |Inf2| & |Trn1| | +-----------------------+-----------------------------+-----------------------------+ | Inference Support | Yes | Yes | +-----------------------+-----------------------------+-----------------------------+ | Training Support | No | Yes | +-----------------------+-----------------------------+-----------------------------+ | Architecture | |NeuronCore-v1| | |NeuronCore-v2| | +-----------------------+-----------------------------+-----------------------------+ | Model Support | |model-support-v1| | |model-support-v2| | +-----------------------+-----------------------------+-----------------------------+ | Trace API | :func:`torch_neuron.trace` | :func:`torch_neuronx.trace` | +-----------------------+-----------------------------+-----------------------------+ | NeuronCore Pipeline | Yes | No | +-----------------------+-----------------------------+-----------------------------+ | Partitioning | Yes | No | +-----------------------+-----------------------------+-----------------------------+ | IR | GraphDef | HLO | +-----------------------+-----------------------------+-----------------------------+ | Compiler | |neuron-cc| | |neuronx-cc| | +-----------------------+-----------------------------+-----------------------------+ | Samples | `torch-neuron Samples`_ | `torch-neuronx Samples`_ | +-----------------------+-----------------------------+-----------------------------+ References ---------- To determine if a model is already supported in a given framework, it is recommended to check the existing documentation for specific models. In order of reference quality, the following pages can be checked prior to compiling a model: 1. 
:ref:`benchmark`: Models that are available here have been optimized to maximize throughput and/or minimize latency. These metrics are updated frequently as improvements are made. Since metrics are published for different instance types, this can provide a direct performance comparison between instances. Note that the exact models and configurations may differ across instances. 2. `Neuron GitHub Samples`_: Provides simple examples of compiling and executing models. Compared to the benchmarks, this reference is only intended to show *how* to run a particular model on Neuron. This only validates whether a framework supports a given model. 3. :ref:`model_architecture_fit`: If a model is not listed on the prior pages, it may be that the model has not been tested or may not be well-supported. The architecture fit page provides high-level guidelines for which kinds of models will work well based on the hardware capabilities. If a model does not appear in any of these references, the last option is to attempt to compile the model to see how it performs. If an error occurs during compilation, please file a ticket in the `Neuron SDK Github Issues`_. .. |neuron-cc-mixed-precision| replace:: :ref:`neuron-cc-training-mixed-precision` .. |neuronx-cc-mixed-precision| replace:: :ref:`neuronx-cc-training-mixed-precision` .. |Inf1| replace:: :ref:`Inf1 <aws-inf1-arch>` .. |Trn1| replace:: :ref:`Trn1 <aws-trn1-arch>` .. |Inf2| replace:: :ref:`Inf2 <aws-inf2-arch>` .. |architectures| replace:: :ref:`architectures <neuroncores-arch>` .. |NeuronCore-v1| replace:: :ref:`NeuronCore-v1 <neuroncores-v1-arch>` .. |NeuronCore-v2| replace:: :ref:`NeuronCore-v2 <neuroncores-v2-arch>` .. |neuron-cc| replace:: :ref:`neuron-cc <neuron-compiler-cli-reference>` .. |neuronx-cc| replace:: :ref:`neuronx-cc <neuron-compiler-cli-reference-guide>` .. |torch-neuron| replace:: :ref:`torch-neuron <inference-torch-neuron>` .. |torch-neuronx| replace:: :ref:`torch-neuronx <inference-torch-neuronx>` ..
|model-support-v1| replace:: :ref:`Architecture Fit NeuronCore-v1 <model-architecture-fit-neuroncore-v1>` .. |model-support-v2| replace:: :ref:`Architecture Fit NeuronCore-v2 <model-architecture-fit-neuroncore-v2>` .. _Neuron GitHub Samples: https://github.com/aws-neuron/aws-neuron-samples .. _torch-neuron Samples: https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron .. _torch-neuronx Samples: https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx .. _torch-xla: https://github.com/pytorch/xla .. _Neuron SDK Github Issues: https://github.com/aws-neuron/aws-neuron-sdk/issues ```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/tutorials/training_llama2_7b.rst.txt
``` .. _llama2_7b_tp_zero1_tutorial: Training Llama2 7B with Tensor Parallelism and ZeRO-1 Optimizer (``neuronx-distributed``) ========================================================================================= In this section, we show how to pretrain a Llama2 7B model using the sequence parallelism, selective checkpointing, and constant-mask optimizations in the ``neuronx-distributed`` package. Setting up environment: For this experiment, we will use a ParallelCluster with at least four trn1-32xl compute nodes. `Train your model on ParallelCluster <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/devflows/training/parallelcluster/parallelcluster-training.html>`__ describes how to set up and use a ParallelCluster. We first need to create and activate a python virtual env on the head node of the ParallelCluster. Next, follow the instructions here: :ref:`Install PyTorch Neuron on Trn1 <setup-torch-neuronx>` to install the Neuron python packages. We also need to install the ``neuronx-distributed`` package using the following command: .. code:: ipython3 python -m pip install neuronx_distributed --extra-index-url https://pip.repos.neuron.amazonaws.com Let’s download the scripts for pretraining: ..
code:: ipython3 mkdir -p ~/examples/tp_zero1_llama2_7b_hf_pretrain cd ~/examples/tp_zero1_llama2_7b_hf_pretrain wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_zero1_llama2_7b_hf_pretrain/tp_zero1_llama2_7b_hf_pretrain.py wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_zero1_llama2_7b_hf_pretrain/tp_zero1_llama2_7b_hf_pretrain.sh wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_zero1_llama2_7b_hf_pretrain/modeling_llama2_nxd.py wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_zero1_llama2_7b_hf_pretrain/adamw_fp32_optim_params.py wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_zero1_llama2_7b_hf_pretrain/get_dataset.py wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_zero1_llama2_7b_hf_pretrain/requirements.txt wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_zero1_llama2_7b_hf_pretrain/config.json python3 -m pip install -r requirements.txt chmod +x tp_zero1_llama2_7b_hf_pretrain.sh To tokenize the data, we must request the tokenizer from hugging face and meta by following the instructions at the following link: `HuggingFace Llama 2 7B Model <https://huggingface.co/meta-llama/Llama-2-7b>`__ . Use of the Llama 2 model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the above website and accept their License before requesting access. After access has been granted, you may use the download scripts provided by Meta to download the model weights and tokenizer to your cluster. Once you have downloaded the tokenizer and model weights, you can copy the ``tokenizer.model`` to the ``~/examples/tp_zero1_llama2_7b_hf_pretrain`` directory. 
Next, let’s download and pre-process the dataset:

.. code:: ipython3

   cd ~/examples/tp_zero1_llama2_7b_hf_pretrain
   python3 get_dataset.py

In case you see an error of the following form when downloading data: ``huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/ubuntu/examples/tp_zero1_llama2_7b_hf_pretrain'. Use `repo_type` argument if needed.``, it could be caused by a stale cache. Try deleting the cache using:

.. code:: ipython3

   sudo rm -rf /home/ubuntu/.cache/

At this point, you are all set to start training.

Running training
----------------

We first pre-compile the graphs using ``neuron_parallel_compile``. Let’s run the command below:

.. code:: ipython3

   sbatch --exclusive \
   --nodes 4 \
   --wrap="srun neuron_parallel_compile bash $(pwd)/tp_zero1_llama2_7b_hf_pretrain.sh"

This script uses a tensor-parallel size of 8, which automatically sets the ZeRO-1 sharding degree to 16 (4 nodes * 32 workers per node / tensor_parallel_size). Once the graphs are compiled, we can run the training and observe the loss go down. To run the training, we use the same command as above, but without ``neuron_parallel_compile``:

.. code:: ipython3

   sbatch --exclusive \
   --nodes 4 \
   --wrap="srun bash $(pwd)/tp_zero1_llama2_7b_hf_pretrain.sh"

Sequence Parallel
-----------------

Please refer to the :ref:`GPT-NeoX 6.9B tutorial <gpt_neox_tp_zero1_tutorial>` for how to enable sequence parallelism. On top of it, we further coalesced parallel matrix multiplies to improve throughput:

* We coalesced ``query``, ``key`` and ``value`` into one matrix multiply.
* We coalesced ``gate_proj`` and ``up_proj`` into one matrix multiply.

Please check ``modeling_llama2_nxd.py`` and ``tp_dp_gpt_neox_20b_hf_pretrain.py`` for details.
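The coalescing trick can be sketched in plain NumPy; the shapes and variable names below are illustrative, not taken from ``modeling_llama2_nxd.py``. Multiplying by the concatenated weight is mathematically identical to three separate multiplies, but it issues a single larger matrix multiply:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 16
x = rng.standard_normal((4, hidden))         # [tokens, hidden]
w_q = rng.standard_normal((hidden, hidden))
w_k = rng.standard_normal((hidden, hidden))
w_v = rng.standard_normal((hidden, hidden))

# Three separate projections, as in a naive attention implementation.
q, k, v = x @ w_q, x @ w_k, x @ w_v

# One coalesced multiply against the concatenated weight, then split.
w_qkv = np.concatenate([w_q, w_k, w_v], axis=1)   # [hidden, 3*hidden]
q2, k2, v2 = np.split(x @ w_qkv, 3, axis=1)

assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```

The same idea applies to coalescing ``gate_proj`` and ``up_proj`` in the MLP block.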
Selective Activation Checkpointing
----------------------------------

Instead of checkpointing and recomputing full transformer layers, we checkpoint and recompute only the parts of each transformer layer that take up a considerable amount of memory but are not computationally expensive to recompute, also known as selective activation recomputation:

* We rewrite the attention layer into a ``core_attn`` function: it takes ``query``, ``key`` and ``value`` as inputs and performs attention.
* We checkpoint ``core_attn`` with ``torch.utils.checkpoint.checkpoint``.

Constant Attention Mask
-----------------------

In decoder transformers, we use causal attention masks so that each token is predicted based only on previous tokens. To enable this:

* We use a constant triangular matrix as the causal mask.
* The compiler detects such constants through constant folding and saves the computation.
```
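The constant causal mask can be sketched with NumPy; the names and sizes here are illustrative, not from the tutorial sources. Because the triangular matrix depends only on the (fixed) sequence length, it is a compile-time constant that the compiler can fold rather than recompute every step:

```python
import numpy as np

seq_len = 5
# Lower-triangular boolean matrix: position i may attend to positions j <= i.
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))

# Toy attention scores; masked (future) positions get -inf before softmax,
# so they receive zero attention weight.
scores = np.zeros((seq_len, seq_len))
masked = np.where(causal_mask, scores, -np.inf)

weights = np.exp(masked) / np.exp(masked).sum(axis=-1, keepdims=True)
assert np.isclose(weights[0, 0], 1.0)   # the first token attends only to itself
assert weights[0, 1] == 0.0             # and not to any future token
```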
2023-09-29T20:55:24.519Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.rst.txt
```
.. _setup-jupyter-notebook-steps-troubleshooting:
.. _Running Jupyter Notebook Browser:

Jupyter Notebook QuickStart
===========================

.. contents:: Table of Contents
   :local:
   :depth: 2

SSH Tunnel to the Inf1/Trn1 instance
------------------------------------

The Jupyter notebook can be run via a browser on port 8888 by default. For simplicity, we will use SSH port forwarding from your machine to the instance.

::

   ssh -i "<pem file>" <user>@<instance DNS name> -L 8888:127.0.0.1:8888

On an Ubuntu image the user will be ubuntu@, while on AL2 you should use ec2-user@. This additional argument forwards connections to port 8888 on your machine to port 8888 on the new Inf1/Trn1 instance.

Starting the Jupyter Notebook on the instance
---------------------------------------------

From your SSH prompt on the Inf1/Trn1 instance, run:

::

   jupyter notebook

You should see logging in your SSH session similar to:

.. code:: bash

   [I 21:53:11.729 NotebookApp] Using EnvironmentKernelSpecManager...
   [I 21:53:11.730 NotebookApp] Started periodic updates of the kernel list (every 3 minutes).
   [I 21:53:11.867 NotebookApp] Loading IPython parallel extension
   [I 21:53:11.884 NotebookApp] JupyterLab beta preview extension loaded from /home/ubuntu/anaconda3/lib/python3.6/site-packages/jupyterlab
   [I 21:53:11.884 NotebookApp] JupyterLab application directory is /home/ubuntu/anaconda3/share/jupyter/lab
   [I 21:53:12.002 NotebookApp] [nb_conda] enabled
   [I 21:53:12.004 NotebookApp] Serving notebooks from local directory: /home/ubuntu/tutorial
   [I 21:53:12.004 NotebookApp] 0 active kernels
   [I 21:53:12.004 NotebookApp] The Jupyter Notebook is running at:
   [I 21:53:12.004 NotebookApp] http://localhost:8888/?token=f9ad4086afd3c91f33d5587781f9fd8143b4cafbbf121a16
   [I 21:53:12.004 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
   [W 21:53:12.004 NotebookApp] No web browser found: could not locate runnable browser.
Copy/paste this URL into your browser when you connect for the first time, to login with a token: ``http://localhost:8888/?token=f9ad4086afd3c91f33d5587781f9fd8143b4cafbbf121a16``

.. code:: bash

   [I 21:53:12.004 NotebookApp] Starting initial scan of virtual environments...
   [I 21:53:13.507 NotebookApp] Found new kernels in environments: conda_tensorflow2_p27, conda_aws_neuron_mxnet_p36, conda_anaconda3, conda_tensorflow_p27, conda_chainer_p27, conda_python3, conda_tensorflow_p36, conda_aws_neuron_tensorflow_p36, conda_mxnet_p27, **conda_my_notebook_env**, conda_tensorflow2_p36, conda_pytorch_p27, conda_python2, conda_chainer_p36, conda_mxnet_p36, conda_pytorch_p36

Running the Jupyter Notebook from your local browser
----------------------------------------------------

If you copy and paste the link that looks like ``http://localhost:8888/?token=f9ad4086afd3c91f33d5587781f9fd8143b4cafbbf121a16`` into your local browser, the Notebook navigation pane should pop up. This works because SSH is forwarding your local port 8888 through to port 8888 on the Inf1/Trn1 instance, where the notebook is running. Note that our new conda environment is visible as a “kernel” with the “conda\_” prefix (highlighted).

1) In the notebook browser, select the tutorial.
2) This will pop up a new tab.
   In that tab, use the menus: Kernel → Change Kernel → Environment (conda_my_notebook_env)
3) Start reading through the self-documenting notebook tutorial.

Troubleshooting
---------------

If your Jupyter notebook does not start, please try the following:

::

   mv ~/.jupyter ~/.jupyter.old
   mkdir -p ~/.jupyter
   echo "c.NotebookApp.iopub_data_rate_limit = 10000000000" > ~/.jupyter/jupyter_notebook_config.py

   # Install the Jupyter notebook kernel
   pip install ipykernel
   python3 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python Neuronx"
   pip install jupyter notebook
   pip install environment_kernels

   jupyter notebook
```
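The tunnel command from the first section can be parameterized as below; the key file, user, and host are placeholders, so substitute your own values. The ``-L`` flag is what makes ``http://localhost:8888`` on your machine reach the notebook server on the instance:

```shell
# Placeholder values -- substitute your own key, user, and instance DNS name.
KEY_FILE="my-key.pem"
SSH_USER="ubuntu"                 # use ec2-user on an AL2 image
INSTANCE="ec2-instance-dns-name"
# -L <local port>:<bind address>:<remote port> forwards localhost:8888 on your
# machine to port 8888 on the instance. Here we only print the command.
echo ssh -i "$KEY_FILE" -L 8888:127.0.0.1:8888 "$SSH_USER@$INSTANCE"
```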
2023-09-29T20:55:24.795Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/prev/content.rst.txt
```
.. _pre-release-content:

Previous Releases Artifacts (Neuron 2.x)
========================================

.. contents:: Table of contents
   :local:
   :depth: 1

Neuron 2.14.0 (09/15/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.14.0

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.14.0

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.14.0

Neuron 2.13.2 (09/01/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.2

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.2

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.2

Neuron 2.13.1 (08/29/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.1

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.1

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.1

Neuron 2.13.0 (08/28/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.0

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.0

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.13.0

Neuron 2.12.2 (08/20/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.2

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.2

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.2

Neuron 2.12.1 (08/09/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.1

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.1

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.1

Neuron 2.12.0 (07/19/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.0

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.0

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.12.0

Neuron 2.11.0 (06/14/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.11.0

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.11.0

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.11.0

Neuron 2.10.0 (05/01/2023)
--------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.10.0

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.10.0

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.10.0

Neuron 2.9.1 (04/19/2023)
-------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.1

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.1

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.1

Neuron 2.9.0 (03/28/2023)
-------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.0

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.0

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.9.0

Neuron 2.8.0 (02/24/2023)
-------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.8.0

Inf2 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf2 --file=src/helperscripts/n2-manifest.json --neuron-version=2.8.0

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.8.0

Neuron 2.7.0 (02/08/2023)
-------------------------

Trn1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.7.0

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.7.0

Neuron 2.6.0 (12/12/2022)
-------------------------

Trn1 packages
^^^^^^^^^^^^^

* ``aws-neuronx-dkms-2.6.33.0``
* ``aws-neuronx-oci-hook-2.1.14.0``
* ``aws-neuronx-runtime-lib-2.10.30.0``
* ``aws-neuronx-collectives-2.10.37.0``
* ``aws-neuronx-tools-2.6.1.0``
* ``aws-neuronx-k8-plugin-2.1.12.0``
* ``aws-neuronx-k8-scheduler-2.1.12.0``
* ``tensorboard_plugin_neuronx-2.5.3.0``
* ``neuronx-cc-2.3.0.4``
* ``torch-neuronx-1.12.0.1.4.0``
* ``tensorflow-model-server-neuronx_1.15.0.2.5.6.0``
* ``tensorflow-model-server-neuronx_2.5.4.2.5.6.0``
* ``tensorflow-model-server-neuronx_2.6.3.2.5.6.0``
* ``tensorflow-model-server-neuronx_2.7.0.2.5.6.0``
* ``tensorflow-model-server-neuronx_2.8.0.2.5.6.0``

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.6.0

Neuron 2.5.0 (11/23/2022)
-------------------------

Trn1 packages
^^^^^^^^^^^^^

* ``aws-neuronx-dkms-2.6.33.0``
* ``aws-neuronx-oci-hook-2.1.14.0``
* ``aws-neuronx-runtime-lib-2.10.27.0``
* ``aws-neuronx-collectives-2.10.34.0``
* ``aws-neuronx-tools-2.5.19.0``
* ``aws-neuronx-k8-plugin-2.1.12.0``
* ``aws-neuronx-k8-scheduler-2.1.12.0``
* ``neuronx-cc-2.2.0.73``
* ``torch-neuronx-1.11.0.1.2.0``
* ``tensorflow-model-server-neuronx_1.15.0.2.5.6.0``
* ``tensorflow-model-server-neuronx_2.5.4.2.5.6.0``
* ``tensorflow-model-server-neuronx_2.6.3.2.5.6.0``
* ``tensorflow-model-server-neuronx_2.7.0.2.5.6.0``
* ``tensorflow-model-server-neuronx_2.8.0.2.5.6.0``

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.5.0

Neuron 2.4.0 (10/27/2022)
-------------------------

Trn1 packages
^^^^^^^^^^^^^

* ``aws-neuronx-dkms-2.6.5.0``
* ``aws-neuronx-oci-hook-2.1.1.0``
* ``aws-neuronx-runtime-lib-2.10.15.0``
* ``aws-neuronx-collectives-2.10.17.0``
* ``aws-neuronx-tools-2.5.16.0``
* ``aws-neuronx-k8-plugin-2.1.2.0``
* ``aws-neuronx-k8-scheduler-2.1.2.0``
* ``neuronx-cc-2.2.0.73``
* ``torch-neuronx-1.11.0.1.2.0``

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.4.0

Neuron 2.3.0 (10/10/2022)
-------------------------

Trn1 packages
^^^^^^^^^^^^^

* ``aws-neuronx-dkms-2.5.41.0``
* ``aws-neuronx-oci-hook-2.0.16.0``
* ``aws-neuronx-runtime-lib-2.9.64.0``
* ``aws-neuronx-collectives-2.9.86.0``
* ``aws-neuronx-tools-2.4.14.0``
* ``aws-neuronx-k8-plugin-2.0.1.0``
* ``aws-neuronx-k8-scheduler-2.0.1.0``
* ``neuronx-cc-2.1.0.76``
* ``torch-neuronx-1.11.0.1.1.1``

Inf1 packages
^^^^^^^^^^^^^

.. program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.3.0
```
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">
program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=inf1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.8.0 Neuron 2.7.0 (02/08/2023) -------------------------------------- Trn1 packages ^^^^^^^^^^^^^ .. program-output:: python3 src/helperscripts/n2-helper.py --list=packages --instance=trn1 --file=src/helperscripts/n2-manifest.json --neuron-version=2.7.0 Inf1 packages ^^^^^^^^^^^^^ .. program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.7.0 Neuron 2.6.0 (12/12/2022) -------------------------------------- Trn1 packages ^^^^^^^^^^^^^ * ``aws-neuronx-dkms-2.6.33.0`` * ``aws-neuronx-oci-hook-2.1.14.0`` * ``aws-neuronx-runtime-lib-2.10.30.0`` * ``aws-neuronx-collectives-2.10.37.0`` * ``aws-neuronx-tools-2.6.1.0`` * ``aws-neuronx-k8-plugin-2.1.12.0`` * ``aws-neuronx-k8-scheduler-2.1.12.0`` * ``tensorboard_plugin_neuronx-2.5.3.0`` * ``neuronx-cc-2.3.0.4`` * ``torch-neuronx-1.12.0.1.4.0`` * ``tensorflow-model-server-neuronx_1.15.0.2.5.6.0`` * ``tensorflow-model-server-neuronx_2.5.4.2.5.6.0`` * ``tensorflow-model-server-neuronx_2.6.3.2.5.6.0`` * ``tensorflow-model-server-neuronx_2.7.0.2.5.6.0`` * ``tensorflow-model-server-neuronx_2.8.0.2.5.6.0`` Inf1 packages ^^^^^^^^^^^^^ .. 
program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.6.0 Neuron 2.5.0 (11/23/2022) ------------------------- Trn1 packages ^^^^^^^^^^^^^ * ``aws-neuronx-dkms-2.6.33.0`` * ``aws-neuronx-oci-hook-2.1.14.0`` * ``aws-neuronx-runtime-lib-2.10.27.0`` * ``aws-neuronx-collectives-2.10.34.0`` * ``aws-neuronx-tools-2.5.19.0`` * ``aws-neuronx-k8-plugin-2.1.12.0`` * ``aws-neuronx-k8-scheduler-2.1.12.0`` * ``neuronx-cc-2.2.0.73`` * ``torch-neuronx-1.11.0.1.2.0`` * ``tensorflow-model-server-neuronx_1.15.0.2.5.6.0`` * ``tensorflow-model-server-neuronx_2.5.4.2.5.6.0`` * ``tensorflow-model-server-neuronx_2.6.3.2.5.6.0`` * ``tensorflow-model-server-neuronx_2.7.0.2.5.6.0`` * ``tensorflow-model-server-neuronx_2.8.0.2.5.6.0`` Inf1 packages ^^^^^^^^^^^^^ .. program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.5.0 Neuron 2.4.0 (10/27/2022) -------------------------- Trn1 packages ^^^^^^^^^^^^^ * ``aws-neuronx-dkms-2.6.5.0`` * ``aws-neuronx-oci-hook-2.1.1.0`` * ``aws-neuronx-runtime-lib-2.10.15.0`` * ``aws-neuronx-collectives-2.10.17.0`` * ``aws-neuronx-tools-2.5.16.0`` * ``aws-neuronx-k8-plugin-2.1.2.0`` * ``aws-neuronx-k8-scheduler-2.1.2.0`` * ``neuronx-cc-2.2.0.73`` * ``torch-neuronx-1.11.0.1.2.0`` Inf1 packages ^^^^^^^^^^^^^ .. 
program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.4.0 Neuron 2.3.0 (10/10/2022) ------------------------- Trn1 packages ^^^^^^^^^^^^^ * ``aws-neuronx-dkms-2.5.41.0`` * ``aws-neuronx-oci-hook-2.0.16.0`` * ``aws-neuronx-runtime-lib-2.9.64.0`` * ``aws-neuronx-collectives-2.9.86.0`` * ``aws-neuronx-tools-2.4.14.0`` * ``aws-neuronx-k8-plugin-2.0.1.0`` * ``aws-neuronx-k8-scheduler-2.0.1.0`` * ``neuronx-cc-2.1.0.76`` * ``torch-neuronx-1.11.0.1.1.1`` Inf1 packages ^^^^^^^^^^^^^ .. program-output:: python3 src/helperscripts/neuronsetuphelper.py --file src/helperscripts/neuron-releases-manifest.json --list packages --neuron-version=2.3.0 </pre></body></html>
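The package tables above are generated at documentation build time by ``program-output`` directives that run a helper script against a version manifest. A minimal sketch of that manifest-driven lookup pattern is shown below; the manifest schema and function name here are assumptions made for illustration, not the actual ``n2-helper.py`` implementation or ``n2-manifest.json`` layout.

```python
# Hypothetical sketch of a manifest-driven package lookup, in the spirit of
# `n2-helper.py --list=packages`. The manifest schema below is an invented
# example, not the real n2-manifest.json format.
import json

MANIFEST_JSON = """
{
  "2.14.0": {
    "trn1": ["aws-neuronx-dkms", "neuronx-cc", "torch-neuronx"],
    "inf1": ["aws-neuron-dkms", "neuron-cc", "torch-neuron"]
  }
}
"""

def list_packages(manifest: dict, instance: str, neuron_version: str) -> list:
    """Return the sorted package list for one instance family and release."""
    return sorted(manifest.get(neuron_version, {}).get(instance, []))

if __name__ == "__main__":
    manifest = json.loads(MANIFEST_JSON)
    for pkg in list_packages(manifest, "trn1", "2.14.0"):
        print(pkg)
```

Keeping the per-release package data in a single manifest and rendering it at build time is what lets every release section above stay a one-line directive instead of a hand-maintained list.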
2023-09-29T20:55:25.050Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/neuron1/prev/rn.rst.txt
.. _prev-n1-rn:

Previous Release Notes (Neuron 1.x)
===================================

.. contents:: Table of contents
   :local:
   :depth: 1

Neuron 1.19.2 (08/02/2022)
--------------------------

**Neuron 1.19.2** is a patch release. The release includes a :ref:`security update <ndriver_2_3_26_0>` for the Neuron Driver (``aws-neuron-dkms``) and a compiler bug fix that ignores MXNet dropout for 'training' while performing inference. Please update the Neuron Driver to the latest version (2.3.26 or newer) so that you can benefit from the operational and security updates included in this release.

.. important ::

   You must update to the latest Neuron Driver (aws-neuron-dkms version 2.3.26 or newer) before installing or upgrading to the latest Neuron release.

   * Uninstall ``aws-neuron-dkms`` by running: ``sudo apt remove aws-neuron-dkms`` or ``sudo yum remove aws-neuron-dkms``
   * Install or upgrade to the latest Neuron driver (``aws-neuron-dkms``) by following the ":ref:`install-guide-index`" instructions.

Neuron 1.19.1 (05/27/2022)
--------------------------

**Neuron 1.19.1** is a patch release. This release fixes a bug in the Neuron Driver (``aws-neuron-dkms``). Neuron driver version 2.3.11 included in this release fixes a bug that causes a kernel panic when a large memory allocation on a Neuron device fails. Neuron Driver 2.3.11 also introduces new functionality required by the upcoming Neuron 1.20.0 release. Because the new functionality is mandatory for Neuron 1.20.0 support, Neuron Driver 2.3.11 adds a compatibility check that prevents Neuron 1.20.0 from running with older versions of the driver. An attempt to run Neuron 1.20.0 with an older version of the driver will result in the application terminating with an error message.

In addition, this release updates ``tensorflow-neuron`` installation instructions to pin the ``protobuf`` version to avoid `compatibility issues <https://github.com/protocolbuffers/protobuf/issues/10051>`__ with older versions of TensorFlow.

.. important ::

   For successful installation or update to the next releases (Neuron 1.20.0 and newer):

   * Uninstall ``aws-neuron-dkms`` by running: ``sudo apt remove aws-neuron-dkms`` or ``sudo yum remove aws-neuron-dkms``
   * Install or upgrade to the latest Neuron driver (``aws-neuron-dkms``) by following the ":ref:`install-guide-index`" instructions.

Neuron 1.19.0 (04/29/2022)
--------------------------

**Neuron 1.19.0** adds support for PyTorch version 1.11, updates torch-neuron 1.10 to 1.10.2, and adds support for TensorFlow version 2.8, as well as minor enhancements and bug fixes.
Please note that starting with this release (*Neuron 1.19.0*), installing ``aws-neuron-runtime-base`` and ``oci-add-hooks`` is no longer required for the Neuron Kubernetes device driver plugin. In addition, starting with this release, *torch-neuron 1.5* :ref:`will no longer be supported <eol-pt-15>`.

Neuron 1.18.0 (03/25/2022)
--------------------------

**Neuron 1.18.0** introduces the beta release of :ref:`NeuronPerf <neuronperf>`. NeuronPerf is a Python library with a simple API that enables fast measurements of performance when running models with Neuron. This release adds 5 new models to the :ref:`appnote-performance-benchmark`, together with the NeuronPerf scripts used to compile these models and run the benchmarks.

This release also introduces additional ``torch-neuron`` packages that support the C++11 ABI, updates TensorFlow-Neuron 2.5 to 2.5.3, adds support for TensorFlow-Neuron 2.6 and 2.7, and introduces the Runtime ``NEURON_RT_NUM_CORES`` :ref:`environment variable <nrt-configuration>`. In addition, this release includes minor enhancements and bug fixes in the Compiler, Neuron Framework Extensions, Runtime 2.x library and tools. See the detailed release notes below.

Starting with this release, *TensorFlow Neuron versions 2.1, 2.2, 2.3 and 2.4* will :ref:`no longer be supported <eol-tf-21-24>`. We will also :ref:`stop supporting PyTorch Neuron version 1.5 <announce-eol-pt-1-5>` starting with the Neuron 1.19.0 release, and :ref:`will stop supporting <eol-ncgs-env_2>` the ``NEURONCORE_GROUP_SIZES`` environment variable starting with the Neuron 1.20.0 release.

Neuron 1.17.2 (02/18/2022)
--------------------------

**Neuron 1.17.2** is a patch release. This release fixes a bug in TensorFlow Neuron versions 2.1, 2.2, 2.3 and 2.4. The fixed bug was causing a memory leak of 128 bytes for each inference.

Starting with this release, TensorFlow Neuron versions 2.1, 2.2, 2.3 and 2.4 are :ref:`entering maintenance mode <maintenance_tf21_tf24>`.
Future releases of TensorFlow Neuron versions 2.1, 2.2, 2.3 and 2.4 will address security issues only.

Neuron 1.17.1 (02/16/2022)
--------------------------

**Neuron 1.17.1** is a patch release. This release fixes a bug in TensorFlow Neuron that caused a memory leak. The memory leak was approximately 128 bytes for each inference and exists in all TensorFlow Neuron versions that are part of the Neuron 1.16.0 to Neuron 1.17.0 releases; see :ref:`pre-release-content` for the exact versions included in each release. This release fixes the memory leak only for TensorFlow versions 1.15 and 2.5 from Neuron. The other versions of TensorFlow Neuron will be fixed in an upcoming release.

Neuron 1.17.0 (01/20/2022)
--------------------------

**Neuron 1.17.0** introduces support for PyTorch 1.10, updates TensorFlow 2.5 to version 2.5.2, and adds new operator support in PyTorch and TensorFlow 1.15, in addition to enhancements and bug fixes in PyTorch, TensorFlow, MXNet, Compiler, Runtime and Tools.

- **PyTorch**

  * First PyTorch 1.10 support.
  * Added new operator support.
  * See :ref:`pytorch-neuron-rn` and :ref:`neuron-cc-ops-pytorch` for more details.

- **TensorFlow 2.x**

  * Updated TensorFlow 2.5 to version 2.5.2.
  * Updated tensorflow-model-server 2.5 to version 2.5.3.
  * See :ref:`tensorflow-neuron-rn-v2` and :ref:`tensorflow-modelserver-rn-v2` for more details.

- **TensorFlow 1.15**

  * Added new operator support.
  * See :ref:`tensorflow-neuron-rn` and :ref:`neuron-cc-ops-tensorflow` for more details.

- **MXNet**

  * Added support for ``mx_neuron.__version__`` to get the build version of the MXNet Neuron plugin.
  * See :ref:`mxnet-neuron-rn` for more details.

- **Tools 2.x**

  * ``neuron-top`` - Added an "all" tab that aggregates all running Neuron processes into a single view.
  * ``neuron-top`` - Improved startup time by approximately 1.5 seconds in most cases.
  * See :ref:`neuron-tools-rn` for more details.

- **Compiler**

  * Enhancements and minor bug fixes.
  * See :ref:`neuron-cc-rn` for more details.

- **Runtime 2.x**

  * Enhancements and minor bug fixes.
  * See :ref:`neuron-runtime-release-notes` for more details.

Neuron 1.16.3 (01/05/2022)
--------------------------

**Neuron 1.16.3** is a minor release. This release includes performance enhancements and operator support in :ref:`PyTorch Neuron <pytorch-neuron-rn>` and minor bug fixes in :ref:`Neuron Compiler <neuron-cc-rn>`.

Neuron 1.16.2 (12/15/2021)
--------------------------

**Neuron 1.16.2** is a patch release. This release includes performance enhancements and minor bug fixes in :ref:`Neuron Compiler <neuron-cc-rn>` and :ref:`PyTorch Neuron <pytorch-neuron-rn>`.

Neuron 1.16.1 (11/05/2021)
--------------------------

**Neuron 1.16.1** is a patch release. This release fixes a bug in Neuron Runtime that would have prevented users from launching a container that doesn't use all of the Neuron Devices in the instance. If you are using Neuron within a container, please update to this new release by updating to the latest Neuron ML framework package, Neuron Tools, and/or TensorFlow Neuron Model Server.

* To update to latest PyTorch 1.9.1: ``pip install --upgrade torch-neuron neuron-cc[tensorflow] torchvision``
* To update to latest TensorFlow 2.5.1: ``pip install --upgrade tensorflow-neuron[cc]``
* To update to latest TensorFlow 1.15.5: ``pip install --upgrade tensorflow-neuron==1.15.5.* neuron-cc``
* To update to latest MXNet 1.8.0: ``pip install --upgrade mx_neuron neuron-cc``

For more details on how to update the framework packages, please check out our :ref:`setup-guide-index`.

Neuron 1.16.0 (10/27/2021)
--------------------------

**Neuron 1.16.0 is a release that requires your attention**. **You must update to the latest Neuron Driver (** ``aws-neuron-dkms`` **version 2.1 or newer) for successful installation or upgrade**.
This release introduces :ref:`Neuron Runtime 2.x <introduce-libnrt>`, upgrades :ref:`PyTorch Neuron <neuron-pytorch>` to PyTorch 1.9.1, adds support for new APIs (:func:`torch.neuron.DataParallel` and ``torch_neuron.is_available()``), adds new features and capabilities (the compiler ``--fast-math`` :ref:`option for better fine-tuning of accuracy/performance <neuron-cc-training-mixed-precision>` and the :ref:`MXNet FlexEG feature <flexeg>`), improves :ref:`tools <neuron-tools>`, adds support for additional :ref:`operators <neuron-supported-operators>`, improves :ref:`performance <appnote-performance-benchmark>` (up to 20% additional throughput and up to 25% lower latency), and reduces model loading times. It also simplifies :ref:`Neuron installation steps <install-guide-index>`, and improves the user experience of :ref:`container creation and deployment <neuron-containers>`. In addition it includes bug fixes, new :ref:`application notes <neuron-appnotes>`, updated :ref:`tutorials <neuron-tutorials>`, and announcements of software :ref:`deprecation <software-deprecation>` and :ref:`maintenance <software-maintenance>`.

- **Neuron Runtime 2.x** - :ref:`introduce-libnrt`

  - In this release we are introducing Neuron Runtime 2.x. The new runtime is a shared library (``libnrt.so``), replacing Neuron Runtime 1.x, which was a server daemon (``neuron-rtd``). Upgrading to ``libnrt.so`` is expected to improve throughput and latency, simplify the Neuron installation and upgrade process, introduce new capabilities for allocating NeuronCores to applications, streamline container creation, and deprecate tools that are no longer needed. The new library-based runtime (``libnrt.so``) is directly integrated into Neuron's ML Frameworks (with the exception of MXNet 1.5) and Neuron Tools packages. As a result, users no longer need to install/deploy the ``aws-neuron-runtime`` package.

  .. important::

     - You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer) for proper functionality of the new runtime library.
     - Read the :ref:`introduce-libnrt` application note that describes :ref:`why we are making this change <introduce-libnrt-why>` and how :ref:`this change will affect the Neuron SDK <introduce-libnrt-how-sdk>` in detail.
     - Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information on how to migrate your application.

- **Performance**

  - Updated :ref:`performance numbers <appnote-performance-benchmark>`
  - Improved performance: up to 20% additional throughput and up to 25% lower latency.

- **Documentation resources**

  - Improved :ref:`Neuron Setup Guide <install-guide-index>`.
  - New :ref:`introduce-libnrt` application note.
  - New :ref:`bucketing_app_note` application note.
  - New :ref:`neuron-cc-training-mixed-precision` application note.
  - New :ref:`torch-neuron-dataparallel-app-note` application note.
  - New :ref:`flexeg` application note.
  - New :ref:`parallel-exec-ncgs` application note.
  - New :ref:`Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving <tensorflow-serving-neuronrt-visible-cores>` tutorial.
  - Updated :ref:`ResNet50 model for Inferentia </src/examples/pytorch/resnet50.ipynb>` tutorial to use :func:`torch.neuron.DataParallel`.

- **PyTorch**

  - PyTorch now supports Neuron Runtime 2.x only. Please visit :ref:`introduce-libnrt` for more information.
  - Introducing PyTorch 1.9.1 support.
  - Introducing new APIs: :func:`torch.neuron.DataParallel` (see the :ref:`torch-neuron-dataparallel-app-note` application note for more details) and ``torch_neuron.is_available()``.
  - Introducing :ref:`new operator support <neuron-cc-ops-pytorch>`.
  - For more information visit :ref:`neuron-pytorch`

- **TensorFlow 2.x**

  - TensorFlow 2.x now supports Neuron Runtime 2.x only. Please visit :ref:`introduce-libnrt` for more information.
  - Updated TensorFlow 2.3.x from TensorFlow 2.3.3 to TensorFlow 2.3.4.
  - Updated TensorFlow 2.4.x from TensorFlow 2.4.2 to TensorFlow 2.4.3.
  - Updated TensorFlow 2.5.x from TensorFlow 2.5.0 to TensorFlow 2.5.1.
  - Introducing :ref:`new operator support <tensorflow-ref-neuron-accelerated-ops>`
  - For more information visit :ref:`tensorflow-neuron`

- **TensorFlow 1.x**

  - TensorFlow 1.x now supports Neuron Runtime 2.x only. Please visit :ref:`introduce-libnrt` for more information.
  - Introducing :ref:`new operator support <neuron-cc-ops-tensorflow>`.
  - For more information visit :ref:`tensorflow-neuron`

- **MXNet 1.8**

  - MXNet 1.8 now supports Neuron Runtime 2.x only. Please visit :ref:`introduce-libnrt` for more information.
  - Introducing the Flexible Execution Groups (FlexEG) feature.
  - MXNet 1.5 enters maintenance mode. Please visit :ref:`maintenance_mxnet_1_5` for more information.
  - For more information visit :ref:`neuron-mxnet`

- **Neuron Compiler**

  - Introducing the ``--fast-math`` option for better fine-tuning of accuracy/performance. See :ref:`neuron-cc-training-mixed-precision`
  - Support added for new ArgMax and ArgMin operators. See :ref:`neuron-cc-rn`.
  - For more information visit :ref:`neuron-cc`

- **Neuron Tools**

  - Updates have been made to ``neuron-ls`` and ``neuron-top`` to improve the interface and utility of information provided.
  - ``neuron-monitor`` has been enhanced to include additional information when used to monitor the latest Frameworks released with Neuron 1.16.0. See :ref:`neuron-tools-rn`.
  - ``neuron-cli`` is entering maintenance mode as its use is no longer relevant when using ML Frameworks with an integrated Neuron Runtime (libnrt.so).
  - For more information visit :ref:`neuron-tools`

- **Neuron Containers**

  - Starting with Neuron 1.16.0, installation of Neuron ML Frameworks now includes an integrated Neuron Runtime library. As a result, it is no longer required to deploy ``neuron-rtd``. Please visit :ref:`introduce-libnrt` for information.
  - When using containers built with components from Neuron 1.16.0, or newer, please use ``aws-neuron-dkms`` version 2.1 or newer and the latest version of ``aws-neuron-runtime-base``. Passing additional system capabilities is no longer required.
  - For more information visit :ref:`neuron-containers`

- **Neuron Driver**

  - Support is added for Neuron Runtime 2.x (libnrt.so).
  - Memory improvements have been made to ensure all allocations are made with 4K alignments.

- **Software Deprecation**

  - :ref:`eol-ncgs-env`
  - :ref:`eol-ncg`

- **Software maintenance mode**

  - :ref:`maintenance_rtd`
  - :ref:`maintenance_mxnet_1_5`
  - :ref:`maintenance_neuron-cli`

Neuron 1.15.2 (09/22/2021)
--------------------------

Neuron 1.15.2 includes bug fixes for the tensorflow-model-server-neuron 2.5.1.1.6.8.0 package and several other bug fixes for tensorflow-neuron/tensorflow-model-server-neuron packages.

Neuron 1.15.1 (08/30/2021)
--------------------------

Neuron 1.15.1 includes bug fixes for the aws-neuron-dkms package and several other bug fixes for related packages.

Neuron 1.15.0 (08/12/2021)
--------------------------

Neuron 1.15.0 is the first release to support TensorFlow 2. In this release TensorFlow 2 supports language transformer base models like BERT. The TensorFlow 2 support will be enhanced in future releases to support additional models.

* **TensorFlow 2.x** - To get started with TensorFlow 2.x:

  * Run the TensorFlow 2 :ref:`HuggingFace distilBERT Tutorial </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>`.
  * Read :ref:`tf2_faq`
  * See the newly introduced :ref:`TensorFlow 2.x (tensorflow-neuron) Tracing API <tensorflow-ref-neuron-tracing-api>`.
  * See :ref:`tensorflow-ref-neuron-accelerated-ops`.

* **Documentation**

  * **New** :ref:`models-inferentia` application note added in this release.
    This application note describes what types of deep learning model architectures perform well out of the box and provides guidance on techniques you can use to optimize your deep learning models for Inferentia.
  * **New** :ref:`Neuron inference performance page <appnote-performance-benchmark>` provides performance information for popular models and links to test these models in your own environment. The data includes throughput and latency numbers, and cost per inference, for both realtime and offline applications.
  * **New** :ref:`TensorFlow 2 HuggingFace distilBERT Tutorial </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>`.
  * **New** :ref:`Bring your own HuggingFace pretrained BERT container to Sagemaker Tutorial </src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb>`.

* **More information**

  * :ref:`tensorflow-neuron-rn`
  * :ref:`neuron-cc-rn`
  * :ref:`tensorflow-modelserver-rn`

.. _07-02-2021-rn:

Neuron 1.14.2 (07/26/2021)
--------------------------

This release (Neuron 1.14.2) includes bug fixes and minor enhancements to Neuron Runtime:

* Neuron Runtime - see :ref:`neuron-runtime-release-notes`

Neuron 1.14.1 (07/02/2021)
--------------------------

This release (Neuron 1.14.1) includes bug fixes and minor enhancements:

* PyTorch Neuron - This release adds "Dynamic Batching" feature support; see the PyTorch-Neuron trace python API for more information. The release also adds support for new operators and includes additional bug fixes and minor enhancements; for more information see :ref:`pytorch-neuron-rn`.
* TensorFlow Neuron - see :ref:`tensorflow-neuron-rn`.
* MXNet Neuron - see :ref:`mxnet-neuron-rn`.
* Neuron Compiler - see :ref:`neuron-cc-rn`.
* Neuron Runtime - see :ref:`neuron-runtime-release-notes`.
* Neuron Tools - see :ref:`neuron-tools-rn`.

.. _05-28-2021-rn:

Neuron 1.14.0 (05/28/2021)
--------------------------

This release (Neuron 1.14.0) introduces the first release of PyTorch Neuron 1.8.1, tutorial updates, performance enhancements and memory optimizations for PyTorch Neuron, TensorFlow Neuron and MXNet Neuron.

* PyTorch Neuron - First release of PyTorch Neuron 1.8.1.
* PyTorch Neuron - Convolution operator support has been extended to include ConvTranspose2d variants.
* PyTorch Neuron - Updated tutorials to use Hugging Face Transformers 4.6.0.
* PyTorch Neuron - Additional performance enhancements, memory optimizations, and bug fixes, see :ref:`pytorch-neuron-rn`.
* Neuron Compiler - New feature - Uncompressed NEFF format for faster loading of models prior to inference. Enable it with ``--enable-fast-loading-neuron-binaries``. Some cases of large models may be detrimentally impacted as the NEFF will not be compressed, but many cases will benefit.
* Neuron Compiler - Additional performance enhancements, memory optimizations, and bug fixes, see :ref:`neuron-cc-rn`.
* TensorFlow Neuron - Performance enhancements, memory optimizations, and bug fixes, see :ref:`tensorflow-neuron-rn`.
* MXNet Neuron - Enhancements and minor bug fixes (MXNet 1.8), see :ref:`mxnet-neuron-rn`.
* Neuron Runtime - Performance enhancements, memory optimizations, and bug fixes, see :ref:`neuron-runtime-release-notes`.
* Neuron Tools - Minor bug fixes and enhancements.
* Software Deprecation

  * End of support for Neuron Conda packages in Deep Learning AMI; users should use pip upgrade commands to upgrade to the latest Neuron version in DLAMI, see the `blog <https://aws.amazon.com/blogs/developer/neuron-conda-packages-eol/>`_.
  * End of support for Ubuntu 16, see the :ref:`documentation <eol-ubuntu16>`.

Neuron 1.13.0 (05/01/2021)
--------------------------

This release introduces higher performance, updated framework support, new tutorials, and additional models and tools:

* Additional compiler improvements boost performance up to 20% higher throughput compared to the previous release across model types.
* Improved usability for NLP models, with out-of-the-box 12x higher throughput at 70% lower cost for Hugging Face Transformers pre-trained BERT Base models, see :ref:`pytorch-tutorials-neuroncore-pipeline-pytorch`.
* Upgrade Apache MXNet (Incubating) to 1.8, where Neuron is now a plugin, see :ref:`mxnet-neuron-rn`. * PyTorch ResNext models now functional with new operator support, see :ref:`pytorch-neuron-rn`. * PyTorch Yolov5 support, see :ref:`pytorch-neuron-rn`. * MXNet (Incubating): Gluon API and Neuron support for NLP BERT models, see :ref:`mxnet-neuron-rn`. * PyTorch Convolution operator support has been extended to include most Conv1d and Conv3d variants, please see :ref:`neuron-cc-ops-pytorch` for the complete list of operators. * First release of Neuron plugin for TensorBoard, see :ref:`neuron-tensorboard-rn`. **Software Deprecation** * :ref:`eol-conda-packages` * :ref:`eol-ubuntu16` * :ref:`eol-classic-tensorboard` .. _03-04-2021-rn: March 4, 2021 Release (Patch) ----------------------------- This release include bug fixes and minor enhancements to the Neuron Runtime and Tools. February 24, 2021 Release (Patch) --------------------------------- This release updates all Neuron packages and libraries in response to the Python Secutity issue CVE-2021-3177 as described here: https://nvd.nist.gov/vuln/detail/CVE-2021-3177. This vulnerability potentially exists in multiple versions of Python including 3.5, 3.6, 3.7. Python is used by various components of Neuron, including the Neuron compiler as well as Machine Learning frameworks including TensorFlow, PyTorch and Apache MXNet (Incubating). It is recommended that the Python interpreters used in any AMIs and containers used with Neuron are also updated. Python 3.5 reached `end-of-life <https://peps.python.org/pep-0478/>`_, from this release Neuron packages will not support Python 3.5. Users should upgrade to latest DLAMI or upgrade to a newer Python versions if they are using other AMI. January 30, 2021 Release -------------------------- This release continues to improves the NeuronCore Pipeline performance for BERT models. 
For example, running BERT Base with the neuroncore-pipeline-cores compile option, at batch=3, seqlen=32, using 16 NeuronCores, results in throughput of up to 5340 sequences per second and P99 latency of 9 ms using TensorFlow Serving.

This release also adds operator support and performance improvements for the PyTorch based DistilBERT model for sequence classification.

December 23, 2020 Release
-------------------------

This release introduces a PyTorch 1.7 based torch-neuron package as a part of the Neuron SDK. Support for PyTorch model serving with TorchServe 0.2 is added and will be demonstrated with a tutorial. This release also provides an example tutorial for a PyTorch based YOLO v4 model on Inferentia.

To aid visibility into compiler activity, the Neuron-extended frameworks TensorFlow and PyTorch will display a new compilation status indicator that prints a dot (.) to the console every 20 seconds while compilation is executing.

Important to know:
^^^^^^^^^^^^^^^^^^

1. This update continues to support the torch-neuron version of PyTorch 1.5.1 for backwards compatibility.
2. As Python 3.5 reached end-of-life in October 2020, and many packages including TorchVision and Transformers have stopped supporting Python 3.5, we will begin to stop supporting Python 3.5 for frameworks, starting with PyTorch-Neuron version :ref:`neuron-torch-11170` in this release. You can continue to use older versions with Python 3.5.

November 17, 2020 Release
-------------------------

This release improves NeuronCore Pipeline performance. For example, running BERT Small, batch=4, seqlen=32, using 4 NeuronCores, results in throughput of up to 7000 sequences per second and P99 latency of 3 ms using TensorFlow Serving.

Neuron tools updated the NeuronCore utilization metric to include all inf1 compute engines and DMAs. Added a new neuron-monitor example that connects to Grafana via Prometheus.
We've added a new sample script which exports most of neuron-monitor's metrics to a Prometheus monitoring server. Additionally, we also provided a sample Grafana dashboard. More details at :ref:`neuron-tools`.

ONNX support is limited, and from this version onwards we are not planning to add any additional capabilities to ONNX. We recommend running models in TensorFlow, PyTorch or MXNet for best performance and support.

October 22, 2020 Release
------------------------

This release adds a Neuron kernel mode driver (KMD). The Neuron KMD simplifies Neuron Runtime deployments by removing the need for elevated privileges, improves memory management by removing the need for huge pages configuration, and eliminates the need for running neuron-rtd as a sidecar container. Documentation throughout the repo has been updated to reflect the new support. The new Neuron KMD is backwards compatible with prior versions of Neuron ML Frameworks and Compilers - no changes are required to existing application code. More details in the Neuron Runtime release notes at :ref:`neuron-runtime`.

September 22, 2020 Release
--------------------------

This release improves performance of YOLO v3 and v4, VGG16, SSD300, and BERT. As part of these improvements, the Neuron Compiler doesn't require any special compilation flags for most models. Details on how to use the prior optimizations are outlined in the neuron-cc release notes: :ref:`neuron-cc-rn`.

The release also improves operational deployments of large scale inference applications, with a session management agent incorporated into all supported ML Frameworks and a new Neuron tool called neuron-monitor that makes it easy to scale monitoring of large fleets of inference applications. A sample script for connecting neuron-monitor to Amazon CloudWatch metrics is provided as well. Read more about using neuron-monitor at :ref:`neuron-monitor-ug`.

August 19, 2020 Release
-----------------------

Bug fix for an error reporting issue with the Neuron Runtime.
Previous versions of the runtime were only reporting uncorrectable errors on half of the DRAM per Inferentia. Other Neuron packages are not changed.

August 8, 2020 Release
----------------------

This release of the Neuron SDK delivers performance enhancements for the BERT Base model. Sequence lengths including 128, 256 and 512 were found to have best performance at batch size 6, 3 and 1, respectively, using publicly available versions of both PyTorch (1.5.x) and TensorFlow-based (1.15.x) models. The compiler option "-O2" was used in all cases.

A new Kubernetes scheduler extension is included in this release to improve pod scheduling on inf1.6xlarge and inf1.24xlarge instance sizes. Details on how the scheduler works and how to apply it can be found at :ref:`neuron-k8-scheduler-ext`. Check :ref:`neuron-k8-rn` for details on changes to Kubernetes components going forward.

August 4, 2020 Release
----------------------

Bug fix for a latent issue caused by a race condition in the Neuron Runtime leading to possible crashes. The crash was observed under stress load conditions. All customers are encouraged to update to the latest Neuron Runtime package (aws-neuron-runtime), version 1.0.8813.0 or newer. Other Neuron packages are being updated as well, but are to be considered non-critical updates.

July 16, 2020 Release
---------------------

This release of the Neuron SDK adds support for the OpenPose (posenet) neural network. An example of using OpenPose for end to end inference is available at :ref:`/src/examples/tensorflow/openpose_demo/openpose.ipynb`.

A new PyTorch auto-partitioner feature now automatically builds a Neuron specific graph representation of PyTorch models. The key benefit of this feature is automatic partitioning of the model graph to run the supported operators on the NeuronCores and the rest on the host. PyTorch auto-partitioner is enabled by default, with the ability to disable it if a manual partition is needed. More details at :ref:`neuron-pytorch`.
The release also includes various bug fixes and increased operator support.

Important to know:
^^^^^^^^^^^^^^^^^^

1. This update moves the supported version of PyTorch to the current release (PyTorch 1.5.1)
2. This release supports Python 3.7 Conda packages in addition to Python 3.6 Conda packages

June 18, 2020 Release
---------------------

Point fix for an error related to yum downgrade/update of Neuron Runtime packages. The prior release fails to successfully downgrade/update the Neuron Runtime Base package and Neuron Runtime package when using yum on Amazon Linux 2. Please remove and then install both packages on AL2 using these commands:

::

   # Amazon Linux 2
   sudo yum remove aws-neuron-runtime-base
   sudo yum remove aws-neuron-runtime
   sudo yum install aws-neuron-runtime-base
   sudo yum install aws-neuron-runtime

Jun 11, 2020 Release
--------------------

This Neuron release provides support for the recent launch of EKS for Inf1 instance types and numerous other improvements. More details about how to use EKS with the Neuron SDK can be found in AWS documentation `here <https://docs.aws.amazon.com/eks/latest/userguide/inferentia-support.html>`__.

This release adds initial support for OpenPose PoseNet for images with resolutions up to 400x400.

This release also adds a '-O2' option to the Neuron Compiler. '-O2' can help with handling of large tensor inputs.

In addition, the Neuron Compiler increments the version of the compiled artifacts, called "NEFF", to version 1.0. Neuron Runtime versions earlier than the 1.0.6905.0 release in May 2020 will not be able to execute NEFFs compiled from this release forward. Please see :ref:`neff-support-table` for compatibility.

Stay up to date on future improvements and new features by following the :ref:`neuron_roadmap`.

Refer to the detailed release notes for more information on each Neuron component.

.. _important-to-know-1:

Important to know:
^^^^^^^^^^^^^^^^^^

1. Size of neural network.
   The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of a neural network is influenced by a number of factors, including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...). Using the Neuron Compiler '-O2' option can help with handling of large tensor inputs for some models. If not used, Neuron limits the size of CNN models like ResNet to an input size of 480x480 fp16/32, batch size=4; LSTM models like GNMT to a time step limit of 900; and MLP models like BERT to an input size limit of sequence length=128, batch=8.

2. INT8 data type is not currently supported by the Neuron compiler.
3. Neuron does not support TensorFlow 2 or PyTorch 1.4.0.

May 15, 2020 Release
--------------------

Point fix for an error related to installation of the Neuron Runtime Base package. The prior release fails to successfully start Neuron Discovery when the Neuron Runtime package is not also installed. This scenario of running Neuron Discovery alone is critical to users of Neuron in container environments.

Please update the aws-neuron-runtime-base package:

::

   # Ubuntu 18 or 16:
   sudo apt-get update
   sudo apt-get install aws-neuron-runtime-base

   # Amazon Linux, Centos, RHEL
   sudo yum update
   sudo yum install aws-neuron-runtime-base

May 11, 2020 Release
--------------------

This release provides additional throughput improvements to running inference on a variety of models; for example, BERTlarge throughput has improved by an additional 35% compared to the previous release, with peak throughput of 360 seq/second on inf1.xlarge (more details at :ref:`tensorflow-bert-demo`).

In addition to the performance boost, this release adds PyTorch and MXNet framework support for BERT models, as well as expanded container support in preparation for an upcoming EKS launch.
We continue to work on new features and improving performance further; to stay up to date, follow this repository and our :ref:`neuron_roadmap`.

Refer to the detailed release notes for more information for each Neuron component.

.. _important-to-know-2:

Important to know:
^^^^^^^^^^^^^^^^^^

1. Size of neural network.

   The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of a neural network is influenced by a number of factors, including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...). As a result, we limit the sizes of CNN models like ResNet to an input size limit of 480x480 fp16/32, batch size=4; LSTM models like GNMT to a time step limit of 900; and MLP models like BERT to an input size limit of sequence length=128, batch=8.

2. INT8 data type is not currently supported by the Neuron compiler.
3. Neuron does not support TensorFlow 2 or PyTorch 1.4.0.

Mar 26, 2020 Release
--------------------

This release supports a variant of the SSD object detection network; an SSD inference demo is available at :ref:`tensorflow-ssd300`.

This release also enhances our TensorBoard support to enable CPU-node visibility.

Refer to the detailed release notes for more information for each Neuron component.

.. _important-to-know-3:

Important to know:
^^^^^^^^^^^^^^^^^^

1. Size of neural network.

   The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of a neural network is influenced by a number of factors, including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...).
   As a result, we limit the sizes of CNN models like ResNet to an input size limit of 480x480 fp16/32, batch size=4; LSTM models like GNMT to a time step limit of 900; and MLP models like BERT to an input size limit of sequence length=128, batch=8.

2. INT8 data type is not currently supported by the Neuron compiler.
3. Neuron does not support TensorFlow 2 or PyTorch 1.4.0.

Feb 27, 2020 Release
--------------------

This release improves performance throughput by up to 10%; for example, ResNet-50 on inf1.xlarge has increased from 1800 img/sec to 2040 img/sec. Neuron logs include more detailed messages, and the release contains various bug fixes. Refer to the detailed release notes for more details.

We continue to work on new features and improving performance further; to stay up to date, follow this repository and watch the `AWS Neuron developer forum <https://forums.aws.amazon.com/forum.jspa?forumID=355>`__.

.. _important-to-know-4:

Important to know:
^^^^^^^^^^^^^^^^^^

1. Size of neural network.

   The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of a neural network is influenced by a number of factors, including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...). As a result, we limit the sizes of CNN models like ResNet to an input size limit of 480x480 fp16/32, batch size=4; LSTM models like GNMT to a time step limit of 900; and MLP models like BERT to an input size limit of sequence length=128, batch=8.

2. Computer-vision object detection and segmentation models are not yet supported.
3. INT8 data type is not currently supported by the Neuron compiler.
4. Neuron does not support TensorFlow 2 or PyTorch 1.4.0.
Jan 28, 2020 Release
--------------------

This release brings significant throughput improvements to running inference on a variety of models; for example, ResNet-50 throughput is increased by 63% (measured 1800 img/sec on inf1.xlarge, up from 1100/sec, and measured 2300/sec on inf1.2xlarge). BERTbase throughput has improved by 36% compared to the re:Invent launch (up to 26100 seq/sec from 19200 seq/sec on inf1.24xlarge), and BERTlarge improved by 15% (230 seq/sec, compared to 200, running on inf1.2xlarge). In addition to the performance boost, this release includes various bug fixes as well as additions to GitHub, with :ref:`neuron-features-index` diving deep on how Neuron performance features work, and overall improved documentation following customer input.

We continue to work on new features and improving performance further; to stay up to date, follow this repository and watch the `AWS Neuron developer forum <https://forums.aws.amazon.com/forum.jspa?forumID=355>`__.

.. _important-to-know-5:

Important to know:
^^^^^^^^^^^^^^^^^^

1. Size of neural network.

   The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of a neural network is influenced by a number of factors, including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...). As a result, we limit the sizes of CNN models like ResNet to an input size limit of 480x480 fp16/32, batch size=4; LSTM models like GNMT to a time step limit of 900; and MLP models like BERT to an input size limit of sequence length=128, batch=8.

2. Computer-vision object detection and segmentation models are not yet supported.
3. INT8 data type is not currently supported by the Neuron compiler.
4. Neuron does not support TensorFlow 2 or PyTorch 1.4.0.
Neuron SDK Release Notes Structure
----------------------------------

The Neuron SDK is delivered through commonly used package managers (e.g. PIP, APT and YUM). These packages are then themselves packaged into Conda packages that are integrated into the AWS DLAMI for minimal developer overhead.

The Neuron SDK release notes follow a similar structure, with the core improvements and known issues reported in the release notes of the primary packages (e.g. Neuron-Runtime or Neuron-Compiler release notes), and additional release notes specific to the package integration reported through their dedicated release notes (e.g. Conda or DLAMI release notes).
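The package-manager delivery described above can be inspected on a given host; a minimal, read-only sketch (the ``grep`` patterns and the ``python3 -m pip`` invocation are illustrative assumptions, not part of the Neuron SDK):

```shell
# List Neuron-related packages installed through the package managers named
# above. Read-only; prints a placeholder when nothing matches.
PIP_NEURON="$(python3 -m pip list 2>/dev/null | grep -i neuron || true)"
echo "== pip packages =="
echo "${PIP_NEURON:-(none found via pip)}"

# dpkg covers APT-based hosts, rpm covers YUM-based hosts; whichever is absent
# simply contributes nothing.
SYS_NEURON="$(dpkg -l 2>/dev/null | grep -i aws-neuron || rpm -qa 2>/dev/null | grep -i aws-neuron || true)"
echo "== system packages =="
echo "${SYS_NEURON:-(none found via apt/yum)}"
```

On a host without any Neuron packages, both sections print their "(none found ...)" placeholder, which makes the script safe to run anywhere.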
.. _prev-n1-rn:

Previous Release Notes (Neuron 1.x)
===================================

.. contents:: Table of contents
   :local:
   :depth: 1

Neuron 1.19.2 (08/02/2022)
--------------------------

**Neuron 1.19.2** is a patch release. The release includes a :ref:`security update <ndriver_2_3_26_0>` for the Neuron Driver (``aws-neuron-dkms``) and a compiler bug fix that ignores MXNet dropout for 'training' while performing inference. Please update the Neuron Driver to the latest version (2.3.26 or newer) so that you can benefit from the operational and security updates included in this release.

.. important::

   You must update to the latest Neuron Driver (aws-neuron-dkms version 2.3.26 or newer) before installing or upgrading to the latest Neuron release.

   * Uninstall ``aws-neuron-dkms`` by running: ``sudo apt remove aws-neuron-dkms`` or ``sudo yum remove aws-neuron-dkms``
   * Install or upgrade to the latest Neuron driver (``aws-neuron-dkms``) by following the ":ref:`install-guide-index`" instructions.

Neuron 1.19.1 (05/27/2022)
--------------------------

**Neuron 1.19.1** is a patch release. This release fixes a bug in the Neuron Driver (``aws-neuron-dkms``). Neuron driver version 2.3.11, included in this release, fixes a bug that causes a kernel panic when a large memory allocation on a Neuron device fails. Neuron Driver 2.3.11 also introduces new functionality required by the upcoming Neuron 1.20.0 release. Because the new functionality is mandatory for Neuron 1.20.0 support, Neuron Driver 2.3.11 adds a compatibility check that prevents Neuron 1.20.0 from running with older versions of the driver. An attempt to run Neuron 1.20.0 with an older version of the driver will result in the application terminating with an error message.
In addition, this release updates ``tensorflow-neuron`` installation instructions to pin the ``protobuf`` version to avoid `compatibility issues <https://github.com/protocolbuffers/protobuf/issues/10051>`__ with older versions of TensorFlow.

.. important::

   For successful installation or update to next releases (Neuron 1.20.0 and newer):

   * Uninstall ``aws-neuron-dkms`` by running: ``sudo apt remove aws-neuron-dkms`` or ``sudo yum remove aws-neuron-dkms``
   * Install or upgrade to the latest Neuron driver (``aws-neuron-dkms``) by following the ":ref:`install-guide-index`" instructions.
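The uninstall/reinstall sequence in the note above can be wrapped in a small helper; a hedged sketch that only prints the commands for the detected package manager (the detection logic is an illustrative assumption; run the printed commands to actually apply the upgrade):

```shell
# Print the aws-neuron-dkms upgrade steps for this host's package manager.
# Dry-run only: nothing is removed or installed by this script.
if command -v apt-get >/dev/null 2>&1; then
    PKG_REMOVE="sudo apt remove aws-neuron-dkms"
    PKG_INSTALL="sudo apt-get install aws-neuron-dkms"
else
    PKG_REMOVE="sudo yum remove aws-neuron-dkms"
    PKG_INSTALL="sudo yum install aws-neuron-dkms"
fi
echo "Step 1 (uninstall): ${PKG_REMOVE}"
echo "Step 2 (reinstall): ${PKG_INSTALL}"
```

Printing rather than executing keeps the sketch safe on hosts where the driver must not be touched; follow the ":ref:`install-guide-index`" instructions for the authoritative procedure.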
Neuron 1.19.0 (04/29/2022)
--------------------------

**Neuron 1.19.0** release adds support for PyTorch version 1.11, updates torch-neuron 1.10 to 1.10.2, and adds support for TensorFlow version 2.8, as well as minor enhancements and bug fixes.

Please note that starting with this release (*Neuron 1.19.0*), installing ``aws-neuron-runtime-base`` and ``oci-add-hooks`` is no longer required for the Neuron Kubernetes device driver plugin. In addition, starting with this release, *torch-neuron 1.5* :ref:`will no longer be supported <eol-pt-15>`.

Neuron 1.18.0 (03/25/2022)
--------------------------

**Neuron 1.18.0** release introduces the beta release of :ref:`NeuronPerf <neuronperf>`. NeuronPerf is a Python library with a simple API that enables fast measurements of performance when running models with Neuron. This release adds 5 new models to the :ref:`appnote-performance-benchmark`, together with the NeuronPerf scripts used to compile these models and run the benchmarks.

This release also introduces additional ``torch-neuron`` packages that support the C++11 ABI, updates TensorFlow-Neuron 2.5 to 2.5.3, adds support for TensorFlow-Neuron 2.6 and 2.7, and introduces the Runtime ``NEURON_RT_NUM_CORES`` :ref:`environment variable <nrt-configuration>`. In addition, this release includes minor enhancements and bug fixes in the Compiler, Neuron Framework Extensions, Runtime 2.x library and tools. See the detailed release notes below.

Starting with this release, *TensorFlow Neuron versions 2.1, 2.2, 2.3 and 2.4* will :ref:`no longer be supported <eol-tf-21-24>`. We will also :ref:`stop supporting PyTorch Neuron version 1.5 <announce-eol-pt-1-5>` starting with the Neuron 1.19.0 release, and :ref:`will stop supporting <eol-ncgs-env_2>` the ``NEURONCORE_GROUP_SIZES`` environment variable starting with the Neuron 1.20.0 release.

Neuron 1.17.2 (02/18/2022)
--------------------------

**Neuron 1.17.2** is a patch release. This release fixes a bug in TensorFlow Neuron versions 2.1, 2.2,
2.3 and 2.4. The fixed bug was causing a memory leak of 128 bytes for each inference.

Starting with this release, TensorFlow Neuron versions 2.1, 2.2, 2.3 and 2.4 are :ref:`entering maintenance mode <maintenance_tf21_tf24>`. Future releases of TensorFlow Neuron versions 2.1, 2.2, 2.3 and 2.4 will address security issues only.

Neuron 1.17.1 (02/16/2022)
--------------------------

**Neuron 1.17.1** is a patch release. This release fixes a bug in TensorFlow Neuron that caused a memory leak. The memory leak was approximately 128 bytes for each inference and exists in all TensorFlow Neuron versions that are part of the Neuron 1.16.0 to Neuron 1.17.0 releases; see :ref:`pre-release-content` for the exact versions included in each release. This release only fixes the memory leak for TensorFlow Neuron versions 1.15 and 2.5. The other versions of TensorFlow Neuron will be fixed in a shortly upcoming release.

Neuron 1.17.0 (01/20/2022)
--------------------------

**Neuron 1.17.0** release introduces support for PyTorch 1.10, updates TensorFlow 2.5 to version 2.5.2, and adds new operator support in PyTorch and TensorFlow 1.15, in addition to enhancements and bug fixes in PyTorch, TensorFlow, MXNet, the Compiler, the Runtime and Tools.

- **PyTorch**

  * First PyTorch 1.10 support.
  * Added new operators support.
  * See :ref:`pytorch-neuron-rn` and :ref:`neuron-cc-ops-pytorch` for more details.

- **TensorFlow 2.x**

  * Updated TensorFlow 2.5 to version 2.5.2.
  * Updated tensorflow-model-server 2.5 to version 2.5.3.
  * See :ref:`tensorflow-neuron-rn-v2` and :ref:`tensorflow-modelserver-rn-v2` for more details.

- **TensorFlow 1.15**

  * Added new operators support.
  * See :ref:`tensorflow-neuron-rn` and :ref:`neuron-cc-ops-tensorflow` for more details.

- **MXNet**

  * Added support for ``mx_neuron.__version__`` to get the build version of the MXNet Neuron plugin.
  * See :ref:`mxnet-neuron-rn` for more details.
- **Tools 2.x**

  * ``neuron-top`` - Added an "all" tab that aggregates all running Neuron processes into a single view.
  * ``neuron-top`` - Improved startup time by approximately 1.5 seconds in most cases.
  * See :ref:`neuron-tools-rn` for more details.

- **Compiler**

  * Enhancements and minor bug fixes.
  * See :ref:`neuron-cc-rn` for more details.

- **Runtime 2.x**

  * Enhancements and minor bug fixes.
  * See :ref:`neuron-runtime-release-notes` for more details.

Neuron 1.16.3 (01/05/2022)
--------------------------

**Neuron 1.16.3** is a minor release. This release includes performance enhancements and operator support in :ref:`PyTorch Neuron <pytorch-neuron-rn>` and minor bug fixes in the :ref:`Neuron Compiler <neuron-cc-rn>`.

Neuron 1.16.2 (12/15/2021)
--------------------------

**Neuron 1.16.2** is a patch release. This release includes performance enhancements and minor bug fixes in the :ref:`Neuron Compiler <neuron-cc-rn>` and :ref:`PyTorch Neuron <pytorch-neuron-rn>`.

Neuron 1.16.1 (11/05/2021)
--------------------------

**Neuron 1.16.1** is a patch release. This release fixes a bug in the Neuron Runtime that would have prevented users from launching a container that doesn't use all of the Neuron Devices in the instance. If you are using Neuron within a container, please update to this new release by updating to the latest Neuron ML framework package, Neuron Tools, and/or TensorFlow Neuron Model Server.

* To update to latest PyTorch 1.9.1: ``pip install --upgrade torch-neuron neuron-cc[tensorflow] torchvision``
* To update to latest TensorFlow 2.5.1: ``pip install --upgrade tensorflow-neuron[cc]``
* To update to latest TensorFlow 1.15.5: ``pip install --upgrade tensorflow-neuron==1.15.5.* neuron-cc``
* To update to latest MXNet 1.8.0: ``pip install --upgrade mx_neuron neuron-cc``

For more details on how to update the framework packages, please check out our :ref:`setup-guide-index`.
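The per-framework update commands above can be selected programmatically; a sketch where ``FRAMEWORK`` is an illustrative shell variable (not a Neuron setting), and the chosen command is printed rather than executed:

```shell
# Map a framework name to the matching pip upgrade command from the list above.
FRAMEWORK="${FRAMEWORK:-pytorch}"
case "$FRAMEWORK" in
    pytorch)     CMD="pip install --upgrade torch-neuron neuron-cc[tensorflow] torchvision" ;;
    tensorflow2) CMD="pip install --upgrade tensorflow-neuron[cc]" ;;
    tensorflow1) CMD="pip install --upgrade tensorflow-neuron==1.15.5.* neuron-cc" ;;
    mxnet)       CMD="pip install --upgrade mx_neuron neuron-cc" ;;
    *) echo "unknown framework: $FRAMEWORK" >&2; exit 1 ;;
esac
echo "Would run: $CMD"   # dry-run; execute the printed command to upgrade
```

For example, ``FRAMEWORK=tensorflow2 sh update.sh`` would print the TensorFlow 2.5.1 upgrade command; the :ref:`setup-guide-index` remains the authoritative source for these commands.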
Neuron 1.16.0 (10/27/2021)
--------------------------

**Neuron 1.16.0 is a release that requires your attention**. **You must update to the latest Neuron Driver (** ``aws-neuron-dkms`` **version 2.1 or newer) for successful installation or upgrade**.

This release introduces :ref:`Neuron Runtime 2.x <introduce-libnrt>`, upgrades :ref:`PyTorch Neuron <neuron-pytorch>` to PyTorch 1.9.1, adds support for new APIs (:func:`torch.neuron.DataParallel` and ``torch_neuron.is_available()``), adds new features and capabilities (the compiler ``--fast-math`` :ref:`option for better fine-tuning of accuracy/performance <neuron-cc-training-mixed-precision>` and the :ref:`MXNet FlexEG feature <flexeg>`), improves :ref:`tools <neuron-tools>`, adds support for additional :ref:`operators <neuron-supported-operators>`, improves :ref:`performance <appnote-performance-benchmark>` (up to 20% additional throughput and up to 25% lower latency), and reduces model loading times. It also simplifies the :ref:`Neuron installation steps <install-guide-index>` and improves the user experience of :ref:`container creation and deployment <neuron-containers>`. In addition, it includes bug fixes, new :ref:`application notes <neuron-appnotes>`, updated :ref:`tutorials <neuron-tutorials>`, and announcements of software :ref:`deprecation <software-deprecation>` and :ref:`maintenance <software-maintenance>`.

- **Neuron Runtime 2.x**

  - :ref:`introduce-libnrt` - In this release we are introducing Neuron Runtime 2.x. The new runtime is a shared library (``libnrt.so``), replacing Neuron Runtime 1.x, which was a server daemon (``neuron-rtd``). Upgrading to ``libnrt.so`` is expected to improve throughput and latency, simplify the Neuron installation and upgrade process, introduce new capabilities for allocating NeuronCores to applications, streamline container creation, and deprecate tools that are no longer needed.
    The new library-based runtime (``libnrt.so``) is directly integrated into Neuron's ML frameworks (with the exception of MXNet 1.5) and Neuron Tools packages. As a result, users no longer need to install/deploy the ``aws-neuron-runtime`` package.

    .. important::

       - You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer) for proper functionality of the new runtime library.
       - Read the :ref:`introduce-libnrt` application note, which describes :ref:`why we are making this change <introduce-libnrt-why>` and how :ref:`this change will affect the Neuron SDK <introduce-libnrt-how-sdk>` in detail.
       - Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information on how to migrate your application.

- **Performance**

  - Updated :ref:`performance numbers <appnote-performance-benchmark>`
  - Improved performance: up to 20% additional throughput and up to 25% lower latency.

- **Documentation resources**

  - Improved :ref:`Neuron Setup Guide <install-guide-index>`.
  - New :ref:`introduce-libnrt` application note.
  - New :ref:`bucketing_app_note` application note.
  - New :ref:`neuron-cc-training-mixed-precision` application note.
  - New :ref:`torch-neuron-dataparallel-app-note` application note.
  - New :ref:`flexeg` application note.
  - New :ref:`parallel-exec-ncgs` application note.
  - New :ref:`Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving <tensorflow-serving-neuronrt-visible-cores>` tutorial.
  - Updated :ref:`ResNet50 model for Inferentia </src/examples/pytorch/resnet50.ipynb>` tutorial to use :func:`torch.neuron.DataParallel`.

- **PyTorch**

  - PyTorch now supports Neuron Runtime 2.x only. Please visit :ref:`introduce-libnrt` for more information.
  - Introducing PyTorch 1.9.1 support.
  - Introducing new APIs: :func:`torch.neuron.DataParallel` (see the :ref:`torch-neuron-dataparallel-app-note` application note for more details) and ``torch_neuron.is_available()``.
  - Introducing :ref:`new operators support <neuron-cc-ops-pytorch>`.
  - For more information visit :ref:`neuron-pytorch`

- **TensorFlow 2.x**

  - TensorFlow 2.x now supports Neuron Runtime 2.x only. Please visit :ref:`introduce-libnrt` for more information.
  - Updated TensorFlow 2.3.x from TensorFlow 2.3.3 to TensorFlow 2.3.4.
  - Updated TensorFlow 2.4.x from TensorFlow 2.4.2 to TensorFlow 2.4.3.
  - Updated TensorFlow 2.5.x from TensorFlow 2.5.0 to TensorFlow 2.5.1.
  - Introducing :ref:`new operators support <tensorflow-ref-neuron-accelerated-ops>`
  - For more information visit :ref:`tensorflow-neuron`

- **TensorFlow 1.x**

  - TensorFlow 1.x now supports Neuron Runtime 2.x only. Please visit :ref:`introduce-libnrt` for more information.
  - Introducing :ref:`new operators support <neuron-cc-ops-tensorflow>`.
  - For more information visit :ref:`tensorflow-neuron`

- **MXNet 1.8**

  - MXNet 1.8 now supports Neuron Runtime 2.x only. Please visit :ref:`introduce-libnrt` for more information.
  - Introducing the Flexible Execution Groups (FlexEG) feature.
  - MXNet 1.5 enters maintenance mode. Please visit :ref:`maintenance_mxnet_1_5` for more information.
  - For more information visit :ref:`neuron-mxnet`

- **Neuron Compiler**

  - Introducing the ``--fast-math`` option for better fine-tuning of accuracy/performance. See :ref:`neuron-cc-training-mixed-precision`
  - Support added for new ArgMax and ArgMin operators. See :ref:`neuron-cc-rn`.
  - For more information visit :ref:`neuron-cc`

- **Neuron Tools**

  - Updates have been made to ``neuron-ls`` and ``neuron-top`` to improve the interface and utility of information provided.
  - ``neuron-monitor`` has been enhanced to include additional information when used to monitor the latest frameworks released with Neuron 1.16.0. See :ref:`neuron-tools-rn`.
  - ``neuron-cli`` is entering maintenance mode, as its use is no longer relevant when using ML frameworks with an integrated Neuron Runtime (libnrt.so).
  - For more information visit :ref:`neuron-tools`

- **Neuron Containers**

  - Starting with Neuron 1.16.0, installation of Neuron ML frameworks now includes an integrated Neuron Runtime library. As a result, it is no longer required to deploy ``neuron-rtd``. Please visit :ref:`introduce-libnrt` for more information.
  - When using containers built with components from Neuron 1.16.0 or newer, please use ``aws-neuron-dkms`` version 2.1 or newer and the latest version of ``aws-neuron-runtime-base``. Passing additional system capabilities is no longer required.
  - For more information visit :ref:`neuron-containers`

- **Neuron Driver**

  - Support is added for Neuron Runtime 2.x (libnrt.so).
  - Memory improvements have been made to ensure all allocations are made with 4K alignments.

- **Software Deprecation**

  - :ref:`eol-ncgs-env`
  - :ref:`eol-ncg`

- **Software maintenance mode**

  - :ref:`maintenance_rtd`
  - :ref:`maintenance_mxnet_1_5`
  - :ref:`maintenance_neuron-cli`

Neuron 1.15.2 (09/22/2021)
--------------------------

Neuron 1.15.2 includes bug fixes for the tensorflow-model-server-neuron 2.5.1.1.6.8.0 package and several other bug fixes for the tensorflow-neuron/tensorflow-model-server-neuron packages.

Neuron 1.15.1 (08/30/2021)
--------------------------

Neuron 1.15.1 includes bug fixes for the aws-neuron-dkms package and several other bug fixes for related packages.

Neuron 1.15.0 (08/12/2021)
--------------------------

Neuron 1.15.0 is the first release to support TensorFlow 2. In this release, TensorFlow 2 supports language transformer base models like BERT. TensorFlow 2 support will be enhanced in future releases to support additional models.

* **TensorFlow 2.x** - To get started with TensorFlow 2.x:

  * Run the TensorFlow 2 :ref:`HuggingFace distilBERT Tutorial </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>`.
  * Read :ref:`tf2_faq`
  * See the newly introduced :ref:`TensorFlow 2.x (tensorflow-neuron) Tracing API <tensorflow-ref-neuron-tracing-api>`.
* See :ref:`tensorflow-ref-neuron-accelerated-ops`. * **Documentation** * **New** :ref:`models-inferentia` application note added in this release. This application note describes what types of deep learning model architectures perform well out of the box and provides guidance on techniques you can use to optimize your deep learning models for Inferentia. * **New** :ref:`Neuron inference performance page <appnote-performance-benchmark>` provides performance information for popular models and links to test these models in your own environment. The data includes throughput and latency numbers, and cost per inference, for both realtime and offline applications. * **New** :ref:`TensorFlow 2 HuggingFace distilBERT Tutorial </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>`. * **New** :ref:`Bring your own HuggingFace pretrained BERT container to Sagemaker Tutorial </src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb>`. * **More information** * :ref:`tensorflow-neuron-rn` * :ref:`neuron-cc-rn` * :ref:`tensorflow-modelserver-rn` .. _07-02-2021-rn: Neuron 1.14.2 (07/26/2021) -------------------------- This release (Neuron 1.14.2) includes bug fixes and minor enhancements to Neuron Runtime: * Neuron Runtime - see :ref:`neuron-runtime-release-notes` Neuron 1.14.1 (07/02/2021) -------------------------- This release (Neuron 1.14.1) includes bug fixes and minor enhancements: * PyTorch Neuron - This release adds "Dynamic Batching" feature support, see the PyTorch-Neuron trace Python API for more information. The release also adds support for new operators and includes additional bug fixes and minor enhancements, for more information see :ref:`pytorch-neuron-rn`. * TensorFlow Neuron - see :ref:`tensorflow-neuron-rn`. * MXNet Neuron - see :ref:`mxnet-neuron-rn`. * Neuron Compiler - see :ref:`neuron-cc-rn`. * Neuron Runtime - see :ref:`neuron-runtime-release-notes`. * Neuron Tools - see :ref:`neuron-tools-rn`. ..
_05-28-2021-rn: Neuron 1.14.0 (05/28/2021) -------------------------- This release (Neuron 1.14.0) introduces the first release of PyTorch Neuron 1.8.1, tutorial updates, performance enhancements and memory optimizations for PyTorch Neuron, TensorFlow Neuron and MXNet Neuron. * PyTorch Neuron - First release of PyTorch Neuron 1.8.1. * PyTorch Neuron - Convolution operator support has been extended to include ConvTranspose2d variants. * PyTorch Neuron - Updated tutorials to use Hugging Face Transformers 4.6.0. * PyTorch Neuron - Additional performance enhancements, memory optimizations, and bug fixes, see :ref:`pytorch-neuron-rn`. * Neuron Compiler - New feature - Uncompressed NEFF format for faster loading of models prior to inference. Enable it with ``--enable-fast-loading-neuron-binaries``. Some cases of large models may be detrimentally impacted as the NEFF will not be compressed, but many cases will benefit. * Neuron Compiler - Additional performance enhancements, memory optimizations, and bug fixes, see :ref:`neuron-cc-rn`. * TensorFlow Neuron - Performance enhancements, memory optimizations, and bug fixes, see :ref:`tensorflow-neuron-rn`. * MXNet Neuron - Enhancements and minor bug fixes (MXNet 1.8), see :ref:`mxnet-neuron-rn`. * Neuron Runtime - Performance enhancements, memory optimizations, and bug fixes, see :ref:`neuron-runtime-release-notes`. * Neuron Tools - Minor bug fixes and enhancements. * Software Deprecation * End of support for Neuron Conda packages in the Deep Learning AMI; users should use pip upgrade commands to upgrade to the latest Neuron version in the DLAMI, see the `blog <https://aws.amazon.com/blogs/developer/neuron-conda-packages-eol/>`_. * End of support for Ubuntu 16, see the :ref:`documentation <eol-ubuntu16>`. Neuron 1.13.0 (05/01/2021) -------------------------- This release introduces higher performance, updated framework support, new tutorials, and additional models and tools: * Additional compiler improvements boost performance by up to 20% higher throughput compared to the previous release across model types. * Improved usability for NLP models, with out-of-the-box 12x higher throughput at 70% lower cost for Hugging Face Transformers pre-trained BERT Base models, see :ref:`pytorch-tutorials-neuroncore-pipeline-pytorch`.
* Upgrade Apache MXNet (Incubating) to 1.8, where Neuron is now a plugin, see :ref:`mxnet-neuron-rn`. * PyTorch ResNext models are now functional with new operator support, see :ref:`pytorch-neuron-rn`. * PyTorch Yolov5 support, see :ref:`pytorch-neuron-rn`. * MXNet (Incubating): Gluon API and Neuron support for NLP BERT models, see :ref:`mxnet-neuron-rn`. * PyTorch Convolution operator support has been extended to include most Conv1d and Conv3d variants, please see :ref:`neuron-cc-ops-pytorch` for the complete list of operators. * First release of the Neuron plugin for TensorBoard, see :ref:`neuron-tensorboard-rn`. **Software Deprecation** * :ref:`eol-conda-packages` * :ref:`eol-ubuntu16` * :ref:`eol-classic-tensorboard` .. _03-04-2021-rn: March 4, 2021 Release (Patch) ----------------------------- This release includes bug fixes and minor enhancements to the Neuron Runtime and Tools. February 24, 2021 Release (Patch) --------------------------------- This release updates all Neuron packages and libraries in response to the Python security issue CVE-2021-3177 as described here: https://nvd.nist.gov/vuln/detail/CVE-2021-3177. This vulnerability potentially exists in multiple versions of Python including 3.5, 3.6, and 3.7. Python is used by various components of Neuron, including the Neuron compiler as well as Machine Learning frameworks including TensorFlow, PyTorch and Apache MXNet (Incubating). It is recommended that the Python interpreters used in any AMIs and containers used with Neuron are also updated. Python 3.5 reached `end-of-life <https://peps.python.org/pep-0478/>`_; from this release onward, Neuron packages will not support Python 3.5. Users should upgrade to the latest DLAMI, or upgrade to a newer Python version if they are using another AMI. January 30, 2021 Release -------------------------- This release continues to improve NeuronCore Pipeline performance for BERT models.
For example, running BERT Base with the neuroncore-pipeline-cores compile option, at batch=3, seqlen=32 using 16 Neuron Cores, results in throughput of up to 5340 sequences per second and P99 latency of 9ms using Tensorflow Serving. This release also adds operator support and performance improvements for the PyTorch based DistilBert model for sequence classification. December 23, 2020 Release -------------------------- This release introduces a PyTorch 1.7 based torch-neuron package as a part of the Neuron SDK. Support for PyTorch model serving with TorchServe 0.2 is added and will be demonstrated with a tutorial. This release also provides an example tutorial for the PyTorch based Yolo v4 model for Inferentia. To aid visibility into compiler activity, the Neuron-extended frameworks TensorFlow and PyTorch will display a new compilation status indicator that prints a dot (.) every 20 seconds to the console as compilation is executing. Important to know: ^^^^^^^^^^^^^^^^^^ 1. This update continues to support the torch-neuron version of PyTorch 1.5.1 for backwards compatibility. 2. As Python 3.5 reached end-of-life in October 2020, and many packages including TorchVision and Transformers have stopped support for Python 3.5, we will begin to stop supporting Python 3.5 for frameworks, starting with PyTorch-Neuron version :ref:`neuron-torch-11170` in this release. You can continue to use older versions with Python 3.5. November 17, 2020 Release -------------------------- This release improves NeuronCore Pipeline performance. For example, running BERT Small, batch=4, seqlen=32 using 4 Neuron Cores, results in throughput of up to 7000 sequences per second and P99 latency of 3ms using Tensorflow Serving. Neuron tools updated the NeuronCore utilization metric to include all inf1 compute engines and DMAs. Added a new neuron-monitor example that connects to Grafana via Prometheus.
We've added a new sample script which exports most of neuron-monitor's metrics to a Prometheus monitoring server. Additionally, we also provided a sample Grafana dashboard. More details at :ref:`neuron-tools`. ONNX support is limited, and from this version onwards we are not planning to add any additional capabilities to ONNX. We recommend running models in TensorFlow, PyTorch or MXNet for best performance and support. October 22, 2020 Release -------------------------- This release adds a Neuron kernel mode driver (KMD). The Neuron KMD simplifies Neuron Runtime deployments by removing the need for elevated privileges, improves memory management by removing the need for huge pages configuration, and eliminates the need for running neuron-rtd as a sidecar container. Documentation throughout the repo has been updated to reflect the new support. The new Neuron KMD is backwards compatible with prior versions of Neuron ML Frameworks and Compilers - no changes are required to existing application code. More details in the Neuron Runtime release notes at :ref:`neuron-runtime`. September 22, 2020 Release -------------------------- This release improves performance of YOLO v3 and v4, VGG16, SSD300, and BERT. As part of these improvements, the Neuron Compiler doesn't require any special compilation flags for most models. Details on how to use the prior optimizations are outlined in the neuron-cc release notes: :ref:`neuron-cc-rn`. The release also improves operational deployments of large scale inference applications, with a session management agent incorporated into all supported ML Frameworks and a new Neuron tool called neuron-monitor that makes it easy to scale monitoring of large fleets of inference applications. A sample script for connecting neuron-monitor to Amazon CloudWatch metrics is provided as well. Read more about using neuron-monitor at :ref:`neuron-monitor-ug`. August 19, 2020 Release -------------------------- Bug fix for an error reporting issue with the Neuron Runtime.
Previous versions of the runtime were only reporting uncorrectable errors on half of the DRAM per Inferentia. Other Neuron packages are not changed. August 8, 2020 Release -------------------------- This release of the Neuron SDK delivers performance enhancements for the BERT Base model. Sequence lengths of 128, 256 and 512 were found to have best performance at batch sizes 6, 3 and 1 respectively, using publicly available versions of both Pytorch (1.5.x) and Tensorflow-based (1.15.x) models. The compiler option "-O2" was used in all cases. A new Kubernetes scheduler extension is included in this release to improve pod scheduling on inf1.6xlarge and inf1.24xlarge instance sizes. Details on how the scheduler works and how to apply the scheduler can be found at :ref:`neuron-k8-scheduler-ext`. Check the :ref:`neuron-k8-rn` for detailed changes to k8 components going forward. August 4, 2020 Release -------------------------- Bug fix for a latent issue caused by a race condition in the Neuron Runtime leading to possible crashes. The crash was observed under stress load conditions. All customers are encouraged to update to the latest Neuron Runtime package (aws-neuron-runtime), version 1.0.8813.0 or newer. Other Neuron packages are being updated as well, but are to be considered non-critical updates. July 16, 2020 Release -------------------------- This release of the Neuron SDK adds support for the OpenPose (posenet) neural network. An example of using OpenPose for end to end inference is available at :ref:`/src/examples/tensorflow/openpose_demo/openpose.ipynb`. A new PyTorch auto-partitioner feature now automatically builds a Neuron specific graph representation of PyTorch models. The key benefit of this feature is automatically partitioning the model graph to run the supported operators on the NeuronCores and the rest on the host. PyTorch auto-partitioner is enabled by default, with the ability to disable it if a manual partition is needed. More details at :ref:`neuron-pytorch`.
The release also includes various bug fixes and increased operator support. Important to know: ^^^^^^^^^^^^^^^^^^ 1. This update moves the supported version of PyTorch to the current release (PyTorch 1.5.1) 2. This release supports Python 3.7 Conda packages in addition to Python 3.6 Conda packages June 18, 2020 Release -------------------------- Point fix for an error related to yum downgrade/update of Neuron Runtime packages. The prior release fails to successfully downgrade/update the Neuron Runtime Base package and Neuron Runtime package when using yum on Amazon Linux 2. Please remove and then install both packages on AL2 using these commands: :: # Amazon Linux 2 sudo yum remove aws-neuron-runtime-base sudo yum remove aws-neuron-runtime sudo yum install aws-neuron-runtime-base sudo yum install aws-neuron-runtime Jun 11, 2020 Release -------------------------- This Neuron release provides support for the recent launch of EKS for Inf1 instance types and numerous other improvements. More details about how to use EKS with the Neuron SDK can be found in AWS documentation `here <https://docs.aws.amazon.com/eks/latest/userguide/inferentia-support.html>`__. This release adds initial support for OpenPose PoseNet for images with resolutions up to 400x400. This release also adds a '-O2' option to the Neuron Compiler. '-O2' can help with handling of large tensor inputs. In addition the Neuron Compiler increments the version of the compiled artifacts, called "NEFF", to version 1.0. Neuron Runtime versions earlier than the 1.0.6905.0 release in May 2020 will not be able to execute NEFFs compiled from this release forward. Please see :ref:`neff-support-table` for compatibility. Stay up to date on future improvements and new features by following the :ref:`neuron_roadmap`. Refer to the detailed release notes for more information on each Neuron component. .. _important-to-know-1: Important to know: ^^^^^^^^^^^^^^^^^^ 1. Size of neural network.
The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of the neural network is influenced by a number of factors including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...). Using the Neuron Compiler '-O2' option can help with handling of large tensor inputs for some models. If not used, Neuron limits the size of CNN models like ResNet to an input size of 480x480 fp16/32, batch size=4; LSTM models like GNMT to a time step limit of 900; and MLP models like BERT to an input size limit of sequence length=128, batch=8. 2. INT8 data type is not currently supported by the Neuron compiler. 3. Neuron does not support TensorFlow 2 or PyTorch 1.4.0. May 15, 2020 Release -------------------------- Point fix for an error related to installation of the Neuron Runtime Base package. The prior release fails to successfully start Neuron Discovery when the Neuron Runtime package is not also installed. This scenario of running Neuron Discovery alone is critical to users of Neuron in container environments. Please update the aws-neuron-runtime-base package: :: # Ubuntu 18 or 16: sudo apt-get update sudo apt-get install aws-neuron-runtime-base # Amazon Linux, Centos, RHEL sudo yum update sudo yum install aws-neuron-runtime-base May 11, 2020 Release -------------------------- This release provides additional throughput improvements to running inference on a variety of models; for example BERTlarge throughput has improved by an additional 35% compared to the previous release, with peak throughput of 360 seq/second on inf1.xlarge (more details at :ref:`tensorflow-bert-demo`). In addition to the performance boost, this release adds PyTorch and MXNet framework support for BERT models, as well as expanded container support in preparation for an upcoming EKS launch.
We continue to work on new features and improving performance further; to stay up to date follow this repository and our :ref:`neuron_roadmap`. Refer to the detailed release notes for more information on each Neuron component. .. _important-to-know-2: Important to know: ^^^^^^^^^^^^^^^^^^ 1. Size of neural network. The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of the neural network is influenced by a number of factors including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...). As a result, we limit the sizes of CNN models like ResNet to have an input size limit of 480x480 fp16/32, batch size=4; LSTM models like GNMT to have a time step limit of 900; MLP models like BERT to have an input size limit of sequence length=128, batch=8. 2. INT8 data type is not currently supported by the Neuron compiler. 3. Neuron does not support TensorFlow 2 or PyTorch 1.4.0. Mar 26, 2020 Release -------------------------- This release supports a variant of the SSD object detection network; an SSD inference demo is available at :ref:`tensorflow-ssd300`. This release also enhances our Tensorboard support to enable CPU-node visibility. Refer to the detailed release notes for more information on each Neuron component. .. _important-to-know-3: Important to know: ^^^^^^^^^^^^^^^^^^ 1. Size of neural network. The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of the neural network is influenced by a number of factors including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...).
As a result, we limit the sizes of CNN models like ResNet to have an input size limit of 480x480 fp16/32, batch size=4; LSTM models like GNMT to have a time step limit of 900; MLP models like BERT to have an input size limit of sequence length=128, batch=8. 2. INT8 data type is not currently supported by the Neuron compiler. 3. Neuron does not support TensorFlow 2 or PyTorch 1.4.0. Feb 27, 2020 Release -------------------------- This release improves performance throughput by up to 10%; for example, ResNet-50 on inf1.xlarge has increased from 1800 img/sec to 2040 img/sec. Neuron logs include more detailed messages, and this release contains various bug fixes. Refer to the detailed release notes for more details. We continue to work on new features and improving performance further; to stay up to date follow this repository, and watch the `AWS Neuron developer forum <https://forums.aws.amazon.com/forum.jspa?forumID=355>`__. .. _important-to-know-4: Important to know: ^^^^^^^^^^^^^^^^^^ 1. Size of neural network. The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of the neural network is influenced by a number of factors including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...). As a result, we limit the sizes of CNN models like ResNet to have an input size limit of 480x480 fp16/32, batch size=4; LSTM models like GNMT to have a time step limit of 900; MLP models like BERT to have an input size limit of sequence length=128, batch=8. 2. Computer-vision object detection and segmentation models are not yet supported. 3. INT8 data type is not currently supported by the Neuron compiler. 4. Neuron does not support TensorFlow 2 or PyTorch 1.4.0.
Jan 28, 2020 Release -------------------------- This release brings significant throughput improvements to running inference on a variety of models; for example, Resnet50 throughput is increased by 63% (measured 1800 img/sec on inf1.xlarge, up from 1100/sec, and measured 2300/sec on inf1.2xlarge). BERTbase throughput has improved by 36% compared to the re:Invent launch (up to 26100 seq/sec from 19200 seq/sec on inf1.24xlarge), and BERTlarge improved by 15% (230 seq/sec, compared to 200, running on inf1.2xlarge). In addition to the performance boost, this release includes various bug fixes as well as additions to GitHub, with :ref:`neuron-features-index` diving deep on how Neuron performance features work, and overall improved documentation following customer input. We continue to work on new features and improving performance further; to stay up to date follow this repository, and watch the `AWS Neuron developer forum <https://forums.aws.amazon.com/forum.jspa?forumID=355>`__. .. _important-to-know-5: Important to know: ^^^^^^^^^^^^^^^^^^ 1. Size of neural network. The current Neuron compiler release has a limitation in terms of the size of neural network it can effectively optimize. The size of the neural network is influenced by a number of factors including: a) type of neural network (CNN, LSTM, MLP), b) number of layers, c) sizes of input (dimension of the tensors, batch size, ...). As a result, we limit the sizes of CNN models like ResNet to have an input size limit of 480x480 fp16/32, batch size=4; LSTM models like GNMT to have a time step limit of 900; MLP models like BERT to have an input size limit of sequence length=128, batch=8. 2. Computer-vision object detection and segmentation models are not yet supported. 3. INT8 data type is not currently supported by the Neuron compiler. 4. Neuron does not support TensorFlow 2 or PyTorch 1.4.0.
Neuron SDK Release Notes Structure ---------------------------------- The Neuron SDK is delivered through commonly used package managers (e.g. PIP, APT and YUM). These packages are then themselves packaged into Conda packages that are integrated into the AWS DLAMI for minimal developer overhead. The Neuron SDK release notes follow a similar structure, with the core improvements and known-issues reported in the release notes of the primary packages (e.g. Neuron-Runtime or Neuron-Compiler release notes), and additional release notes specific to the package-integration are reported through their dedicated release notes (e.g. Conda or DLAMI release notes).
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/setup/notebook/running-jupyter-notebook-as-script.rst.txt
``` .. _running-jupyter-notebook-as-script: Running Jupyter Notebook as script ================================== Converting the Jupyter Notebook and running ------------------------------------------- Go into the aws-neuron-sdk repository directory containing the Jupyter Notebook (.ipynb file): .. code:: bash cd aws-neuron-sdk/src/examples/<framework like pytorch, tensorflow, etc> The Jupyter Notebook (.ipynb) can be converted to a Python script using jupyter-nbconvert. For example: .. code:: bash jupyter nbconvert --to script tutorial_pretrained_bert.ipynb The converted script can then be run in the virtual env (if needed): .. code:: bash # if not already in the virtual env, source activate <virtual env> # Run the converted script python <tutorial.py> ```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-hardware/neuron-core-v1.rst.txt
``` .. _neuroncores-v1-arch: NeuronCore-v1 Architecture -------------------------- NeuronCore-v1 is the first generation of the NeuronCore engine, powering the Inferentia NeuronDevices. Each NeuronCore-v1 is a fully-independent heterogeneous compute-unit, with 3 main engines (Tensor/Vector/Scalar Engines) and on-chip software-managed SRAM memory (compiler managed, for maximum data locality and optimized data prefetch). .. image:: /images/nc-v1.png The ScalarEngine is optimized for scalar computations, in which every element of the output is dependent on one element of the input, e.g. non-linearities like GELU, SIGMOID or EXP. The ScalarEngine is highly parallelized, and can process 512 floating point operations per cycle. It can handle various data-types, including FP16, BF16, FP32, INT8, INT16 and INT32. The VectorEngine is optimized for vector computations, in which every element of the output is dependent on multiple input elements. Examples include 'axpy' operations (Z=aX+Y), Layer Normalization, Pooling operations, and many more. The VectorEngine is also highly parallelized, and can perform 256 floating point operations per cycle. It can handle various data-types, including FP16, BF16, FP32, INT8, INT16 and INT32. The TensorEngine is based on a power-optimized systolic array which is highly optimized for tensor computations (e.g. GEMM, CONV, Reshape, Transpose), and supports mixed-precision computations (FP16/BF16/INT8 inputs, FP32/INT32 outputs). Each NeuronCore-v1 TensorEngine delivers 16 TFLOPS of FP16/BF16 tensor computations. ```
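The three computation classes described above map naturally onto simple loop structures. The pure-Python sketch below is illustrative only — it models the *shape* of each workload (scalar: one input element per output; vector: several input elements per output, e.g. axpy Z=aX+Y; tensor: a small GEMM), not the hardware or any Neuron API:

```python
import math

def scalar_op(x):
    # ScalarEngine-style: each output element depends on exactly one input element.
    return [math.exp(v) for v in x]

def axpy(a, x, y):
    # VectorEngine-style: each output element combines multiple inputs (Z = aX + Y).
    return [a * xi + yi for xi, yi in zip(x, y)]

def gemm(A, B):
    # TensorEngine-style: matrix multiply; each output element is a dot product.
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

print(axpy(2.0, [1.0, 2.0], [3.0, 4.0]))  # [5.0, 8.0]
print(gemm([[1, 2]], [[3], [4]]))         # [[11]]
```

On the hardware each class runs on a different engine, which is why the compiler schedules them independently.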
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-async-lazy-load.rst.txt
``` .. _torch_neuronx_lazy_async_load_api: PyTorch Neuron (``torch-neuronx``) Lazy and Asynchronous Loading API ====================================================================== The :func:`torch_neuronx.lazy_load` and :func:`torch_neuronx.async_load` Python APIs allow for more fine-grained control of loading a model onto the Neuron cores. They are designed to enable different load behaviours (i.e. lazy or asynchronous loading) that, in certain cases, can speed up the load time. Both APIs take as input a :class:`~torch.jit.ScriptModule` model created by :ref:`torch_neuronx_trace_api`. **They should be called immediately after** :func:`torch_neuronx.trace` **returns, before saving the model via** :func:`torch.jit.save`. .. py:function:: torch_neuronx.lazy_load(trace, enable_lazy_load=True) Enables (or disables) lazy load behaviour on the traced Neuron ScriptModule ``trace``. By default, lazy load behaviour is disabled, so this API must be called immediately after :func:`torch_neuronx.trace` returns if lazy load behaviour is desired. In this context, lazy loading means that **calling** ``torch.jit.load`` **will not immediately load the model onto the Neuron core.** Instead, the model will be loaded onto the Neuron core at a later time, either via a call to :ref:`torch_neuronx_dataparallel_api`, or automatically when the model's ``forward`` method executes. There are several scenarios where lazy loading is useful. For instance, if one wants to use the DataParallel API to load the model onto multiple Neuron cores, typically one would first call ``torch.jit.load`` to load the saved model from disk, and then call ``DataParallel`` on the object returned by ``torch.jit.load``. Doing this will cause redundant loading, because calling ``torch.jit.load`` first will by default load the model onto one Neuron core, while calling ``DataParallel`` next will first unload the model from the Neuron core, and then load it again according to user-specified ``device_ids``.
This redundant load is avoided if one enables lazy loading by calling ``torch_neuronx.lazy_load`` prior to saving the model. This way, ``torch.jit.load`` will not load the model onto the Neuron core, so ``DataParallel`` can directly load the model onto the desired cores. *Required Arguments* :arg ~torch.jit.ScriptModule trace: Model created by the :ref:`torch_neuronx_trace_api`, for which lazy loading is to be enabled. *Optional Arguments* :arg bool enable_lazy_load: Whether to enable lazy loading, defaults to True. Simple example usage: >>> neuron_model = torch_neuronx.trace(model, inputs) >>> torch_neuronx.lazy_load(neuron_model) >>> torch.jit.save(neuron_model, "my_model") Then some time later: >>> neuron_model = torch.jit.load("my_model") # neuron_model will not be loaded onto the Neuron core until it is run or it is passed to DataParallel .. py:function:: torch_neuronx.async_load(trace, enable_async_load=True) Enables (or disables) asynchronous load behaviour on the traced Neuron ScriptModule ``trace``. By default, loading onto the Neuron core is a synchronous, blocking operation. This API can be called immediately after :func:`torch_neuronx.trace` returns in order to make loading this model onto the Neuron core a non-blocking operation. This means that when a load onto the Neuron core is triggered, either through a call to ``torch.jit.load`` or ``DataParallel``, a new thread is launched to perform the load, while the calling function will immediately return. The load will proceed asynchronously in the background, and only when it finishes will the model successfully execute. If the model's ``forward`` method is invoked before the asynchronous load finishes, ``forward`` will wait until the load completes before executing the model. This API is useful when one wants to load multiple models onto the Neuron core in parallel.
    The API allows multiple load calls for different models to run concurrently on different
    threads, which can significantly reduce the total load time when there are multiple CPU cores on
    the host. It is especially useful in cases where a single model pipeline has several compiled
    Neuron models. In this case, one can enable asynchronous load on each Neuron model and load all
    of them in parallel.

    Note that this API differs from :func:`torch_neuronx.lazy_load`. Lazy loading only delays the
    load onto the Neuron core from when ``torch.jit.load`` is called to some later time, but when
    the load does occur, it is still a synchronous, blocking operation. Asynchronous loading makes
    the load an asynchronous, non-blocking operation, but it does not delay when the load starts:
    calling ``torch.jit.load`` will still start the load, and the load will proceed asynchronously
    in the background.

    *Required Arguments*

    :arg ~torch.jit.ScriptModule trace: Model created by the :ref:`torch_neuronx_trace_api`,
        for which asynchronous loading is to be enabled.

    *Optional Arguments*

    :arg bool enable_async_load: Whether to enable asynchronous loading, defaults to True.

    Simple example usage:

    >>> neuron_model1 = torch_neuronx.trace(model1, inputs1)
    >>> torch_neuronx.async_load(neuron_model1)
    >>> torch.jit.save(neuron_model1, "my_model1")

    >>> neuron_model2 = torch_neuronx.trace(model2, inputs2)
    >>> torch_neuronx.async_load(neuron_model2)
    >>> torch.jit.save(neuron_model2, "my_model2")

    Then some time later:

    >>> # Each load starts immediately but runs in a background thread,
    >>> # so neuron_model1 and neuron_model2 load concurrently.
    >>> neuron_model1 = torch.jit.load("my_model1")
    >>> neuron_model2 = torch.jit.load("my_model2")
    >>> output1 = neuron_model1(input1)  # Blocks until the asynchronous load launched above finishes.
    >>> output2 = neuron_model2(input2)  # Blocks until the asynchronous load launched above finishes.

Using :func:`torch_neuronx.lazy_load` and :func:`torch_neuronx.async_load` Together
-----------------------------------------------------------------------------------

You can also enable lazy load and asynchronous load together for the same model. To do so, simply
call each API independently before saving the model with ``torch.jit.save``:

>>> neuron_model = torch_neuronx.trace(model, inputs)
>>> torch_neuronx.lazy_load(neuron_model)
>>> torch_neuronx.async_load(neuron_model)
>>> torch.jit.save(neuron_model, "my_model")

This will both delay loading the model onto the Neuron core, and make the load asynchronous.

For another example usage, please refer to the `GitHub sample
<https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/inference/hf_pretrained_sd2_512_inference.ipynb>`_
we provide for running inference on HuggingFace Stable Diffusion 2.1, where we use both
``lazy_load`` and ``async_load`` to speed up the total load time of the four Neuron models that make
up that pipeline.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_model_index_guide.rst.txt
```
.. _neuronperf_model_index_guide:

============================
NeuronPerf Model Index Guide
============================

A **model index** is a JSON file that tracks information about one or more compiled models. You can
generate one using ``compile``, by using the API described here, or you may create one manually in a
text editor.

After a call to ``compile`` you may notice that you now have a ``models`` directory. You will also
spot a new file named something like ``model_83b3raj2.json`` in your local directory, if you didn't
provide a ``filename`` yourself.

A model index is not intended to be opaque; you should feel free to open, inspect, and modify it
yourself. It contains some information about the artifacts that were compiled. Individual models
referenced by the index can be handed to ``benchmark`` directly along with an example input, or you
may pass the entire index as in the basic example above.

Here is an example index:

.. code:: bash

   python3 -m json.tool model_index.json

.. code:: json

   {
       "version": "0.0.0.0+0bc220a",
       "model_configs": [
           {
               "filename": "models/model_b1_p1_38793jda.pt",
               "input_idx": 0,
               "batch_size": 1,
               "pipeline_size": 1,
               "compile_s": 5.32
           }
       ]
   }

An index is useful for keeping track of your compiled artifacts and their parameters. The advantages
of using ``neuronperf.[torch/tensorflow/mxnet].compile`` are clearer when we wish to compile
multiple variants of our model and benchmark all of them at the same time.

All of the model artifacts and the index can be destroyed using
``model_index.delete('model_index.json')``.

Benchmarking
============

When benchmarking with an index, there are some important details to keep in mind. If you originally
built the index using a set of inputs, the model index has associated the ``inputs`` with the
compiled models by their positional index. For example:

.. code:: python

   batch_sizes = [1, 2]
   inputs = [torch.zeros((b, 100)) for b in batch_sizes]

Here, ``inputs[0]`` corresponds to batch size 1.
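The positional association can be sketched in plain Python. This toy index uses only the fields from
the example above; it is an illustration of the lookup, not the full schema:

.. code:: python

   # Toy model index mirroring the example fields above.
   model_index = {
       "model_configs": [
           {"filename": "models/model_b1.pt", "input_idx": 0, "batch_size": 1},
           {"filename": "models/model_b2.pt", "input_idx": 1, "batch_size": 2},
       ]
   }

   # At benchmark time, inputs must sit at the same positions they had at compile time.
   inputs = ["input_for_batch_1", "input_for_batch_2"]

   resolved = {
       config["filename"]: inputs[config["input_idx"]]  # resolved purely by position
       for config in model_index["model_configs"]
   }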
Therefore, the model index will contain a reference to input 0 for that model. When you call
``benchmark``, you must pass inputs with the same shape in the same positions as at compile time.

.. note::
   It's only necessary that there is an input with the correct shape at ``inputs[input_index]``.
   The example data itself is not important.

Working with Indexes
--------------------

The API detail below describes utilities for working with indexes. An ``index`` can be either a
loaded index (JSON) or the path to an index (it will be loaded automatically).

Creating
========

.. code:: python

   index = neuronperf.model_index.create('/path/to/model', batch_size=1)
   filename = neuronperf.model_index.save(index)

Once you have an index, you can pass its path directly to ``benchmark``. You can also pass a custom
filename instead:

.. code:: python

   index = neuronperf.model_index.create('/path/to/model', batch_size=1)
   neuronperf.model_index.save(index, 'my_index.json')

Appending
=========

If **multiple models use the same inputs**, you can append them together. For example, if you have
the same batch size with multiple pipeline sizes, the inputs are the same, but the model changes.

.. code:: python

   pipeline_sizes = [1, 2, 3, 4]
   indexes = [neuronperf.model_index.create(f'/path/to/model_p{p}', pipeline_size=p, batch_size=5) for p in pipeline_sizes]
   index = neuronperf.model_index.append(*indexes)
   neuronperf.model_index.save(index, 'my_index.json')

Filtering
=========

You can construct a new model index that is filtered by some parameter. For example, to get a new
index with only batch sizes [1, 2], you could do:

.. code:: python

   new_index = neuronperf.model_index.filter(index, batch_sizes=[1, 2])

You can also benchmark a subset of a model index by passing only the subset parameters of interest,
but remember to ensure you provide the correct number of inputs for the index (even if some are not
used). For example, if you have an index with models at ``batch_sizes = [1, 2, 3]``, but only wish
to benchmark batch size 2:

.. code:: python

   batch_sizes = [1, 2, 3]
   inputs = [torch.zeros((b, 100)) for b in batch_sizes]
   reports = neuronperf.torch.benchmark('model_index.json', inputs, batch_sizes=2)

Copying
=======

You can copy an index to a new location with
``neuronperf.model_index.copy(index, new_index_name, new_index_dir)``. This is mostly useful in
combination with ``filter``/``append``.

Deleting
========

If you wish to keep your compiled models, just delete the model index file yourself. If you want to
delete your model index and all associated artifacts, use:

.. code:: python

   neuronperf.model_index.delete('my_index.json')
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/containers/neuron-k8.rst.txt
```
.. _neuron-k8-rn:

Neuron K8 Release Notes
^^^^^^^^^^^^^^^^^^^^^^^

.. contents:: Table of contents
   :local:
   :depth: 1

Introduction
============

This document lists the current release notes for AWS Neuron Kubernetes (k8) components. Neuron K8
components include a device plugin and a scheduler extension to assist with deployment and
management of inf/trn nodes within Kubernetes clusters. Both components are offered as pre-built
containers in Public ECR and ready for deployment.

- **Device Plugin:** public.ecr.aws/neuron/neuron-device-plugin:2.x.y.z
- **Neuron Scheduler:** public.ecr.aws/neuron/neuron-scheduler:2.x.y.z

It's recommended to pin the version of the components used and to never use the "latest" tag. To get
the list of image tags (2.x.y.z), please refer to these notes or check the image tags on the repo
directly.

To pull the images from ECR:

::

   docker pull public.ecr.aws/neuron/neuron-device-plugin:2.x.y.z
   docker pull public.ecr.aws/neuron/neuron-scheduler:2.x.y.z

.. _21618:

Neuron K8 release [2.16.18.0]
=============================

Date: 09/01/2023

Major New Features
------------------

- Previously, the Neuron Device indexing was assigned randomly, which made programming difficult.
  Changed to using 0-based indexing for Neuron Devices and NeuronCores in EKS container
  environments; requires Neuron Driver version 2.12.14 or newer.
- Improved logging when the Neuron Driver is not installed/present.

Bug Fixes
---------

- Fixed Neuron Device Plugin crash when the Neuron Driver is not installed/present on the host.
- Fixed issue where pods fail to deploy when multiple containers are requesting Neuron resources.
- Fixed issue where launching many pods, each requesting Neuron cores, fails to deploy.

.. _2100:

Neuron K8 release [2.1.0.0]
===========================

Date: 10/27/2022

Summary
-------

- Added support for NeuronCore based scheduling to the Neuron Kubernetes Scheduler. Learn more about
  how to use NeuronCores for finer grain control over container scheduling by following the K8
  tutorials documentation in the :ref:`containers section <neuron_containers>`.

.. _2000:

Neuron K8 release [2.0.0.0]
===========================

Date: 10/10/2022

Summary
-------

- Added support for TRN1 and INF1 EC2 instance types.

Neuron K8 release [1.9.3.0]
===========================

Date: 08/02/2022

Summary
-------

- Minor updates.

Neuron K8 release [1.9.2.0]
===========================

Date: 05/27/2022

Summary
-------

- Minor updates.

Neuron K8 release [1.9.0.0]
===========================

Date: 04/29/2022

Summary
-------

- Minor updates.

Neuron K8 release [1.8.2.0]
===========================

Date: 03/25/2022

Summary
-------

- Minor updates.

Neuron K8 release [1.7.7.0]
===========================

Date: 01/20/2022

Summary
-------

Minor updates.

Neuron K8 release [1.7.3.0]
===========================

Date: 10/27/2021

Summary
-------

Minor updates.

.. _1622:

[1.6.22.0]
==========

Date: 08/30/2021

Summary
-------

Minor updates.

.. _1615:

[1.6.15.0]
==========

Date: 08/06/2021

Summary
-------

Minor updates.

.. _1670:

[1.6.7.0]
=========

Date: 07/26/2021

Summary
-------

Minor internal enhancements.

.. _1600:

[1.6.0.0]
=========

Date: 07/02/2021

Summary
-------

Minor internal enhancements.

.. _1530:

[1.5.3.0]
=========

Date: 05/01/2021

Summary
-------

Minor internal enhancements.

.. _1410:

[1.4.1.0]
=========

Date: 01/30/2021

Summary
-------

Minor internal enhancements.

.. _1320:

[1.3.2.0]
=========

Date: 12/23/2020

Summary
-------

Minor internal enhancements.

.. _1200:

[1.2.0.0]
=========

Date: 11/17/2020

Summary
-------

Minor internal enhancements.

.. _11230:

[1.1.23.0]
==========

Date: 10/22/2020

.. _summary-1:

Summary
-------

Support added for use with Neuron Runtime 1.1. More details in the Neuron Runtime release notes at
:ref:`neuron-runtime-release-notes`.

.. _11170:

[1.1.17.0]
==========

Date: 09/22/2020

Summary
-------

Minor internal enhancements.

.. _10110000:

[1.0.11000.0]
=============

Date: 08/08/2020

.. _summary-2:

Summary
-------

First release of the Neuron K8 Scheduler extension.

Major New Features
------------------

- A new scheduler extension is provided to ensure that kubelet schedules pods on inf1 with
  contiguous device ids. Additional details about the new scheduler are provided at
  :ref:`neuron-k8-scheduler-ext`, including instructions on how to apply it.
- NOTE: The scheduler is only required when using inf1.6xlarge and/or inf1.24xlarge.
- With this release the device plugin now requires RBAC permission changes to get/patch NODE/POD
  objects. Please apply the
  :github:`k8s-neuron-device-plugin-rbac.yml </src/k8/k8s-neuron-device-plugin-rbac.yml>` before
  using the new device plugin.

Resolved Issues
---------------

- Scheduler is intended to address https://github.com/aws/aws-neuron-sdk/issues/110
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_api.rst.txt
```
.. _neuronperf_api:

NeuronPerf API
==============

.. contents:: Table of Contents
   :local:
   :depth: 2

Due to a bug in Sphinx, some of the type annotations may be incomplete. You can
:download:`download the source code here </src/neuronperf.tar.gz>`. In the future, the source will
be hosted in a more browsable way.

.. py:function:: compile(compile_fn, model, inputs, batch_sizes: Union[int, List[int]] = None, pipeline_sizes: Union[int, List[int]] = None, performance_levels: Union[str, List[int]] = None, models_dir: str = "models", filename: str = None, compiler_args: dict = None, verbosity: int = 1, *args, **kwargs) -> str

    Compiles the provided model with each provided example input, pipeline size, and performance
    level. Any additional ``compiler_args`` passed will be forwarded to the compiler on every
    invocation.

    :param model: The model to compile.
    :param list inputs: A list of example inputs.
    :param batch_sizes: A list of batch sizes that correspond to the example inputs.
    :param pipeline_sizes: A list of pipeline sizes to use. See :ref:`neuroncore-pipeline`.
    :param performance_levels: A list of performance levels to try. Options are: 0 (max accuracy), 1, 2, 3 (max performance, default). See :ref:`neuron-cc-training-mixed-precision`.
    :param str models_dir: The directory where compilation artifacts will be stored.
    :param str model_name: An optional model name tag to apply to compiled artifacts.
    :param str filename: The name of the model index to write out. If not provided, a name will be generated and returned.
    :param dict compiler_args: Additional compiler arguments to be forwarded with every compilation.
    :param int verbosity: 0 = error, 1 = info, 2 = debug
    :return: A model index filename. If a configuration fails to compile, it will not be included in the index and an error will be logged.
    :rtype: str

.. _neuronperf_api_benchmark:

.. py:function:: benchmark(load_fn: Callable[[str, int], Any], model_filename: str, inputs: Any, batch_sizes: Union[int, List[int]] = None, duration: float = BENCHMARK_SECS, n_models: Union[int, List[int]] = None, pipeline_sizes: Union[int, List[int]] = None, cast_modes: Union[str, List[str]] = None, workers_per_model: Union[int, None] = None, env_setup_fn: Callable[[int, Dict], None] = None, setup_fn: Callable[[int, Dict, Any], None] = None, preprocess_fn: Callable[[Any], Any] = None, postprocess_fn: Callable[[Any], Any] = None, dataset_loader_fn: Callable[[Any, int], Any] = None, verbosity: int = 1, multiprocess: bool = True, multiinterpreter: bool = False, return_timers: bool = False, device_type: str = "neuron") -> List[Dict]

    Benchmarks the model index or individual model using the provided inputs. If a model index is
    provided, additional fields such as ``pipeline_sizes`` and ``performance_levels`` can be used to
    filter the models to benchmark. The default behavior is to benchmark all configurations in the
    model index.

    :param load_fn: A function that accepts a model filename and device id, and returns a loaded model. This is automatically passed through the subpackage calls (e.g. ``neuronperf.torch.benchmark``).
    :param str model_filename: A path to a model index from ``compile`` or a path to an individual model. For CPU benchmarking, a class should be passed that can be instantiated with a default constructor (e.g. ``MyModelClass``).
    :param list inputs: A list of example inputs. If the list contains tuples, they will be destructured on inference to support multiple arguments.
    :param batch_sizes: A list of ints indicating batch sizes that correspond to the inputs. Assumes 1 if not provided.
    :param float duration: The number of seconds to benchmark each model.
    :param n_models: The number of models to run in parallel.
Default behavior runs 1 model and the max number of models possible, determined by a best effort from ``device_type``, instance size, or other environment state. :param pipeline_sizes: A list of pipeline sizes to use. See :ref:`neuroncore-pipeline`. :param performance_levels: A list of performance levels to try. Options are: 0 (max accuracy), 1, 2, 3 (max performance, default). See :ref:`neuron-cc-training-mixed-precision`. :param workers_per_model: The number of workers to use per model loaded. If ``None``, this is automatically selected. :param env_setup_fn: A custom environment setup function to run in each subprocess before model loading. It will receive the benchmarker id and config. :param setup_fn: A function that receives the benchmarker id, config, and model to perform last minute configuration before inference. :param preprocess_fn: A custom preprocessing function to perform on each input before inference. :param postprocess_fn: A custom postprocessing function to perform on each input after inference. :param bool multiprocess: When True, model loading is dispatched to forked subprocesses. Should be left alone unless debugging. :param bool multiinterpreter: When True, benchmarking is performed in a new python interpreter per model. All parameters must be serializable. Overrides multiprocess. :param bool return_timers: When True, the return of this function is a list of tuples ``(config, results)`` with detailed information. This can be converted to reports with ``get_reports(results)``. :param float stats_interval: Collection interval (in seconds) for metrics during benchmarking, such as CPU and memory usage. :param str device_type: This will be set automatically to one of the ``SUPPORTED_DEVICE_TYPES``. :param float cost_per_hour: The price of this device / hour. Used to estimate cost / 1 million infs in reports. :param str model_name: A friendly name for the model to use in reports. :param str model_class_name: Internal use. 
:param str model_class_file: Internal use. :param int verbosity: 0 = error, 1 = info, 2 = debug :return: A list of benchmarking results. :rtype: list[dict] .. py:function:: get_reports(results) Summarizes and combines the detailed results from ``neuronperf.benchmark``, when run with ``return_timers=True``. One report dictionary is produced per model configuration benchmarked. The list of reports can be fed directly to other reporting utilities, such as ``neuronperf.write_csv``. :param list[tuple] results: The list of results from ``neuronperf.benchmark``. :param list[int] batch_sizes: The batch sizes that correspond to the `inputs` provided to ``compile`` and ``benchmark``. Used to correct throughput values in the reports. :return: A list of dictionaries that summarize the results for each model configuration. :rtype: list[dict] .. py:function:: print_reports(reports, cols=SUMMARY_COLS, sort_by="throughput_peak", reverse=False) Print a report to the terminal. Example of default behavior: >>> neuronperf.print_reports(reports) throughput_avg latency_ms_p50 latency_ms_p99 n_models pipeline_size workers_per_model batch_size model_filename 329.667 6.073 6.109 1 1 2 1 models/model_b1_p1_83bh3hhs.pt :param reports: Results from `get_reports`. :param cols: The columns in the report to be displayed. :param sort_by: Sort the cols by the specified key. :param reverse: Sort order. .. py:function:: write_csv(reports: list[dict], filename: str = None, cols=REPORT_COLS) Write benchmarking reports to CSV file. :param list[dict] reports: Results from `neuronperf.get_reports`. :param str filename: Filename to write. If not provided, generated from model_name in report and current timestamp. :param list[str] cols: The columns in the report to be kept. :return: The filename written. :rtype: str .. py:function:: write_json(reports: list[dict], filename: str = None) Writes benchmarking reports to a JSON file. :param list[dict] reports: Results from `neuronperf.get_reports`. 
:param str filename: Filename to write. If not provided, generated from model_name in report and current timestamp. :return: The filename written. :rtype: str .. py:function:: model_index.append(*model_indexes: Union[str, dict]) -> dict: Appends the model indexes non-destructively into a new model index, without modifying any of the internal data. This is useful if you have benchmarked multiple related models and wish to combine their respective model indexes into a single index. Model name will be taken from the first index provided. Duplicate configs will be filtered. :param model_indexes: Model indexes or paths to model indexes to combine. :return: A new dictionary representing the combined model index. :rtype: dict .. py:function:: model_index.copy(old_index: Union[str, dict], new_index: str, new_dir: str) -> str: Copy an index to a new location. Will rename ``old_index`` to ``new_index`` and copy all model files into ``new_dir``, updating the index paths. This is useful for pulling individual models out of a pool. Returns the path to the new index. .. py:function:: model_index.create(filename, input_idx=0, batch_size=1, pipeline_size=1, cast_mode=DEFAULT_CAST, compile_s=None) Create a new model index from a pre-compiled model. :param str filename: The path to the compiled model. :param int input_idx: The index in your inputs that this model should be run on. :param int batch_size: The batch size at compilation for this model. :param int pipeline_size: The pipeline size used at compilation for this model. :param str cast_mode: The casting option this model was compiled with. :param float compile_s: Seconds spent compiling. :return: A new dictionary representing a model index. :rtype: dict .. py:function:: model_index.delete(filename: str): Deletes the model index and all associated models referenced by the index. .. 
py:function:: model_index.filter(index: Union[str, dict], **kwargs) -> dict: Filters provided model index on provided criteria and returns a new index. Each kwarg is a standard (k, v) pair, where k is treated as a filter name and v may be one or more values used to filter model configs. .. py:function:: model_index.load(filename) -> dict: Load a NeuronPerf model index from a file. .. py:function:: model_index.move(old_index: str, new_index: str, new_dir: str) -> str: This is the same as ``copy`` followed by ``delete`` on the old index. .. py:function:: model_index.save(model_index, filename: str = None, root_dir=None) -> str: Save a NeuronPerf model index to a file. ```
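The (k, v) filtering behavior documented for ``model_index.filter`` can be pictured with plain Python. The ``filter_configs`` helper and the flat list of config dicts below are hypothetical illustrations of those semantics only, not NeuronPerf's actual index schema or implementation:

```python
def filter_configs(configs, **kwargs):
    """Keep configs whose value for each key matches one of the allowed values.

    Each kwarg is a (name, value-or-values) pair, mirroring the documented
    behavior: v may be a single value or a collection of acceptable values.
    """
    def matches(config):
        for key, allowed in kwargs.items():
            if not isinstance(allowed, (list, tuple, set)):
                allowed = [allowed]
            if config.get(key) not in allowed:
                return False
        return True
    return [c for c in configs if matches(c)]

configs = [
    {"batch_size": 1, "pipeline_size": 1},
    {"batch_size": 4, "pipeline_size": 1},
    {"batch_size": 4, "pipeline_size": 2},
]

# Single-value and multi-value filters can be combined.
print(filter_configs(configs, batch_size=4, pipeline_size=[1, 2]))
```

Filtering a real index works the same way conceptually: unmatched model configs are dropped and a new index is returned, leaving the original untouched.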
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_downloads/9c91f3c0268c772a942b742486b8c90d/handler_bert.py
```
import os
import json
import sys
import logging
from abc import ABC

import torch
import torch_neuron
from transformers import AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler

# one core per worker
os.environ['NEURON_RT_NUM_CORES'] = '1'

logger = logging.getLogger(__name__)


class BertEmbeddingHandler(BaseHandler, ABC):
    """
    Handler class for Bert Embedding computations.
    """
    def __init__(self):
        super(BertEmbeddingHandler, self).__init__()
        self.initialized = False

    def initialize(self, ctx):
        self.manifest = ctx.manifest
        properties = ctx.system_properties
        self.device = 'cpu'
        model_dir = properties.get('model_dir')
        serialized_file = self.manifest['model']['serializedFile']
        model_pt_path = os.path.join(model_dir, serialized_file)

        # read batching and tokenizer settings from our config file
        with open('config.json') as fp:
            config = json.load(fp)
        self.max_length = config['max_length']
        self.batch_size = config['batch_size']
        self.classes = ['not paraphrase', 'paraphrase']

        self.model = torch.jit.load(model_pt_path)
        logger.debug(f'Model loaded from {model_dir}')
        self.model.to(self.device)
        self.model.eval()

        self.tokenizer = AutoTokenizer.from_pretrained(config['model_name'])
        self.initialized = True

    def preprocess(self, input_data):
        """
        Tokenization pre-processing
        """
        input_ids = []
        attention_masks = []
        token_type_ids = []
        for row in input_data:
            seq_0 = row['seq_0'].decode('utf-8')
            seq_1 = row['seq_1'].decode('utf-8')
            logger.debug(f'Received text: "{seq_0}", "{seq_1}"')

            inputs = self.tokenizer.encode_plus(
                seq_0,
                seq_1,
                max_length=self.max_length,
                padding='max_length',
                truncation=True,
                return_tensors='pt'
            )

            input_ids.append(inputs['input_ids'])
            attention_masks.append(inputs['attention_mask'])
            token_type_ids.append(inputs['token_type_ids'])

        batch = (torch.cat(input_ids, 0),
                 torch.cat(attention_masks, 0),
                 torch.cat(token_type_ids, 0))
        return batch

    def inference(self, inputs):
        """
        Predict the class of a text using a trained transformer model.
        """
        # sanity check dimensions
        assert len(inputs) == 3
        num_inferences = len(inputs[0])
        assert num_inferences <= self.batch_size

        # insert padding if we received a partial batch
        padding = self.batch_size - num_inferences
        if padding > 0:
            pad = torch.nn.ConstantPad1d((0, 0, 0, padding), value=0)
            inputs = [pad(x) for x in inputs]
        outputs = self.model(*inputs)[0]

        predictions = []
        for i in range(num_inferences):
            prediction = self.classes[outputs[i].argmax().item()]
            predictions.append([prediction])
            logger.debug("Model predicted: '%s'", prediction)
        return predictions

    def postprocess(self, inference_output):
        return inference_output
```
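The partial-batch handling in ``inference`` above uses ``torch.nn.ConstantPad1d((0, 0, 0, padding))``, which appends ``padding`` zero-rows to a 2-D tensor so the batch always matches the fixed size the model was compiled with. The same bookkeeping can be sketched without torch; ``pad_batch`` below is a hypothetical helper for illustration, not part of the handler:

```python
def pad_batch(rows, batch_size, width):
    """Append zero-rows so a partial batch reaches the fixed compiled batch size.

    Mirrors ConstantPad1d((0, 0, 0, padding)): nothing is added on the left/right
    or at the top; `padding` zero-rows are appended at the bottom.
    """
    padding = batch_size - len(rows)
    assert padding >= 0, "received more rows than the compiled batch size"
    return rows + [[0] * width for _ in range(padding)]

# Two tokenized sequences arrive, but the model was compiled for batch size 4.
batch = [[101, 2023, 102], [101, 2054, 102]]
padded = pad_batch(batch, batch_size=4, width=3)
print(len(padded))  # 4
```

Only the first ``num_inferences`` outputs are read back after the forward pass, so the zero-padded rows never leak into the returned predictions.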
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb.txt
``` { "cells": [ { "cell_type": "markdown", "id": "4674f667", "metadata": {}, "source": [ "# Deploy a pretrained PyTorch BERT model from HuggingFace on Amazon SageMaker with Neuron container" ] }, { "cell_type": "markdown", "id": "b3e39838", "metadata": {}, "source": [ "## Overview" ] }, { "cell_type": "markdown", "id": "a92c454f", "metadata": {}, "source": [ "In this tutorial we will deploy a pretrained BERT Base model from HuggingFace Transformers on SageMaker, using the [AWS Deep Learning Containers](https://github.com/aws/deep-learning-containers). We will use the same model as shown in the [Neuron Tutorial \"PyTorch - HuggingFace Pretrained BERT Tutorial\"](../../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html#). We will compile the model and build a custom AWS Deep Learning Container, to include the HuggingFace Transformers Library. \n", "\n", "This Jupyter Notebook should run on a ml.c5.4xlarge SageMaker Notebook instance. You can set up your SageMaker Notebook instance by following the [Get Started with Amazon SageMaker Notebook Instances](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-console.html) documentation. \n", "\n", "> We recommend increasing the size of the base root volume of your SM notebook instance, to accommodate the models and containers built locally. A root volume of 10 GB should suffice. 
\n" ] }, { "cell_type": "markdown", "id": "37445ad2", "metadata": {}, "source": [ "## Install Dependencies:" ] }, { "cell_type": "markdown", "id": "3ecd765f", "metadata": {}, "source": [ "This tutorial requires the following pip packages:" ] }, { "cell_type": "markdown", "id": "cae3092c", "metadata": {}, "source": [ "- torch-neuron\n", "- neuron-cc[tensorflow]\n", "- transformers" ] }, { "cell_type": "code", "execution_count": null, "id": "066c3731", "metadata": {}, "outputs": [], "source": [ "%env TOKENIZERS_PARALLELISM=True #Suppresses tokenizer warnings, making errors easier to detect\n", "!pip install --upgrade --no-cache-dir torch-neuron neuron-cc[tensorflow] torchvision torch --extra-index-url=https://pip.repos.neuron.amazonaws.com\n", "!pip install --upgrade --no-cache-dir 'transformers==4.6.0'" ] }, { "cell_type": "markdown", "id": "a4796d3a", "metadata": {}, "source": [ "## Compile the model into an AWS Neuron optimized TorchScript" ] }, { "cell_type": "code", "execution_count": null, "id": "6fe85f8e", "metadata": {}, "outputs": [], "source": [ "import torch\n", "import torch_neuron\n", "\n", "from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig" ] }, { "cell_type": "code", "execution_count": null, "id": "0c5c253a", "metadata": {}, "outputs": [], "source": [ "# Build tokenizer and model\n", "tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased-finetuned-mrpc\")\n", "model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased-finetuned-mrpc\", return_dict=False)\n", "\n", "# Setup some example inputs\n", "sequence_0 = \"The company HuggingFace is based in New York City\"\n", "sequence_1 = \"Apples are especially bad for your health\"\n", "sequence_2 = \"HuggingFace's headquarters are situated in Manhattan\"\n", "\n", "max_length=128\n", "paraphrase = tokenizer.encode_plus(sequence_0, sequence_2, max_length=max_length, padding='max_length', truncation=True, return_tensors=\"pt\")\n", "not_paraphrase = 
tokenizer.encode_plus(sequence_0, sequence_1, max_length=max_length, padding='max_length', truncation=True, return_tensors=\"pt\")\n", "\n", "# Run the original PyTorch model on the compilation example\n", "paraphrase_classification_logits = model(**paraphrase)[0]\n", "\n", "# Convert example inputs to a format that is compatible with TorchScript tracing\n", "example_inputs_paraphrase = paraphrase['input_ids'], paraphrase['attention_mask'], paraphrase['token_type_ids']\n", "example_inputs_not_paraphrase = not_paraphrase['input_ids'], not_paraphrase['attention_mask'], not_paraphrase['token_type_ids']" ] }, { "cell_type": "code", "execution_count": null, "id": "44255ada", "metadata": {}, "outputs": [], "source": [ "%%time\n", "# Run torch.neuron.trace to generate a TorchScript that is optimized by AWS Neuron\n", "# This step may need 3-5 min\n", "model_neuron = torch.neuron.trace(model, example_inputs_paraphrase, verbose=1, compiler_workdir='./compilation_artifacts')" ] }, { "cell_type": "markdown", "id": "5c4752ac", "metadata": {}, "source": [ "You may inspect **model_neuron.graph** to see which part is running on CPU versus running on the accelerator. All native **aten** operators in the graph will be running on CPU." ] }, { "cell_type": "code", "execution_count": null, "id": "dc00889e", "metadata": {}, "outputs": [], "source": [ "# See which part is running on CPU versus running on the accelerator.\n", "print(model_neuron.graph)" ] }, { "cell_type": "markdown", "id": "775fb30d", "metadata": {}, "source": [ "Save the compiled model, so it can be packaged and sent to S3." 
] }, { "cell_type": "code", "execution_count": null, "id": "027c4f53", "metadata": {}, "outputs": [], "source": [ "# Save the TorchScript for later use\n", "model_neuron.save('neuron_compiled_model.pt')" ] }, { "cell_type": "markdown", "id": "d362c579", "metadata": {}, "source": [ "### Package the pre-trained model and upload it to S3\n", "\n", "To make the model available for the SageMaker deployment, you will TAR the serialized graph and upload it to the default Amazon S3 bucket for your SageMaker session. " ] }, { "cell_type": "code", "execution_count": null, "id": "29c7f7b4", "metadata": {}, "outputs": [], "source": [ "# Now you'll create a model.tar.gz file to be used by SageMaker endpoint\n", "!tar -czvf model.tar.gz neuron_compiled_model.pt" ] }, { "cell_type": "code", "execution_count": null, "id": "1beadca0", "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import time\n", "from sagemaker.utils import name_from_base\n", "import sagemaker" ] }, { "cell_type": "code", "execution_count": null, "id": "06ad87d4", "metadata": {}, "outputs": [], "source": [ "# upload model to S3\n", "role = sagemaker.get_execution_role()\n", "sess=sagemaker.Session()\n", "region=sess.boto_region_name\n", "bucket=sess.default_bucket()\n", "sm_client=boto3.client('sagemaker')" ] }, { "cell_type": "code", "execution_count": null, "id": "5205ec55", "metadata": {}, "outputs": [], "source": [ "model_key = '{}/model/model.tar.gz'.format('inf1_compiled_model')\n", "model_path = 's3://{}/{}'.format(bucket, model_key)\n", "boto3.resource('s3').Bucket(bucket).upload_file('model.tar.gz', model_key)\n", "print(\"Uploaded model to S3:\")\n", "print(model_path)" ] }, { "cell_type": "markdown", "id": "e8b425d4", "metadata": {}, "source": [ "## Build and Push the container" ] }, { "cell_type": "markdown", "id": "430e6ed2", "metadata": {}, "source": [ "The following shell code shows how to build the container image using docker build and push the container image to ECR using docker 
push.\n", "The Dockerfile in this example is available in the ***container*** folder.\n", "Here's an example of the Dockerfile:\n", "\n", "```Dockerfile\n", "FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference-neuron:1.7.1-neuron-py36-ubuntu18.04\n", "\n", "# Install packages \n", "RUN pip install \"transformers==4.7.0\"\n", "```" ] }, { "cell_type": "code", "execution_count": null, "id": "3970025d", "metadata": {}, "outputs": [], "source": [ "!cat container/Dockerfile" ] }, { "cell_type": "markdown", "id": "62f78b0f", "metadata": {}, "source": [ "Before running the next cell, make sure your SageMaker IAM role has access to ECR. If not, you can attach the `AmazonEC2ContainerRegistryPowerUser` policy to your IAM role, which allows you to upload image layers to ECR. \n", "\n", "It takes about 5 minutes to build the docker image and upload it to ECR." ] }, { "cell_type": "code", "execution_count": null, "id": "ecd51acf", "metadata": {}, "outputs": [], "source": [ "%%sh\n", "\n", "# The name of our algorithm\n", "algorithm_name=neuron-py36-inference\n", "\n", "cd container\n", "\n", "account=$(aws sts get-caller-identity --query Account --output text)\n", "\n", "# Get the region defined in the current configuration (default to us-west-2 if none defined)\n", "region=$(aws configure get region)\n", "region=${region:-us-west-2}\n", "\n", "fullname=\"${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest\"\n", "\n", "# If the repository doesn't exist in ECR, create it.\n", "\n", "aws ecr describe-repositories --repository-names \"${algorithm_name}\" > /dev/null 2>&1\n", "\n", "if [ $? 
-ne 0 ]\n", "then\n", " aws ecr create-repository --repository-name \"${algorithm_name}\" > /dev/null\n", "fi\n", "\n", "# Get the login command from ECR in order to pull down the SageMaker PyTorch image\n", "aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com\n", "# Build the docker image locally with the image name and then push it to ECR\n", "# with the full name.\n", "docker build -t ${algorithm_name} . --build-arg REGION=${region}\n", "docker tag ${algorithm_name} ${fullname}\n", "\n", "# Get the login command from ECR and execute it directly\n", "aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com\n", "docker push ${fullname}" ] }, { "cell_type": "markdown", "id": "e4f6bbda", "metadata": {}, "source": [ "## Deploy Container and run inference based on the pretrained model" ] }, { "cell_type": "markdown", "id": "64e65e31", "metadata": {}, "source": [ "To deploy a pretrained PyTorch model, you'll need to use the PyTorch estimator object to create a PyTorchModel object and set a different entry_point.\n", "\n", "You'll use the PyTorchModel object to deploy a PyTorchPredictor. This creates a SageMaker Endpoint -- a hosted prediction service that we can use to perform inference." 
] }, { "cell_type": "code", "execution_count": null, "id": "f343d3b1", "metadata": {}, "outputs": [], "source": [ "import sys\n", "\n", "!{sys.executable} -m pip install transformers" ] }, { "cell_type": "code", "execution_count": null, "id": "2bd73b77", "metadata": {}, "outputs": [], "source": [ "import os\n", "import boto3\n", "import sagemaker\n", "\n", "role = sagemaker.get_execution_role()\n", "sess = sagemaker.Session()\n", "\n", "bucket = sess.default_bucket()\n", "prefix = \"inf1_compiled_model/model\"\n", "\n", "# Get container name in ECR\n", "client=boto3.client('sts')\n", "account=client.get_caller_identity()['Account']\n", "\n", "my_session=boto3.session.Session()\n", "region=my_session.region_name\n", "\n", "algorithm_name=\"neuron-py36-inference\"\n", "ecr_image='{}.dkr.ecr.{}.amazonaws.com/{}:latest'.format(account, region, algorithm_name)\n", "print(ecr_image)" ] }, { "cell_type": "markdown", "id": "9298f2a7", "metadata": {}, "source": [ "An implementation of *model_fn* is required for the inference script.\n", "We are going to implement our own **model_fn** and **predict_fn** for Hugging Face BERT, and use the default implementations of **input_fn** and **output_fn** defined in sagemaker-pytorch-containers.\n", "\n", "In this example, the inference script is put in the ***code*** folder. 
Run the next cell to see it:\n" ] }, { "cell_type": "code", "execution_count": null, "id": "cfea75b6", "metadata": {}, "outputs": [], "source": [ "!pygmentize code/inference.py" ] }, { "cell_type": "markdown", "id": "1b31a7b8", "metadata": {}, "source": [ "Path of compiled pretrained model in S3:" ] }, { "cell_type": "code", "execution_count": null, "id": "61f3556e", "metadata": {}, "outputs": [], "source": [ "key = os.path.join(prefix, \"model.tar.gz\")\n", "pretrained_model_data = \"s3://{}/{}\".format(bucket, key)\n", "print(pretrained_model_data)" ] }, { "cell_type": "markdown", "id": "e7557a5f", "metadata": {}, "source": [ "The model object is defined using the SageMaker Python SDK's PyTorchModel, passing in the compiled model data and the entry_point. The endpoint's entry point for inference is defined by model_fn, as seen in the previous code block that prints out **inference.py**. The model_fn function will load the model and the required tokenizer.\n", "\n", "Note that **image_uri** must be your own ECR image." ] }, { "cell_type": "code", "execution_count": null, "id": "0bd99768", "metadata": {}, "outputs": [], "source": [ "from sagemaker.pytorch.model import PyTorchModel\n", "\n", "pytorch_model = PyTorchModel(\n", " model_data=pretrained_model_data,\n", " role=role,\n", " source_dir=\"code\",\n", " framework_version=\"1.7.1\",\n", " entry_point=\"inference.py\",\n", " image_uri=ecr_image\n", ")\n", "\n", "# Let SageMaker know that we've already compiled the model via neuron-cc\n", "pytorch_model._is_compiled_model = True" ] }, { "cell_type": "markdown", "id": "67439fe7", "metadata": {}, "source": [ "The arguments to the deploy function allow us to set the number and type of instances that will be used for the Endpoint.\n", "\n", "Here you will deploy the model to a single **ml.inf1.2xlarge** instance.\n", "It may take 6-10 min to deploy."
] }, { "cell_type": "code", "execution_count": null, "id": "d771fc7c", "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "predictor = pytorch_model.deploy(initial_instance_count=1, instance_type=\"ml.inf1.2xlarge\")" ] }, { "cell_type": "code", "execution_count": null, "id": "ab6342f3", "metadata": {}, "outputs": [], "source": [ "print(predictor.endpoint_name)" ] }, { "cell_type": "markdown", "id": "059537d9", "metadata": {}, "source": [ "Since the input_fn declares that incoming requests are JSON-encoded, we need to use a JSON serializer to encode the request data into a JSON string. Likewise, because the response content type is JSON, we need a JSON deserializer to parse the response." ] }, { "cell_type": "code", "execution_count": null, "id": "29e82f90", "metadata": {}, "outputs": [], "source": [ "predictor.serializer = sagemaker.serializers.JSONSerializer()\n", "predictor.deserializer = sagemaker.deserializers.JSONDeserializer()" ] }, { "cell_type": "markdown", "id": "d006ea03", "metadata": {}, "source": [ "Now the SageMaker endpoint is invoked with a list of sentences to get predictions." ] }, { "cell_type": "code", "execution_count": null, "id": "325a87f8", "metadata": {}, "outputs": [], "source": [ "%%time\n", "result = predictor.predict(\n", " [\n", " \"Never allow the same bug to bite you twice.\",\n", " \"The best part of Amazon SageMaker is that it makes machine learning easy.\",\n", " ]\n", ")\n", "print(result)" ] }, { "cell_type": "code", "execution_count": null, "id": "4a12410d", "metadata": {}, "outputs": [], "source": [ "%%time\n", "result = predictor.predict(\n", " [\n", " \"The company HuggingFace is based in New York City\",\n", " \"HuggingFace's headquarters are situated in Manhattan\",\n", " ]\n", ")\n", "print(result)" ] }, { "cell_type": "markdown", "id": "a72dfd16", "metadata": {}, "source": [ "## Benchmarking your endpoint\n", "\n", "The following cells create a load test for your endpoint. 
You first define some helper functions: `inference_latency` runs the endpoint request and collects client-side latency and any errors, and `random_sentence` builds random sentences to be sent to the endpoint. " ] }, { "cell_type": "code", "execution_count": null, "id": "088d0e75", "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import datetime\n", "import math\n", "import time\n", "import boto3\n", "import matplotlib.pyplot as plt\n", "from joblib import Parallel, delayed\n", "from tqdm import tqdm\n", "import random" ] }, { "cell_type": "code", "execution_count": null, "id": "038d9953", "metadata": {}, "outputs": [], "source": [ "def inference_latency(model,*inputs):\n", " \"\"\"\n", " inference_latency is a simple function that returns the latency of a single model inference.\n", "\n", " Parameters:\n", " model: a callable model, e.g. predictor.predict or a torch model loaded using torch.jit.load\n", " inputs: model() args\n", "\n", " Returns:\n", " latency in seconds\n", " \"\"\"\n", " error = False\n", " start = time.time()\n", " try:\n", " results = model(*inputs)\n", " except Exception:\n", " error = True\n", " results = []\n", " return {'latency':time.time() - start, 'error': error, 'result': results}" ] }, { "cell_type": "code", "execution_count": null, "id": "d6b200ac", "metadata": {}, "outputs": [], "source": [ "def random_sentence():\n", " \n", " s_nouns = [\"A dude\", \"My mom\", \"The king\", \"Some guy\", \"A cat with rabies\", \"A sloth\", \"Your homie\", \"This cool guy my gardener met yesterday\", \"Superman\"]\n", " p_nouns = [\"These dudes\", \"Both of my moms\", \"All the kings of the world\", \"Some guys\", \"All of a cattery's cats\", \"The multitude of sloths living under your bed\", \"Your homies\", \"Like, these, like, all these people\", \"Supermen\"]\n", " s_verbs = [\"eats\", \"kicks\", \"gives\", \"treats\", \"meets with\", \"creates\", \"hacks\", \"configures\", \"spies on\", \"retards\", \"meows on\", \"flees from\", \"tries to automate\", \"explodes\"]\n", " p_verbs 
= [\"eat\", \"kick\", \"give\", \"treat\", \"meet with\", \"create\", \"hack\", \"configure\", \"spy on\", \"retard\", \"meow on\", \"flee from\", \"try to automate\", \"explode\"]\n", " infinitives = [\"to make a pie.\", \"for no apparent reason.\", \"because the sky is green.\", \"for a disease.\", \"to be able to make toast explode.\", \"to know more about archeology.\"]\n", " \n", " # Note: parenthesize carefully -- a bare 'or' here would short-circuit and drop the infinitive\n", " return random.choice(s_nouns) + ' ' + random.choice(s_verbs) + ' ' + random.choice(s_nouns).lower() + ' ' + random.choice(infinitives)\n", "\n", "print([random_sentence(), random_sentence()])" ] }, { "cell_type": "markdown", "id": "e2945dde", "metadata": {}, "source": [ "The following cell creates `number_of_clients` concurrent threads to run `number_of_runs` requests. Once completed, a `boto3` CloudWatch client will query for the server side latency metrics for comparison. " ] }, { "cell_type": "code", "execution_count": null, "id": "69c047e3", "metadata": {}, "outputs": [], "source": [ "# Defining Auxiliary variables\n", "number_of_clients = 2\n", "number_of_runs = 1000\n", "t = tqdm(range(number_of_runs),position=0, leave=True)\n", "\n", "# Starting parallel clients\n", "cw_start = datetime.datetime.utcnow()\n", "\n", "results = Parallel(n_jobs=number_of_clients,prefer=\"threads\")(delayed(inference_latency)(predictor.predict,[random_sentence(), random_sentence()]) for mod in t)\n", "avg_throughput = t.total/t.format_dict['elapsed']\n", "\n", "cw_end = datetime.datetime.utcnow() \n", "\n", "# Computing metrics and print\n", "latencies = [res['latency'] for res in results]\n", "errors = [res['error'] for res in results]\n", "error_p = sum(errors)/len(errors) *100\n", "p50 = np.quantile(latencies[-1000:],0.50) * 1000\n", "p90 = np.quantile(latencies[-1000:],0.90) * 1000\n", "p95 = np.quantile(latencies[-1000:],0.95) * 1000\n", "\n", "print(f'Avg Throughput: {avg_throughput:.1f}\\n')\n", "print(f'50th Percentile Latency:{p50:.1f} ms')\n", 
"print(f'90th Percentile Latency:{p90:.1f} ms')\n", "print(f'95th Percentile Latency:{p95:.1f} ms\\n')\n", "print(f'Errors percentage: {error_p:.1f} %\\n')\n", "\n", "# Querying CloudWatch\n", "print('Getting Cloudwatch:')\n", "cloudwatch = boto3.client('cloudwatch')\n", "statistics=['SampleCount', 'Average', 'Minimum', 'Maximum']\n", "extended=['p50', 'p90', 'p95', 'p100']\n", "\n", "# Give 5 minute buffer to end\n", "cw_end += datetime.timedelta(minutes=5)\n", "\n", "# Period must be 1, 5, 10, 30, or multiple of 60\n", "# Calculate closest multiple of 60 to the total elapsed time\n", "factor = math.ceil((cw_end - cw_start).total_seconds() / 60)\n", "period = factor * 60\n", "print('Time elapsed: {} seconds'.format((cw_end - cw_start).total_seconds()))\n", "print('Using period of {} seconds\\n'.format(period))\n", "\n", "cloudwatch_ready = False\n", "# Keep polling CloudWatch metrics until datapoints are available\n", "while not cloudwatch_ready:\n", " time.sleep(30)\n", " print('Waiting 30 seconds ...')\n", " # ModelLatency is reported in the default units of microseconds\n", " model_latency_metrics = cloudwatch.get_metric_statistics(MetricName='ModelLatency',\n", " Dimensions=[{'Name': 'EndpointName',\n", " 'Value': predictor.endpoint_name},\n", " {'Name': 'VariantName',\n", " 'Value': \"AllTraffic\"}],\n", " Namespace=\"AWS/SageMaker\",\n", " StartTime=cw_start,\n", " EndTime=cw_end,\n", " Period=period,\n", " Statistics=statistics,\n", " ExtendedStatistics=extended\n", " )\n", " # SampleCount should be 1000\n", " if len(model_latency_metrics['Datapoints']) > 0:\n", " print('{} latency datapoints ready'.format(model_latency_metrics['Datapoints'][0]['SampleCount']))\n", " # Convert microseconds to milliseconds\n", " side_avg = model_latency_metrics['Datapoints'][0]['Average'] / 1000\n", " side_p50 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p50'] / 1000\n", " side_p90 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p90'] / 1000\n", " side_p95 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p95'] / 1000\n", " side_p100 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p100'] / 1000\n", " \n", " print(f'50th Percentile Latency:{side_p50:.1f} ms')\n", " print(f'90th Percentile Latency:{side_p90:.1f} ms')\n", " print(f'95th Percentile Latency:{side_p95:.1f} ms\\n')\n", "\n", " cloudwatch_ready = True\n", "\n", "\n" ] }, { "cell_type": "markdown", "id": "9035e681", "metadata": {}, "source": [ "### Cleanup\n", "Endpoints should be deleted when no longer in use, to avoid costs." ] }, { "cell_type": "code", "execution_count": null, "id": "1284ef3f", "metadata": {}, "outputs": [], "source": [ "predictor.delete_endpoint()" ] }, { "cell_type": "code", "execution_count": null, "id": "5af53873", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3.8.9 64-bit", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.9" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 5 } ```
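The notebook's load-test cell measures client-side latency and summarizes it with percentiles. Below is a minimal, self-contained sketch of that measurement logic; `fake_predict` is a hypothetical stub standing in for `predictor.predict`, so the example runs without a live endpoint:

```python
import time
import numpy as np

def inference_latency(model, *inputs):
    """Time a single call to model(*inputs); record any error instead of raising."""
    error = False
    start = time.time()
    try:
        result = model(*inputs)
    except Exception:
        error = True
        result = []
    return {"latency": time.time() - start, "error": error, "result": result}

# Hypothetical stand-in for predictor.predict, so the sketch runs locally.
def fake_predict(sentences):
    time.sleep(0.001)  # simulate ~1 ms of endpoint latency
    return [[0.1, 0.9] for _ in sentences]

# Run 100 sequential requests and summarize client-side latency.
results = [inference_latency(fake_predict, ["a", "b"]) for _ in range(100)]
latencies = [r["latency"] for r in results]
error_rate = 100.0 * sum(r["error"] for r in results) / len(results)

# np.quantile(x, 0.90) is the 90th percentile; multiply by 1000 to report ms.
p50, p90, p95 = (np.quantile(latencies, q) * 1000 for q in (0.50, 0.90, 0.95))
print(f"p50={p50:.1f} ms  p90={p90:.1f} ms  p95={p95:.1f} ms  errors={error_rate:.1f}%")
```

Swapping `fake_predict` for a real `predictor.predict` (and running the loop from several parallel clients, as the notebook does with `joblib`) yields the client-side numbers that are then compared against CloudWatch's server-side `ModelLatency` metric.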
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">{ "cells": [ { "cell_type": "markdown", "id": "4674f667", "metadata": {}, "source": [ "# Deploy a pretrained PyTorch BERT model from HuggingFace on Amazon SageMaker with Neuron container" ] }, { "cell_type": "markdown", "id": "b3e39838", "metadata": {}, "source": [ "## Overview" ] }, { "cell_type": "markdown", "id": "a92c454f", "metadata": {}, "source": [ "In this tutotial we will deploy on SageMaker a pretraine BERT Base model from HuggingFace Transformers, using the [AWS Deep Learning Containers](https://github.com/aws/deep-learning-containers). We will use the same model as shown in the [Neuron Tutorial \"PyTorch - HuggingFace Pretrained BERT Tutorial\"](../../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html#). We will compile the model and build a custom AWS Deep Learning Container, to include the HuggingFace Transformers Library. \n", "\n", "This Jupyter Notebook should run on a ml.c5.4xlarge SageMaker Notebook instance. You can set up your SageMaker Notebook instance by following the [Get Started with Amazon SageMaker Notebook Instances](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-console.html) documentation. \n", "\n", "&gt; We recommend increasing the size of the base root volume of you SM notebook instance, to accomodate the models and containers built locally. A root volume of 10Gb should suffice. 
\n" ] }, { "cell_type": "markdown", "id": "37445ad2", "metadata": {}, "source": [ "## Install Dependencies:" ] }, { "cell_type": "markdown", "id": "3ecd765f", "metadata": {}, "source": [ "This tutorial requires the following pip packages:" ] }, { "cell_type": "markdown", "id": "cae3092c", "metadata": {}, "source": [ "- torch-neuron\n", "- neuron-cc[tensorflow]\n", "- transformers" ] }, { "cell_type": "code", "execution_count": null, "id": "066c3731", "metadata": {}, "outputs": [], "source": [ "%env TOKENIZERS_PARALLELISM=True #Supresses tokenizer warnings making errors easier to detect\n", "!pip install --upgrade --no-cache-dir torch-neuron neuron-cc[tensorflow] torchvision torch --extra-index-url=https://pip.repos.neuron.amazonaws.com\n", "!pip install --upgrade --no-cache-dir 'transformers==4.6.0'" ] }, { "cell_type": "markdown", "id": "a4796d3a", "metadata": {}, "source": [ "## Compile the model into an AWS Neuron optimized TorchScript" ] }, { "cell_type": "code", "execution_count": null, "id": "6fe85f8e", "metadata": {}, "outputs": [], "source": [ "import torch\n", "import torch_neuron\n", "\n", "from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig" ] }, { "cell_type": "code", "execution_count": null, "id": "0c5c253a", "metadata": {}, "outputs": [], "source": [ "# Build tokenizer and model\n", "tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased-finetuned-mrpc\")\n", "model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased-finetuned-mrpc\", return_dict=False)\n", "\n", "# Setup some example inputs\n", "sequence_0 = \"The company HuggingFace is based in New York City\"\n", "sequence_1 = \"Apples are especially bad for your health\"\n", "sequence_2 = \"HuggingFace's headquarters are situated in Manhattan\"\n", "\n", "max_length=128\n", "paraphrase = tokenizer.encode_plus(sequence_0, sequence_2, max_length=max_length, padding='max_length', truncation=True, return_tensors=\"pt\")\n", "not_paraphrase = 
tokenizer.encode_plus(sequence_0, sequence_1, max_length=max_length, padding='max_length', truncation=True, return_tensors=\"pt\")\n", "\n", "# Run the original PyTorch model on compilation exaple\n", "paraphrase_classification_logits = model(**paraphrase)[0]\n", "\n", "# Convert example inputs to a format that is compatible with TorchScript tracing\n", "example_inputs_paraphrase = paraphrase['input_ids'], paraphrase['attention_mask'], paraphrase['token_type_ids']\n", "example_inputs_not_paraphrase = not_paraphrase['input_ids'], not_paraphrase['attention_mask'], not_paraphrase['token_type_ids']" ] }, { "cell_type": "code", "execution_count": null, "id": "44255ada", "metadata": {}, "outputs": [], "source": [ "%%time\n", "# Run torch.neuron.trace to generate a TorchScript that is optimized by AWS Neuron\n", "# This step may need 3-5 min\n", "model_neuron = torch.neuron.trace(model, example_inputs_paraphrase, verbose=1, compiler_workdir='./compilation_artifacts')" ] }, { "cell_type": "markdown", "id": "5c4752ac", "metadata": {}, "source": [ "You may inspect **model_neuron.graph** to see which part is running on CPU versus running on the accelerator. All native **aten** operators in the graph will be running on CPU." ] }, { "cell_type": "code", "execution_count": null, "id": "dc00889e", "metadata": {}, "outputs": [], "source": [ "# See which part is running on CPU versus running on the accelerator.\n", "print(model_neuron.graph)" ] }, { "cell_type": "markdown", "id": "775fb30d", "metadata": {}, "source": [ "Save the compiled model, so it can be packaged and sent to S3." 
] }, { "cell_type": "code", "execution_count": null, "id": "027c4f53", "metadata": {}, "outputs": [], "source": [ "# Save the TorchScript for later use\n", "model_neuron.save('neuron_compiled_model.pt')" ] }, { "cell_type": "markdown", "id": "d362c579", "metadata": {}, "source": [ "### Package the pre-trained model and upload it to S3\n", "\n", "To make the model available for the SageMaker deployment, you will TAR the serialized graph and upload it to the default Amazon S3 bucket for your SageMaker session. " ] }, { "cell_type": "code", "execution_count": null, "id": "29c7f7b4", "metadata": {}, "outputs": [], "source": [ "# Now you'll create a model.tar.gz file to be used by SageMaker endpoint\n", "!tar -czvf model.tar.gz neuron_compiled_model.pt" ] }, { "cell_type": "code", "execution_count": null, "id": "1beadca0", "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import time\n", "from sagemaker.utils import name_from_base\n", "import sagemaker" ] }, { "cell_type": "code", "execution_count": null, "id": "06ad87d4", "metadata": {}, "outputs": [], "source": [ "# upload model to S3\n", "role = sagemaker.get_execution_role()\n", "sess=sagemaker.Session()\n", "region=sess.boto_region_name\n", "bucket=sess.default_bucket()\n", "sm_client=boto3.client('sagemaker')" ] }, { "cell_type": "code", "execution_count": null, "id": "5205ec55", "metadata": {}, "outputs": [], "source": [ "model_key = '{}/model/model.tar.gz'.format('inf1_compiled_model')\n", "model_path = 's3://{}/{}'.format(bucket, model_key)\n", "boto3.resource('s3').Bucket(bucket).upload_file('model.tar.gz', model_key)\n", "print(\"Uploaded model to S3:\")\n", "print(model_path)" ] }, { "cell_type": "markdown", "id": "e8b425d4", "metadata": {}, "source": [ "## Build and Push the container" ] }, { "cell_type": "markdown", "id": "430e6ed2", "metadata": {}, "source": [ "The following shell code shows how to build the container image using docker build and push the container image to ECR using docker 
push.\n", "The Dockerfile in this example is available in the ***container*** folder.\n", "Here's an example of the Dockerfile:\n", "\n", "```Dockerfile\n", "FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference-neuron:1.7.1-neuron-py36-ubuntu18.04\n", "\n", "# Install packages \n", "RUN pip install \"transformers==4.7.0\"\n", "```" ] }, { "cell_type": "code", "execution_count": null, "id": "3970025d", "metadata": {}, "outputs": [], "source": [ "!cat container/Dockerfile" ] }, { "cell_type": "markdown", "id": "62f78b0f", "metadata": {}, "source": [ "Before running the next cell, make sure your SageMaker IAM role has access to ECR. If not, you can attache the role `AmazonEC2ContainerRegistryPowerUser` to your IAM role ARN, which allows you to upload image layers to ECR. \n", "\n", "It takes 5 minutes to build docker images and upload image to ECR" ] }, { "cell_type": "code", "execution_count": null, "id": "ecd51acf", "metadata": {}, "outputs": [], "source": [ "%%sh\n", "\n", "# The name of our algorithm\n", "algorithm_name=neuron-py36-inference\n", "\n", "cd container\n", "\n", "account=$(aws sts get-caller-identity --query Account --output text)\n", "\n", "# Get the region defined in the current configuration (default to us-west-2 if none defined)\n", "region=$(aws configure get region)\n", "region=${region:-us-west-2}\n", "\n", "fullname=\"${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest\"\n", "\n", "# If the repository doesn't exist in ECR, create it.\n", "\n", "aws ecr describe-repositories --repository-names \"${algorithm_name}\" &gt; /dev/null 2&gt;&amp;1\n", "\n", "if [ $? 
-ne 0 ]\n", "then\n", " aws ecr create-repository --repository-name \"${algorithm_name}\" &gt; /dev/null\n", "fi\n", "\n", "# Get the login command from ECR in order to pull down the SageMaker PyTorch image\n", "aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com\n", "# Build the docker image locally with the image name and then push it to ECR\n", "# with the full name.\n", "docker build -t ${algorithm_name} . --build-arg REGION=${region}\n", "docker tag ${algorithm_name} ${fullname}\n", "\n", "# Get the login command from ECR and execute it directly\n", "aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com\n", "docker push ${fullname}" ] }, { "cell_type": "markdown", "id": "e4f6bbda", "metadata": {}, "source": [ "## Deploy Container and run inference based on the pretrained model" ] }, { "cell_type": "markdown", "id": "64e65e31", "metadata": {}, "source": [ "To deploy a pretrained PyTorch model, you'll need to use the PyTorch estimator object to create a PyTorchModel object and set a different entry_point.\n", "\n", "You'll use the PyTorchModel object to deploy a PyTorchPredictor. This creates a SageMaker Endpoint -- a hosted prediction service that we can use to perform inference." 
] }, { "cell_type": "code", "execution_count": null, "id": "f343d3b1", "metadata": {}, "outputs": [], "source": [ "import sys\n", "\n", "!{sys.executable} -m pip install Transformers" ] }, { "cell_type": "code", "execution_count": null, "id": "2bd73b77", "metadata": {}, "outputs": [], "source": [ "import os\n", "import boto3\n", "import sagemaker\n", "\n", "role = sagemaker.get_execution_role()\n", "sess = sagemaker.Session()\n", "\n", "bucket = sess.default_bucket()\n", "prefix = \"inf1_compiled_model/model\"\n", "\n", "# Get container name in ECR\n", "client=boto3.client('sts')\n", "account=client.get_caller_identity()['Account']\n", "\n", "my_session=boto3.session.Session()\n", "region=my_session.region_name\n", "\n", "algorithm_name=\"neuron-py36-inference\"\n", "ecr_image='{}.dkr.ecr.{}.amazonaws.com/{}:latest'.format(account, region, algorithm_name)\n", "print(ecr_image)" ] }, { "cell_type": "markdown", "id": "9298f2a7", "metadata": {}, "source": [ "An implementation of *model_fn* is required for inference script.\n", "We are going to implement our own **model_fn** and **predict_fn** for Hugging Face Bert, and use default implementations of **input_fn** and **output_fn** defined in sagemaker-pytorch-containers.\n", "\n", "In this example, the inference script is put in ***code*** folder. 
Run the next cell to see it:\n" ] }, { "cell_type": "code", "execution_count": null, "id": "cfea75b6", "metadata": {}, "outputs": [], "source": [ "!pygmentize code/inference.py" ] }, { "cell_type": "markdown", "id": "1b31a7b8", "metadata": {}, "source": [ "Path of compiled pretrained model in S3:" ] }, { "cell_type": "code", "execution_count": null, "id": "61f3556e", "metadata": {}, "outputs": [], "source": [ "key = os.path.join(prefix, \"model.tar.gz\")\n", "pretrained_model_data = \"s3://{}/{}\".format(bucket, key)\n", "print(pretrained_model_data)" ] }, { "cell_type": "markdown", "id": "e7557a5f", "metadata": {}, "source": [ "The model object is defined by using the SageMaker Python SDK's PyTorchModel and pass in the model from the estimator and the entry_point. The endpoint's entry point for inference is defined by model_fn as seen in the previous code block that prints out **inference.py**. The model_fn function will load the model and required tokenizer.\n", "\n", "Note, **image_uri** must be user's own ECR images." ] }, { "cell_type": "code", "execution_count": null, "id": "0bd99768", "metadata": {}, "outputs": [], "source": [ "from sagemaker.pytorch.model import PyTorchModel\n", "\n", "pytorch_model = PyTorchModel(\n", " model_data=pretrained_model_data,\n", " role=role,\n", " source_dir=\"code\",\n", " framework_version=\"1.7.1\",\n", " entry_point=\"inference.py\",\n", " image_uri=ecr_image\n", ")\n", "\n", "# Let SageMaker know that we've already compiled the model via neuron-cc\n", "pytorch_model._is_compiled_model = True" ] }, { "cell_type": "markdown", "id": "67439fe7", "metadata": {}, "source": [ "The arguments to the deploy function allow us to set the number and type of instances that will be used for the Endpoint.\n", "\n", "Here you will deploy the model to a single **ml.inf1.2xlarge** instance.\n", "It may take 6-10 min to deploy." 
] }, { "cell_type": "code", "execution_count": null, "id": "d771fc7c", "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "predictor = pytorch_model.deploy(initial_instance_count=1, instance_type=\"ml.inf1.2xlarge\")" ] }, { "cell_type": "code", "execution_count": null, "id": "ab6342f3", "metadata": {}, "outputs": [], "source": [ "print(predictor.endpoint_name)" ] }, { "cell_type": "markdown", "id": "059537d9", "metadata": {}, "source": [ "Since in the input_fn we declared that the incoming requests are json-encoded, we need to use a json serializer, to encode the incoming data into a json string. Also, we declared the return content type to be json string, we Need to use a json deserializer to parse the response." ] }, { "cell_type": "code", "execution_count": null, "id": "29e82f90", "metadata": {}, "outputs": [], "source": [ "predictor.serializer = sagemaker.serializers.JSONSerializer()\n", "predictor.deserializer = sagemaker.deserializers.JSONDeserializer()" ] }, { "cell_type": "markdown", "id": "d006ea03", "metadata": {}, "source": [ "Using a list of sentences, now SageMaker endpoint is invoked to get predictions." ] }, { "cell_type": "code", "execution_count": null, "id": "325a87f8", "metadata": {}, "outputs": [], "source": [ "%%time\n", "result = predictor.predict(\n", " [\n", " \"Never allow the same bug to bite you twice.\",\n", " \"The best part of Amazon SageMaker is that it makes machine learning easy.\",\n", " ]\n", ")\n", "print(result)" ] }, { "cell_type": "code", "execution_count": null, "id": "4a12410d", "metadata": {}, "outputs": [], "source": [ "%%time\n", "result = predictor.predict(\n", " [\n", " \"The company HuggingFace is based in New York City\",\n", " \"HuggingFace's headquarters are situated in Manhattan\",\n", " ]\n", ")\n", "print(result)" ] }, { "cell_type": "markdown", "id": "a72dfd16", "metadata": {}, "source": [ "## Benchmarking your endpoint\n", "\n", "The following cells create a load test for your endpoint. 
You first define some helper functions: `inference_latency` runs the endpoint request, collects cliend side latency and any errors, `random_sentence` builds random to be sent to the endpoint. " ] }, { "cell_type": "code", "execution_count": null, "id": "088d0e75", "metadata": {}, "outputs": [], "source": [ "import numpy as np \n", "import datetime\n", "import math\n", "import time\n", "import boto3 \n", "import matplotlib.pyplot as plt\n", "from joblib import Parallel, delayed\n", "import numpy as np\n", "from tqdm import tqdm\n", "import random" ] }, { "cell_type": "code", "execution_count": null, "id": "038d9953", "metadata": {}, "outputs": [], "source": [ "def inference_latency(model,*inputs):\n", " \"\"\"\n", " infetence_time is a simple method to return the latency of a model inference.\n", "\n", " Parameters:\n", " model: torch model onbject loaded using torch.jit.load\n", " inputs: model() args\n", "\n", " Returns:\n", " latency in seconds\n", " \"\"\"\n", " error = False\n", " start = time.time()\n", " try:\n", " results = model(*inputs)\n", " except:\n", " error = True\n", " results = []\n", " return {'latency':time.time() - start, 'error': error, 'result': results}" ] }, { "cell_type": "code", "execution_count": null, "id": "d6b200ac", "metadata": {}, "outputs": [], "source": [ "def random_sentence():\n", " \n", " s_nouns = [\"A dude\", \"My mom\", \"The king\", \"Some guy\", \"A cat with rabies\", \"A sloth\", \"Your homie\", \"This cool guy my gardener met yesterday\", \"Superman\"]\n", " p_nouns = [\"These dudes\", \"Both of my moms\", \"All the kings of the world\", \"Some guys\", \"All of a cattery's cats\", \"The multitude of sloths living under your bed\", \"Your homies\", \"Like, these, like, all these people\", \"Supermen\"]\n", " s_verbs = [\"eats\", \"kicks\", \"gives\", \"treats\", \"meets with\", \"creates\", \"hacks\", \"configures\", \"spies on\", \"retards\", \"meows on\", \"flees from\", \"tries to automate\", \"explodes\"]\n", " p_verbs 
= [\"eat\", \"kick\", \"give\", \"treat\", \"meet with\", \"create\", \"hack\", \"configure\", \"spy on\", \"retard\", \"meow on\", \"flee from\", \"try to automate\", \"explode\"]\n", " infinitives = [\"to make a pie.\", \"for no apparent reason.\", \"because the sky is green.\", \"for a disease.\", \"to be able to make toast explode.\", \"to know more about archeology.\"]\n", " \n", " return (random.choice(s_nouns) + ' ' + random.choice(s_verbs) + ' ' + random.choice(s_nouns).lower() or random.choice(p_nouns).lower() + ' ' + random.choice(infinitives))\n", "\n", "print([random_sentence(), random_sentence()])" ] }, { "cell_type": "markdown", "id": "e2945dde", "metadata": {}, "source": [ "The following cell creates `number_of_clients` concurrent threads to run `number_of_runs` requests. Once completed, a `boto3` CloudWatch client will query for the server side latency metrics for comparison. " ] }, { "cell_type": "code", "execution_count": null, "id": "69c047e3", "metadata": {}, "outputs": [], "source": [ "# Defining Auxiliary variables\n", "number_of_clients = 2\n", "number_of_runs = 1000\n", "t = tqdm(range(number_of_runs),position=0, leave=True)\n", "\n", "# Starting parallel clients\n", "cw_start = datetime.datetime.utcnow()\n", "\n", "results = Parallel(n_jobs=number_of_clients,prefer=\"threads\")(delayed(inference_latency)(predictor.predict,[random_sentence(), random_sentence()]) for mod in t)\n", "avg_throughput = t.total/t.format_dict['elapsed']\n", "\n", "cw_end = datetime.datetime.utcnow() \n", "\n", "# Computing metrics and print\n", "latencies = [res['latency'] for res in results]\n", "errors = [res['error'] for res in results]\n", "error_p = sum(errors)/len(errors) *100\n", "p50 = np.quantile(latencies[-1000:],0.50) * 1000\n", "p90 = np.quantile(latencies[-1000:],0.95) * 1000\n", "p95 = np.quantile(latencies[-1000:],0.99) * 1000\n", "\n", "print(f'Avg Throughput: :{avg_throughput:.1f}\\n')\n", "print(f'50th Percentile Latency:{p50:.1f} ms')\n", 
"print(f'90th Percentile Latency:{p90:.1f} ms')\n", "print(f'95th Percentile Latency:{p95:.1f} ms\\n')\n", "print(f'Errors percentage: {error_p:.1f} %\\n')\n", "\n", "# Querying CloudWatch\n", "print('Getting Cloudwatch:')\n", "cloudwatch = boto3.client('cloudwatch')\n", "statistics=['SampleCount', 'Average', 'Minimum', 'Maximum']\n", "extended=['p50', 'p90', 'p95', 'p100']\n", "\n", "# Give 5 minute buffer to end\n", "cw_end += datetime.timedelta(minutes=5)\n", "\n", "# Period must be 1, 5, 10, 30, or multiple of 60\n", "# Calculate closest multiple of 60 to the total elapsed time\n", "factor = math.ceil((cw_end - cw_start).total_seconds() / 60)\n", "period = factor * 60\n", "print('Time elapsed: {} seconds'.format((cw_end - cw_start).total_seconds()))\n", "print('Using period of {} seconds\\n'.format(period))\n", "\n", "cloudwatch_ready = False\n", "# Keep polling CloudWatch metrics until datapoints are available\n", "while not cloudwatch_ready:\n", " time.sleep(30)\n", " print('Waiting 30 seconds ...')\n", " # Must use default units of microseconds\n", " model_latency_metrics = cloudwatch.get_metric_statistics(MetricName='ModelLatency',\n", " Dimensions=[{'Name': 'EndpointName',\n", " 'Value': predictor.endpoint_name},\n", " {'Name': 'VariantName',\n", " 'Value': \"AllTraffic\"}],\n", " Namespace=\"AWS/SageMaker\",\n", " StartTime=cw_start,\n", " EndTime=cw_end,\n", " Period=period,\n", " Statistics=statistics,\n", " ExtendedStatistics=extended\n", " )\n", " # Should be 1000\n", " if len(model_latency_metrics['Datapoints']) &gt; 0:\n", " print('{} latency datapoints ready'.format(model_latency_metrics['Datapoints'][0]['SampleCount']))\n", " side_avg = model_latency_metrics['Datapoints'][0]['Average'] / number_of_runs\n", " side_p50 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p50'] / number_of_runs\n", " side_p90 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p90'] / number_of_runs\n", " side_p95 = 
model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p95'] / 1000.0\n", " side_p100 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p100'] / 1000.0\n", " \n", " print(f'50th Percentile Latency:{side_p50:.1f} ms')\n", " print(f'90th Percentile Latency:{side_p90:.1f} ms')\n", " print(f'95th Percentile Latency:{side_p95:.1f} ms\\n')\n", "\n", " cloudwatch_ready = True\n", "\n", "\n" ] }, { "cell_type": "markdown", "id": "9035e681", "metadata": {}, "source": [ "### Cleanup\n", "Endpoints should be deleted when no longer in use, to avoid unnecessary costs." ] }, { "cell_type": "code", "execution_count": null, "id": "1284ef3f", "metadata": {}, "outputs": [], "source": [ "predictor.delete_endpoint(predictor.endpoint)" ] }, { "cell_type": "code", "execution_count": null, "id": "5af53873", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3.8.9 64-bit", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.9" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 5 } </pre></body></html>
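The client-side percentile computation in the benchmark above can be sketched without NumPy. The following is an illustrative, dependency-free version (the latency values are hypothetical, not measurements from the tutorial); it mirrors `numpy.quantile`'s default linear interpolation:

```python
def quantile(sorted_vals, q):
    """Linear-interpolation quantile over an already-sorted list,
    matching numpy.quantile's default behavior."""
    idx = q * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

# Hypothetical client-side latencies in seconds
latencies = sorted([0.010, 0.012, 0.011, 0.013, 0.050,
                    0.014, 0.012, 0.011, 0.013, 0.015])
p50 = quantile(latencies, 0.50) * 1000  # convert seconds to milliseconds
p90 = quantile(latencies, 0.90) * 1000
p95 = quantile(latencies, 0.95) * 1000
print(f'p50={p50:.1f} ms, p90={p90:.1f} ms, p95={p95:.1f} ms')
```

Note how a single slow outlier (50 ms here) barely moves p50 but dominates p95, which is why the notebook reports tail percentiles rather than only the average.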
2023-09-29T20:55:25.430Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb.txt
``` { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Compiling and Deploying HuggingFace Pretrained BERT\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Introduction\n", "\n", "In this tutorial we will compile and deploy the BERT-base version of HuggingFace 🤗 Transformers BERT for Inferentia. The full list of HuggingFace's pretrained BERT models can be found in the BERT section of https://huggingface.co/transformers/pretrained_models.html. \n", "\n", "This Jupyter notebook should be run on an inf1.6xlarge instance or larger. Only the compilation part of this tutorial requires an inf1.6xlarge; the inference itself does not. For simplicity we will run the whole tutorial on an inf1.6xlarge, but in a real-life scenario the compilation should be done on a compute instance and the deployment on an inf1 instance to save costs.\n", "\n", "Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [PyTorch Installation Guide](../../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the \"Kernel -> Change Kernel\" option at the top of this Jupyter notebook page." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install Dependencies:\n", "This tutorial requires the following pip packages:\n", "\n", "- `torch-neuron`\n", "- `neuron-cc[tensorflow]`\n", "- `transformers`\n", "\n", "Most of these packages will be installed when configuring your environment using the Neuron PyTorch setup guide; the remaining dependencies are installed below." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Suppress tokenizer warnings, making errors easier to detect\n", "%env TOKENIZERS_PARALLELISM=True\n", "!pip install --upgrade \"transformers==4.6.0\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Compile the model into an AWS Neuron optimized TorchScript\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tensorflow # to work around a protobuf version conflict issue\n", "import torch\n", "import torch.neuron\n", "from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig\n", "import transformers\n", "import os\n", "import warnings\n", "\n", "# Set up NeuronCore groups for inf1.6xlarge with 16 cores\n", "num_cores = 16 # This value should be 4 on inf1.xlarge and inf1.2xlarge\n", "os.environ['NEURON_RT_NUM_CORES'] = str(num_cores)\n", "\n", "# Build the tokenizer and model\n", "tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased-finetuned-mrpc\")\n", "model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased-finetuned-mrpc\", return_dict=False)\n", "\n", "# Set up some example inputs\n", "sequence_0 = \"The company HuggingFace is based in New York City\"\n", "sequence_1 = \"Apples are especially bad for your health\"\n", "sequence_2 = \"HuggingFace's headquarters are situated in Manhattan\"\n", "\n", "max_length=128\n", "paraphrase = tokenizer.encode_plus(sequence_0, sequence_2, max_length=max_length, padding='max_length', truncation=True, return_tensors=\"pt\")\n", "not_paraphrase = tokenizer.encode_plus(sequence_0, sequence_1, max_length=max_length, padding='max_length', truncation=True, return_tensors=\"pt\")\n", "\n", "# Run the original PyTorch model on the compilation example\n", "paraphrase_classification_logits = model(**paraphrase)[0]\n", "\n", "# Convert example inputs to a format that is compatible with TorchScript tracing\n", 
"example_inputs_paraphrase = paraphrase['input_ids'], paraphrase['attention_mask'], paraphrase['token_type_ids']\n", "example_inputs_not_paraphrase = not_paraphrase['input_ids'], not_paraphrase['attention_mask'], not_paraphrase['token_type_ids']\n", "\n", "# Run torch.neuron.trace to generate a TorchScript that is optimized by AWS Neuron\n", "model_neuron = torch.neuron.trace(model, example_inputs_paraphrase)\n", "\n", "# Verify the TorchScript works on both example inputs\n", "paraphrase_classification_logits_neuron = model_neuron(*example_inputs_paraphrase)\n", "not_paraphrase_classification_logits_neuron = model_neuron(*example_inputs_not_paraphrase)\n", "\n", "# Save the TorchScript for later use\n", "model_neuron.save('bert_neuron.pt')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You may inspect `model_neuron.graph` to see which part is running on CPU versus running on the accelerator. All native `aten` operators in the graph will be running on CPU." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(model_neuron.graph)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### Deploy the AWS Neuron optimized TorchScript\n", "\n", "To deploy the AWS Neuron optimized TorchScript, you may choose to load the saved TorchScript from disk and skip the slow compilation." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load TorchScript back\n", "model_neuron = torch.jit.load('bert_neuron.pt')\n", "# Verify the TorchScript works on both example inputs\n", "paraphrase_classification_logits_neuron = model_neuron(*example_inputs_paraphrase)\n", "not_paraphrase_classification_logits_neuron = model_neuron(*example_inputs_not_paraphrase)\n", "classes = ['not paraphrase', 'paraphrase']\n", "paraphrase_prediction = paraphrase_classification_logits_neuron[0][0].argmax().item()\n", "not_paraphrase_prediction = not_paraphrase_classification_logits_neuron[0][0].argmax().item()\n", "print('BERT says that \"{}\" and \"{}\" are {}'.format(sequence_0, sequence_2, classes[paraphrase_prediction]))\n", "print('BERT says that \"{}\" and \"{}\" are {}'.format(sequence_0, sequence_1, classes[not_paraphrase_prediction]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's run the model in parallel across multiple NeuronCores" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_input_with_padding(batch, batch_size, max_length):\n", " ## Reformulate the batch into three batch tensors - default batch size batches the outer dimension\n", " encoded = batch['encoded']\n", " inputs = torch.squeeze(encoded['input_ids'], 1)\n", " attention = torch.squeeze(encoded['attention_mask'], 1)\n", " token_type = torch.squeeze(encoded['token_type_ids'], 1)\n", " quality = list(map(int, batch['quality']))\n", "\n", " if inputs.size()[0] != batch_size:\n", " print(\"Input size = {} - padding\".format(inputs.size()))\n", " remainder = batch_size - inputs.size()[0]\n", " zeros = torch.zeros( [remainder, max_length], dtype=torch.long )\n", " inputs = torch.cat( [inputs, zeros] )\n", " attention = torch.cat( [attention, zeros] )\n", " token_type = torch.cat( [token_type, zeros] )\n", "\n", " assert(inputs.size()[0] == batch_size and inputs.size()[1] == max_length)\n", " 
assert(attention.size()[0] == batch_size and attention.size()[1] == max_length)\n", " assert(token_type.size()[0] == batch_size and token_type.size()[1] == max_length)\n", "\n", " return (inputs, attention, token_type), quality\n", "\n", "def count(output, quality):\n", " assert output.size(0) >= len(quality)\n", " correct_count = 0\n", " count = len(quality)\n", " \n", " batch_predictions = [ row.argmax().item() for row in output ]\n", "\n", " for a, b in zip(batch_predictions, quality):\n", " if int(a)==int(b):\n", " correct_count += 1\n", "\n", " return correct_count, count" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Data parallel inference\n", "In the cell below, we use the data-parallel approach for inference: we load multiple copies of the model, all running in parallel, with each copy loaded onto a single NeuronCore. In this implementation we launch 16 models, thereby utilizing all 16 cores on an inf1.6xlarge.\n", "\n", "> Note: If you decrease `num_cores` in the cells above, restart the notebook and run the `!sudo rmmod neuron; sudo modprobe neuron` step in cell 2 to clear the NeuronCores.\n", "\n", "Since we can run more than one model concurrently, the throughput of the system goes up. To achieve the maximum gain in throughput, we need to feed the models efficiently so as to keep them busy at all times. In the setup below, this is done with a producer-consumer model: we maintain a common Python queue shared across all the models, which enables feeding data to them continuously." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from parallel import NeuronSimpleDataParallel\n", "from bert_benchmark_utils import BertTestDataset, BertResults\n", "import time\n", "import functools\n", "\n", "max_length = 128\n", "num_cores = 16\n", "batch_size = 1\n", "\n", "tsv_file=\"glue_mrpc_dev.tsv\"\n", "\n", "data_set = BertTestDataset( tsv_file=tsv_file, tokenizer=tokenizer, max_length=max_length )\n", "data_loader = torch.utils.data.DataLoader(data_set, batch_size=batch_size, shuffle=True)\n", "\n", "#Result aggregation class (code in bert_benchmark_utils.py)\n", "results = BertResults(batch_size, num_cores)\n", "def result_handler(output, result_id, start, end, input_dict):\n", " correct_count, inference_count = count(output[0], input_dict.pop(result_id))\n", " elapsed = end - start\n", " results.add_result(correct_count, inference_count, [elapsed], [end], [start])\n", "\n", "parallel_neuron_model = NeuronSimpleDataParallel('bert_neuron.pt', num_cores)\n", "\n", "#Starting the inference threads\n", "parallel_neuron_model.start_continuous_inference()\n", "\n", "# Warm up the cores\n", "z = torch.zeros( [batch_size, max_length], dtype=torch.long )\n", "batch = (z, z, z)\n", "for _ in range(num_cores*4):\n", " parallel_neuron_model.infer(batch, -1, None)\n", " \n", "input_dict = {}\n", "input_id = 0\n", "for _ in range(30):\n", " for batch in data_loader:\n", " batch, quality = get_input_with_padding(batch, batch_size, max_length)\n", " input_dict[input_id] = quality\n", " callback_fn = functools.partial(result_handler, input_dict=input_dict)\n", " parallel_neuron_model.infer(batch, input_id, callback_fn)\n", " input_id+=1\n", "\n", "# Stop inference \n", "parallel_neuron_model.stop()\n", "\n", "\n", "with open(\"benchmark.txt\", \"w\") as f:\n", " results.report(f, window_size=1)\n", "\n", "with open(\"benchmark.txt\", \"r\") as f:\n", " for line in f:\n", " print(line)" ] }, { "cell_type": "markdown", 
"metadata": {}, "source": [ "Now recompile with a larger batch size of six sentence pairs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "batch_size = 6\n", "\n", "example_inputs_paraphrase = (\n", " torch.cat([paraphrase['input_ids']] * batch_size,0), \n", " torch.cat([paraphrase['attention_mask']] * batch_size,0), \n", " torch.cat([paraphrase['token_type_ids']] * batch_size,0)\n", ")\n", "\n", "# Run torch.neuron.trace to generate a TorchScript that is optimized by AWS Neuron\n", "model_neuron_batch = torch.neuron.trace(model, example_inputs_paraphrase)\n", "\n", "## Save the batched model\n", "model_neuron_batch.save('bert_neuron_b{}.pt'.format(batch_size))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rerun inference with batch 6" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from parallel import NeuronSimpleDataParallel\n", "from bert_benchmark_utils import BertTestDataset, BertResults\n", "import time\n", "import functools\n", "\n", "max_length = 128\n", "num_cores = 16\n", "batch_size = 6\n", "\n", "data_set = BertTestDataset( tsv_file=tsv_file, tokenizer=tokenizer, max_length=max_length )\n", "data_loader = torch.utils.data.DataLoader(data_set, batch_size=batch_size, shuffle=True)\n", "\n", "#Result aggregation class (code in bert_benchmark_utils.py)\n", "results = BertResults(batch_size, num_cores)\n", "def result_handler(output, result_id, start, end, input_dict):\n", " correct_count, inference_count = count(output[0], input_dict.pop(result_id))\n", " elapsed = end - start\n", " results.add_result(correct_count, inference_count, [elapsed], [end], [start])\n", "\n", "parallel_neuron_model = NeuronSimpleDataParallel('bert_neuron_b{}.pt'.format(batch_size), num_cores)\n", "\n", "#Starting the inference threads\n", "parallel_neuron_model.start_continuous_inference()\n", "\n", "# Adding to the 
input queue to warm all cores\n", "z = torch.zeros( [batch_size, max_length], dtype=torch.long )\n", "batch = (z, z, z)\n", "for _ in range(num_cores*4):\n", " parallel_neuron_model.infer(batch, -1, None)\n", "\n", "input_dict = {}\n", "input_id = 0\n", "for _ in range(30):\n", " for batch in data_loader:\n", " batch, quality = get_input_with_padding(batch, batch_size, max_length)\n", " input_dict[input_id] = quality\n", " callback_fn = functools.partial(result_handler, input_dict=input_dict)\n", " parallel_neuron_model.infer(batch, input_id, callback_fn)\n", " input_id+=1\n", "\n", "# Stop inference \n", "parallel_neuron_model.stop()\n", "\n", "with open(\"benchmark_b{}.txt\".format(batch_size), \"w\") as f:\n", " results.report(f, window_size=1)\n", "\n", "with open(\"benchmark_b{}.txt\".format(batch_size), \"r\") as f:\n", " for line in f:\n", " print(line)\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.8.9 64-bit", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.9" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 4 } ```
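The `get_input_with_padding` helper in the tutorial zero-pads the final, possibly smaller, DataLoader batch up to the compiled batch size, since a Neuron-traced model expects a fixed batch dimension. Below is a framework-free sketch of that padding logic, with plain Python lists standing in for tensors; the token ids are hypothetical, not output of the tutorial's tokenizer:

```python
def pad_batch(rows, batch_size, max_length):
    """Zero-pad a partial batch of token-id rows up to a fixed compiled
    batch size; returns the padded batch and the number of real rows."""
    real = len(rows)
    assert real <= batch_size
    padded = [list(r) for r in rows]
    # Append all-zero rows until the batch dimension matches batch_size
    padded += [[0] * max_length for _ in range(batch_size - real)]
    return padded, real

# A tail batch of 2 rows padded out to a compiled batch size of 4
batch, n_real = pad_batch([[101, 2023, 102], [101, 7592, 102]],
                          batch_size=4, max_length=3)
print(n_real, len(batch))  # prints "2 4"
```

The count of real rows is what lets downstream accuracy code ignore the padding rows, just as the tutorial passes `quality` alongside the padded tensors.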
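The `count` helper in the BERT tutorial reduces to a plain argmax-accuracy tally that ignores padding rows beyond the label list. A minimal pure-Python sketch of that reduction (the two-class logits below are hypothetical, not model output):

```python
def count_correct(batch_logits, labels):
    """Tally argmax predictions against integer labels. Rows beyond
    len(labels) are padding and are ignored, mirroring the tutorial's
    count() helper."""
    preds = [row.index(max(row)) for row in batch_logits]
    correct = sum(1 for p, l in zip(preds, labels) if p == l)
    return correct, len(labels)

# Hypothetical logits for a batch of 3 real rows plus one padding row
logits = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.0, 0.0]]
correct, total = count_correct(logits, [1, 0, 0])
print(correct, total)  # prints "2 3"
```

Because `zip` stops at the shorter sequence, the all-zero padding row contributes nothing to either count.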
2023-09-29T20:55:25.441Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/pytorch/resnet50.ipynb.txt
``` { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# ResNet50 model for Inferentia\n", "\n", "\n", "## Introduction:\n", "\n", "In this tutorial we will compile and deploy a ResNet50 model for inference on Inferentia. \n", "\n", "This Jupyter notebook should run on an inf1.6xlarge instance. Only the inference part of this tutorial requires an inf1 instance; the compilation stage does not. For simplicity we will run this tutorial on an inf1.6xlarge, but in real-life scenarios the compilation should be done on a compute instance and the deployment on an inf1 instance to save costs. \n", "\n", "In this tutorial we provide three main sections:\n", "\n", "1. Compile the ResNet50 model and infer with a batch size of 1\n", "\n", "2. Run the same compiled model on multiple NeuronCores using `torch.neuron.DataParallel` and dynamic batching\n", "\n", "3. Compile the ResNet50 model with a batch size of 5 and run it on multiple NeuronCores using `torch.neuron.DataParallel` for optimal performance on Inferentia\n", "\n", "Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [PyTorch Installation Guide](../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the \"Kernel -> Change Kernel\" option at the top of this Jupyter notebook page." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install Dependencies:\n", "This tutorial requires the following pip packages:\n", "\n", "- `torch>=1.8`\n", "- `torch-neuron`\n", "- `torchvision`\n", "- `neuron-cc[tensorflow]`\n", "\n", "These will be installed by default when configuring your environment using the Neuron PyTorch setup guide." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Compile model for Neuron\n", "\n", "The following step will compile the ResNet50 model for Inferentia. This will take a few minutes. 
At the end of script execution, the compiled model is saved as `resnet50_neuron.pt` in your local directory." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "from torchvision import models, transforms, datasets\n", "import torch_neuron\n", "\n", "# Create an example input for compilation\n", "image = torch.zeros([1, 3, 224, 224], dtype=torch.float32)\n", "\n", "# Load a pretrained ResNet50 model\n", "model = models.resnet50(pretrained=True)\n", "\n", "# Tell the model we are using it for evaluation (not training)\n", "model.eval()\n", "\n", "# Analyze the model - this will show operator support and operator count\n", "torch.neuron.analyze_model(model, example_inputs=[image])\n", "\n", "# Compile the model using torch.neuron.trace to create a Neuron model\n", "# that is optimized for the Inferentia hardware\n", "model_neuron = torch.neuron.trace(model, example_inputs=[image])\n", "\n", "# The output of the compilation step will report the percentage of operators that \n", "# are compiled to Neuron, for example:\n", "#\n", "# INFO:Neuron:The neuron partitioner created 1 sub-graphs\n", "# INFO:Neuron:Neuron successfully compiled 1 sub-graphs, Total fused subgraphs = 1, Percent of model sub-graphs successfully compiled = 100.0%\n", "# \n", "# We will also be warned if there are operators that are not placed on the Inferentia hardware\n", "\n", "# Save the compiled model\n", "model_neuron.save(\"resnet50_neuron.pt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run inference on Inferentia\n", "\n", "We can use the compiled Neuron model to run inference on Inferentia.\n", "\n", "In the following example, we preprocess a sample image for inference using the CPU model and Neuron model. 
We compare the predicted labels from the CPU model and Neuron model to verify that they are the same.\n", "\n", "Important: Do not perform inference with a Neuron traced model on a non-Neuron supported instance, as the results will not be calculated properly." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Define a preprocessing function\n", "\n", "We define a basic image preprocessing function that loads a sample image and labels, normalizes and batches the image, and transforms the image into a tensor for inference using the compiled Neuron model." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "import os\n", "from urllib import request\n", "\n", "# Create an image directory containing a sample image of a small kitten\n", "os.makedirs(\"./torch_neuron_test/images\", exist_ok=True)\n", "request.urlretrieve(\"https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg\",\n", " \"./torch_neuron_test/images/kitten_small.jpg\")\n", "\n", "# Fetch labels to output the top classifications\n", "request.urlretrieve(\"https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json\",\"imagenet_class_index.json\")\n", "idx2label = []\n", "\n", "# Read the labels and create a list to hold them for classification \n", "with open(\"imagenet_class_index.json\", \"r\") as read_file:\n", " class_idx = json.load(read_file)\n", " idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "def preprocess(batch_size=1, num_neuron_cores=1):\n", " # Define a normalization function using the ImageNet mean and standard deviation\n", " normalize = transforms.Normalize(\n", " mean=[0.485, 0.456, 0.406],\n", " std=[0.229, 0.224, 0.225])\n", "\n", " # Resize the sample image to [1, 3, 224, 224], normalize it, and turn it 
into a tensor\n", " eval_dataset = datasets.ImageFolder(\n", " os.path.dirname(\"./torch_neuron_test/\"),\n", " transforms.Compose([\n", " transforms.Resize([224, 224]),\n", " transforms.ToTensor(),\n", " normalize,\n", " ])\n", " )\n", " image, _ = eval_dataset[0]\n", " image = torch.tensor(image.numpy()[np.newaxis, ...])\n", "\n", " # Create a \"batched\" image with enough images to go on each of the available NeuronCores\n", " # batch_size is the per-core batch size\n", " # num_neuron_cores is the number of NeuronCores being used\n", " batch_image = image\n", " for i in range(batch_size * num_neuron_cores - 1):\n", " batch_image = torch.cat([batch_image, image], 0)\n", " \n", " return batch_image" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run inference using the Neuron model\n", "\n", "We import the necessary python modules, load the torch-neuron compiled model, and run inference on Inferentia. \n", "\n", "By default, the Neuron model will run on a single NeuronCore. In the next section, we will see how to run the Neuron model on multiple NeuronCores to fully saturate our hardware for optimal performance on Inferentia. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "from torchvision import models, transforms, datasets\n", "import torch_neuron\n", "\n", "# Get a sample image\n", "image = preprocess()\n", "\n", "# Run inference using the CPU model\n", "output_cpu = model(image)\n", "\n", "# Load the compiled Neuron model\n", "model_neuron = torch.jit.load('resnet50_neuron.pt')\n", "\n", "# Run inference using the Neuron model\n", "output_neuron = model_neuron(image)\n", "\n", "# Verify that the CPU and Neuron predictions are the same by comparing\n", "# the top-5 results\n", "top5_cpu = output_cpu[0].sort()[1][-5:]\n", "top5_neuron = output_neuron[0].sort()[1][-5:]\n", "\n", "# Look up and print the top-5 labels\n", "top5_labels_cpu = [idx2label[idx] for idx in top5_cpu]\n", "top5_labels_neuron = [idx2label[idx] for idx in top5_neuron]\n", "print(\"CPU top-5 labels: {}\".format(top5_labels_cpu))\n", "print(\"Neuron top-5 labels: {}\".format(top5_labels_neuron))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run Inference using torch.neuron.DataParallel\n", "\n", "To fully leverage the Inferentia hardware we want to use all available NeuronCores. An inf1.xlarge and inf1.2xlarge have four NeuronCores, an inf1.6xlarge has 16 NeuronCores, and an inf1.24xlarge has 64 NeuronCores. For maximum performance on Inferentia hardware, we can use `torch.neuron.DataParallel` to utilize all available NeuronCores.\n", "\n", "`torch.neuron.DataParallel` implements data parallelism at the module level by duplicating the Neuron model on all available NeuronCores and distributing data across the different cores for parallelized inference.\n", "\n", "In the following section, we will run inference using the `torch.neuron.DataParallel` module to fully saturate the Inferentia hardware. 
We benchmark the model to collect throughput and latency statistics.\n", "\n", "Note: `torch.neuron.DataParallel` is new with Neuron 1.16.0. Please ensure you are using the latest Neuron package to run the following sections. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Define a benchmarking function\n", "\n", "We create a function that handles benchmarking the Neuron model to collect throughput and latency metrics. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from time import time\n", "\n", "def benchmark(model, image):\n", " print('Input image shape is {}'.format(list(image.shape)))\n", " \n", " # The first inference loads the model so exclude it from timing \n", " results = model(image)\n", " \n", " # Collect throughput and latency metrics\n", " latency = []\n", " throughput = []\n", "\n", " # Run inference for 100 iterations and calculate metrics\n", " num_infers = 100\n", " for _ in range(num_infers):\n", " delta_start = time()\n", " results = model(image)\n", " delta = time() - delta_start\n", " latency.append(delta)\n", " throughput.append(image.size(0)/delta)\n", " \n", " # Calculate and print the model throughput (images/second) and latency (milliseconds)\n", " print(\"Avg. Throughput: {:.0f} images/sec, Max Throughput: {:.0f} images/sec\".format(np.mean(throughput), np.max(throughput)))\n", " print(\"Latency P50: {:.0f} ms\".format(np.percentile(latency, 50)*1000.0))\n", " print(\"Latency P90: {:.0f} ms\".format(np.percentile(latency, 90)*1000.0))\n", " print(\"Latency P95: {:.0f} ms\".format(np.percentile(latency, 95)*1000.0))\n", " print(\"Latency P99: {:.0f} ms\\n\".format(np.percentile(latency, 99)*1000.0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run Inference using torch.neuron.DataParallel\n", "\n", "We create the `torch.neuron.DataParallel` module using the compiled Neuron model, get a sample image, and benchmark the parallelized model on Neuron." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create a torch.neuron.DataParallel module using the compiled Neuron model\n", "# By default, torch.neuron.DataParallel will use four cores on an inf1.xlarge\n", "# or inf1.2xlarge, 16 cores on an inf1.6xlarge, and 64 cores on an inf1.24xlarge\n", "model_neuron_parallel = torch.neuron.DataParallel(model_neuron)\n", "\n", "# Get sample image with batch size=1 per NeuronCore\n", "batch_size = 1\n", "\n", "# For an inf1.xlarge or inf1.2xlarge, set num_neuron_cores = 4\n", "num_neuron_cores = 16\n", "\n", "image = preprocess(batch_size=batch_size, num_neuron_cores=num_neuron_cores)\n", "\n", "# Benchmark the model\n", "benchmark(model_neuron_parallel, image)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run inference with dynamic batch sizes\n", "\n", "Batch size has a direct impact on model performance. The Inferentia chip is optimized to run with small batch sizes. This means that a Neuron compiled model can outperform a GPU model, even when running single-digit batch sizes.\n", "\n", "As a general best practice, we recommend optimizing your model's throughput by compiling the model with a small batch size and gradually increasing it to find the peak throughput on Inferentia.\n", "\n", "Dynamic batching is a feature that allows you to use tensor batch sizes that the Neuron model was not originally compiled against. This is necessary because the underlying Inferentia hardware will always execute inferences with the batch size used during compilation. Fixed batch size execution allows tuning the input batch size for optimal performance. For example, batch size 1 may be best suited for an ultra-low latency on-demand inference application, while batch size > 1 can be used to maximize throughput for offline inferencing. 
Dynamic batching is implemented by slicing large input tensors into chunks that match the batch size used during the `torch.neuron.trace` compilation call. \n", "\n", "The `torch.neuron.DataParallel` class automatically enables dynamic batching on eligible models. This allows us to run inference in applications that have inputs with a variable batch size without needing to recompile the model.\n", "\n", "In the following example, we use the same `torch.neuron.DataParallel` module to run inference using several different batch sizes. Notice that latency increases consistently as the batch size increases. Throughput increases as well, up until a certain point where the input size becomes too large to be efficient." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# using the same DataParallel model_neuron_parallel model, we can run\n", "# inference on inputs with a variable batch size without recompiling\n", "batch_sizes = [2, 3, 4, 5, 6, 7]\n", "for batch_size in batch_sizes:\n", " print('Batch size: {}'.format(batch_size))\n", " image = preprocess(batch_size=batch_size, num_neuron_cores=num_neuron_cores)\n", " \n", " # Benchmark the model for each input batch size\n", " benchmark(model_neuron_parallel, image)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Compile and Infer with different batch sizes on multiple NeuronCores\n", "\n", "Dynamic batching using small batch sizes can result in sub-optimal throughput because it involves slicing tensors into chunks and iteratively sending data to the hardware. Using a larger batch size at compilation time can use the Inferentia hardware more efficiently in order to maximize throughput. 
You can test the tradeoff between individual request latency and total throughput by fine-tuning the input batch size.\n", "\n", "In the following example, we recompile our model using a batch size of 5 and run the model using `torch.neuron.DataParallel` to fully saturate our Inferentia hardware for optimal performance." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create an input with batch size 5 for compilation\n", "batch_size = 5\n", "image = torch.zeros([batch_size, 3, 224, 224], dtype=torch.float32)\n", "\n", "# Recompile the ResNet50 model for inference with batch size 5\n", "model_neuron = torch.neuron.trace(model, example_inputs=[image])\n", "\n", "# Export to saved model\n", "model_neuron.save(\"resnet50_neuron_b{}.pt\".format(batch_size))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run inference with batch size of 5 using the Neuron model compiled for a batch size of 5." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "batch_size = 5\n", "\n", "# Load compiled Neuron model\n", "model_neuron = torch.jit.load(\"resnet50_neuron_b{}.pt\".format(batch_size))\n", "\n", "# Create DataParallel model\n", "model_neuron_parallel = torch.neuron.DataParallel(model_neuron)\n", "\n", "# Get sample image with batch size=5\n", "image = preprocess(batch_size=batch_size, num_neuron_cores=num_neuron_cores)\n", "\n", "# Benchmark the model\n", "benchmark(model_neuron_parallel, image)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can experiment with different batch size values to see what gives the best overall throughput on Inferentia." 
] } ], "metadata": { "kernelspec": { "display_name": "Python 3.8.9 64-bit", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.9" }, "vscode": { "interpreter": { "hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6" } } }, "nbformat": 4, "nbformat_minor": 4 } ```
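The dynamic batching described in the notebook above slices an oversized input batch into chunks matching the compiled batch size, runs each chunk, and concatenates the results. A minimal sketch of that idea in plain Python — an illustration only; the real slicing happens inside `torch.neuron.DataParallel`, and the `dynamic_batch_run` and `toy_model` names here are hypothetical helpers, not torch-neuron APIs:

```python
# Illustrative sketch only: torch.neuron.DataParallel performs this slicing
# internally. The helper names below are hypothetical, not torch-neuron APIs.

def dynamic_batch_run(model, inputs, compiled_batch_size):
    """Run a fixed-batch-size `model` over `inputs` of arbitrary length by
    slicing the inputs into chunks of `compiled_batch_size` and concatenating
    the per-chunk results. Assumes len(inputs) is a multiple of the
    compiled batch size."""
    outputs = []
    for start in range(0, len(inputs), compiled_batch_size):
        chunk = inputs[start:start + compiled_batch_size]
        outputs.extend(model(chunk))
    return outputs

# A toy stand-in for a compiled model that only accepts its compilation batch size
def toy_model(batch):
    assert len(batch) == 5, "this 'compiled' model only accepts batch size 5"
    return [x * 2 for x in batch]

# A batch of 10 inputs is served by a model compiled for batch size 5
results = dynamic_batch_run(toy_model, list(range(10)), compiled_batch_size=5)
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

This also illustrates why very small compiled batch sizes can hurt throughput: the work is dispatched chunk by chunk, so a larger compiled batch size means fewer round trips to the hardware.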
2023-09-29T20:55:25.501Z
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-hardware/neuron-core-v2.rst.txt
``` .. _neuroncores-v2-arch: NeuronCore-v2 Architecture -------------------------- NeuronCore-v2 is the second generation of the NeuronCore engine, powering the Trainium NeuronDevices. Each NeuronCore-v2 is a fully-independent heterogeneous compute-unit, with 4 main engines (Tensor/Vector/Scalar/GPSIMD Engines) and on-chip software-managed SRAM memory (compiler managed, for maximum data locality and optimized data prefetch). .. image:: /images/nc-v2.png Just like in NeuronCore-v1, the ScalarEngine is optimized for scalar-computations, in which every element of the output is dependent on one element of the input. The ScalarEngine is highly parallelized, and can perform 1,600 floating point operations per cycle (3x speedup relative to NeuronCore-v1). The NeuronCore-v2 ScalarEngine can handle various data-types, including cFP8, FP16, BF16, TF32, FP32, INT8, INT16 and INT32. The VectorEngine is optimized for vector-computations, in which every element of the output is dependent on multiple input elements. Examples include ‘axpy’ operations (Z=aX+Y), Layer Normalization, Pooling operations, and many more. The VectorEngine is also highly parallelized, and can perform 2,500 floating point operations per cycle (10x speedup vs NeuronCore-v1). The NeuronCore-v2 VectorEngine can handle various data-types, including cFP8, FP16, BF16, TF32, FP32, INT8, INT16 and INT32. The TensorEngine is based on a power-optimized systolic-array which is highly optimized for tensor computations (e.g. GEMM, CONV, Reshape, Transpose), and supports mixed-precision computations (cFP8 / FP16 / BF16 / TF32 / FP32 / INT8 inputs, FP32 / INT32 outputs). Each NeuronCore-v2 TensorEngine delivers over 100 TFLOPS of FP16/BF16 tensor computations (a 6x speedup from NeuronCore-v1). NeuronCore-v2 also introduces a new engine, called GPSIMD-Engine. 
This engine consists of 8 fully programmable 512-bit wide general-purpose processors, which can execute straight-line C-code, and have direct access to the other NeuronCore-v2 engines, as well as the embedded on-chip SRAM memory. With these cores, customers can implement custom-operators and execute them directly on the NeuronCore engines. NeuronCore-v2 also adds support for control-flow, dynamic-shapes, and programmable :ref:`rounding mode <neuron-rounding-modes>` (RNE & Stochastic-rounding). ```
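The stochastic-rounding mode mentioned above can be illustrated in software. This is a sketch on non-negative scalars for clarity only: NeuronCore-v2 applies the idea to floating-point mantissa bits in hardware, and `stochastic_round` is a hypothetical helper for illustration, not a Neuron API.

```python
import math
import random

def stochastic_round(x):
    # Round a non-negative value to an integer, rounding up with probability
    # equal to its fractional part, so the expected result equals x. This
    # keeps small increments from being systematically lost when accumulating
    # many low-precision updates (the motivation for the hardware mode).
    frac, whole = math.modf(x)
    return int(whole) + (1 if random.random() < frac else 0)

random.seed(0)
mean = sum(stochastic_round(2.25) for _ in range(10_000)) / 10_000
print(round(mean, 2))  # close to 2.25 on average; round-to-nearest-even always gives 2
```

Because the result is unbiased in expectation, stochastic rounding is particularly useful for gradient accumulation in low-precision training, whereas round-to-nearest-even (RNE) can silently drop increments smaller than half a unit in the last place.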
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/tutorials/tutorial-torchserve.rst.txt
```
.. _pytorch-tutorials-torchserve:

BERT TorchServe Tutorial
========================

.. contents:: Table of Contents
   :local:
   :depth: 2

Overview
--------

This tutorial demonstrates the use of `TorchServe <https://pytorch.org/serve>`_ with Neuron, the SDK for Amazon Inf1 instances. By the end of this tutorial, you will understand how TorchServe can be used to serve a model backed by EC2 Inf1 instances. We will use a pretrained BERT-Base model to determine if one sentence is a paraphrase of another.

.. _torchserve-compile:

Run the tutorial
----------------

Open a terminal, log into your remote instance, and activate a PyTorch virtual environment (see the :ref:`PyTorch Installation Guide <install-neuron-pytorch>`).

To complete this tutorial, you will need a compiled BERT model. If you have already completed the HuggingFace Pretrained BERT tutorial :ref:`[html] </src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb>` :pytorch-neuron-src:`[notebook] <bert_tutorial/tutorial_pretrained_bert.ipynb>`, then you already have the necessary file. Otherwise, you can set up your environment as shown below and then run :download:`trace_bert_neuron.py </src/examples/pytorch/torchserve/trace_bert_neuron.py>` to obtain a traced BERT model.

You should now have a compiled ``bert_neuron_b6.pt`` file, which is required going forward.

Open a shell on the instance you prepared earlier and create a new directory named ``torchserve``. Copy your compiled model from the previous tutorial into this new directory.

.. code:: bash

   cd torchserve
   ls

::

   bert_neuron_b6.pt

Prepare a new Python virtual environment with the necessary Neuron and TorchServe components. Use a virtual environment to keep (most of) the various tutorial components isolated from the rest of the system in a controlled way.

.. code:: bash

   pip install transformers==4.20.1 torchserve==0.7.0 torch-model-archiver==0.7.0 captum==0.6.0

Install the system requirements for TorchServe.

.. tab-set::

   .. tab-item:: Amazon Linux 2 DLAMI Base

      .. code-block:: bash

         sudo yum install jq java-11-amazon-corretto-headless
         sudo alternatives --config java
         sudo alternatives --config javac

   .. tab-item:: Ubuntu 20 DLAMI Base

      .. code-block:: bash

         sudo apt install openjdk-11-jdk

.. code:: bash

   java -version

::

   openjdk version "11.0.17" 2022-10-18
   OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu218.04)
   OpenJDK 64-Bit Server VM (build 11.0.17+8-post-Ubuntu-1ubuntu218.04, mixed mode, sharing)

.. code:: bash

   javac -version

::

   javac 11.0.17

Verify that TorchServe is now available.

.. code:: bash

   torchserve --version

::

   TorchServe Version is 0.7.0

.. _torchserve-setup:

Setup TorchServe
----------------

During this tutorial you will need to download a few files onto your instance. The simplest way to accomplish this is to paste the download links provided above each file into a ``wget`` command. (We don't provide the links directly because they are subject to change.) For example, right-click and copy the download link for ``config.json`` shown below.

.. literalinclude:: /src/examples/pytorch/torchserve/config.json
   :language: JSON
   :caption: :download:`config.json </src/examples/pytorch/torchserve/config.json>`

Now execute the following in your shell:

.. code:: bash

   wget <paste link here>
   ls

::

   bert_neuron_b6.pt  config.json

Download the `custom handler script <https://pytorch.org/serve/custom_service.html>`_ that will eventually respond to inference requests.

.. literalinclude:: /src/examples/pytorch/torchserve/handler_bert.py
   :language: python
   :caption: :download:`handler_bert.py </src/examples/pytorch/torchserve/handler_bert.py>`
   :linenos:

Next, we need to associate the handler script with the compiled model using ``torch-model-archiver``. Run the following commands in your terminal:
.. code:: bash

   mkdir model_store
   MAX_LENGTH=$(jq '.max_length' config.json)
   BATCH_SIZE=$(jq '.batch_size' config.json)
   MODEL_NAME=bert-max_length$MAX_LENGTH-batch_size$BATCH_SIZE
   torch-model-archiver --model-name "$MODEL_NAME" --version 1.0 --serialized-file ./bert_neuron_b6.pt --handler "./handler_bert.py" --extra-files "./config.json" --export-path model_store

.. note::

   If you modify your model or a dependency, you will need to rerun the archiver command with the ``-f`` flag appended to update the archive.

The result of the above will be a ``mar`` file inside the ``model_store`` directory.

.. code:: bash

   $ ls model_store

::

   bert-max_length128-batch_size6.mar

This file is essentially an archive associated with a fixed version of your model along with its dependencies (e.g. the handler code).

.. note::

   The version specified in the ``torch-model-archiver`` command can be appended to REST API requests to access a specific version of your model. For example, if your model was hosted locally on port 8080 and named "bert", the latest version of your model would be available at ``http://localhost:8080/predictions/bert``, while version 1.0 would be accessible at ``http://localhost:8080/predictions/bert/1.0``. We will see how to perform inference using this API in Step 6.

Create a `custom config <https://pytorch.org/serve/configuration.html>`_ file to set some parameters. This file will be used to configure the server at launch when we run ``torchserve --start``.

.. literalinclude:: /src/examples/pytorch/torchserve/torchserve.config
   :language: properties
   :caption: :download:`torchserve.config </src/examples/pytorch/torchserve/torchserve.config>`

.. note::

   This will cause TorchServe to bind on all interfaces. For security in real-world applications, you’ll probably want to use port 8443 and `enable SSL <https://pytorch.org/serve/configuration.html#enable-ssl>`_.

.. _torchserve-run:

Run TorchServe
--------------

It's time to start the server.
Typically we'd want to launch this in a separate console, but for this demo we’ll just redirect output to a file.

.. code:: bash

   torchserve --start --ncs --model-store model_store --ts-config torchserve.config 2>&1 >torchserve.log

Verify that the server seems to have started okay.

.. code:: bash

   curl http://127.0.0.1:8080/ping

::

   {
     "status": "Healthy"
   }

.. note::

   If you get an error when trying to ping the server, you may have tried before the server was fully launched. Check ``torchserve.log`` for details.

Use the Management API to instruct TorchServe to load our model.

.. code:: bash

   $ MAX_BATCH_DELAY=5000 # ms timeout before a partial batch is processed
   $ INITIAL_WORKERS=4 # number of models that will be loaded at launch
   $ curl -X POST "http://localhost:8081/models?url=$MODEL_NAME.mar&batch_size=$BATCH_SIZE&initial_workers=$INITIAL_WORKERS&max_batch_delay=$MAX_BATCH_DELAY"

::

   {
     "status": "Model \"bert-max_length128-batch_size6\" Version: 1.0 registered with 4 initial workers"
   }

.. note::

   Any additional attempts to configure the model after the initial curl request will cause the server to return a 409 error. You’ll need to stop/start/configure the server to realize any changes.

The ``MAX_BATCH_DELAY`` is a timeout value that determines how long to wait before processing a partial batch. This is why the handler code needs to check the batch dimension and potentially add padding. TorchServe will instantiate the number of model handlers indicated by ``INITIAL_WORKERS``, so this value controls how many models we will load onto Inferentia in parallel. This tutorial was performed on an inf1.xlarge instance (one Inferentia chip), so there are four NeuronCores available. If you want to control worker scaling more dynamically, `see the docs <https://pytorch.org/serve/management_api.html#scale-workers>`_.

.. warning::

   If you attempt to load more models than there are NeuronCores available, one of two things will occur: either the extra models will fit in device memory but performance will suffer, or you will encounter an error on your initial inference. You shouldn't set ``INITIAL_WORKERS`` above the number of NeuronCores. However, you may want to use fewer cores if you are using the :ref:`neuroncore-pipeline` feature.

It looks like everything is running successfully at this point, so it's time for an inference. Create the ``infer_bert.py`` file below on your instance.

.. literalinclude:: /src/examples/pytorch/torchserve/infer_bert.py
   :language: python
   :caption: :download:`infer_bert.py </src/examples/pytorch/torchserve/infer_bert.py>`
   :linenos:

This script will send a ``batch_size`` number of requests to our model. In this example, we are using a model that estimates the probability that one sentence is a paraphrase of another. The script sends positive examples in the first half of the batch and negative examples in the second half.

Execute the script in your terminal.

.. code:: bash

   $ python infer_bert.py

::

   1 ['paraphrase']
   3 ['not paraphrase']
   4 ['not paraphrase']
   0 ['paraphrase']
   5 ['not paraphrase']
   2 ['paraphrase']

We can see that the first three threads (0, 1, 2) all report ``paraphrase``, as expected. If we instead modify the script to send an incomplete batch and then wait for the timeout to expire, the excess padding results will be discarded.

.. _torchserve-benchmark:

Benchmark TorchServe
--------------------

We've seen how to perform a single batched inference, but how many inferences can we process per second? A separate upcoming tutorial will document performance tuning to maximize throughput. In the meantime, we can still perform a simple naïve stress test. The code below will spawn 64 worker threads, with each thread repeatedly sending a full batch of data to process. A separate thread will periodically print throughput and latency measurements.
.. literalinclude:: /src/examples/pytorch/torchserve/benchmark_bert.py
   :language: python
   :caption: :download:`benchmark_bert.py </src/examples/pytorch/torchserve/benchmark_bert.py>`
   :linenos:

Run the benchmarking script.

.. code:: bash

   python benchmark_bert.py

::

   pid 28523: current throughput 0.0, latency p50=0.000 p90=0.000
   pid 28523: current throughput 617.7, latency p50=0.092 p90=0.156
   pid 28523: current throughput 697.3, latency p50=0.082 p90=0.154
   pid 28523: current throughput 702.8, latency p50=0.081 p90=0.149
   pid 28523: current throughput 699.1, latency p50=0.085 p90=0.147
   pid 28523: current throughput 703.8, latency p50=0.083 p90=0.148
   pid 28523: current throughput 699.3, latency p50=0.083 p90=0.148
   ...

**Congratulations!** By now you should have successfully served a batched model over TorchServe.

You can now shut down TorchServe.

.. code:: bash

   torchserve --stop
```
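The batch-padding behavior the tutorial relies on (the handler pads a partial batch up to the compiled batch size, and the excess results are discarded) can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the actual ``handler_bert.py`` logic; ``pad_batch``, ``run_inference``, and the repeat-last-item padding policy are hypothetical.

```python
def pad_batch(requests, batch_size):
    """Pad a partial batch up to the fixed batch size the model was compiled for.

    Returns the padded batch plus the count of real requests, so the caller
    knows how many results to keep. (Illustrative policy: repeat the last item.)
    """
    if not requests:
        raise ValueError("empty batch")
    if len(requests) > batch_size:
        raise ValueError("batch larger than compiled batch size")
    num_real = len(requests)
    padded = requests + [requests[-1]] * (batch_size - num_real)
    return padded, num_real

def run_inference(batch):
    # Stand-in for the compiled Neuron model call, which always sees a full batch.
    return [f"result-for-{item}" for item in batch]

# A partial batch of 4 requests against a model compiled for batch size 6
# (as with bert_neuron_b6.pt): pad to 6, infer, then keep only the first 4 results.
padded, num_real = pad_batch(["s1", "s2", "s3", "s4"], batch_size=6)
results = run_inference(padded)[:num_real]  # discard the padding results
print(len(padded), len(results))  # -> 6 4
```

This is why ``MAX_BATCH_DELAY`` matters: when the timeout expires before a full batch arrives, the handler must still present a full-size tensor to the compiled model and then drop the padded tail.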